The machine psychology of cooperation: can GPT models operationalise prompts for altruism, cooperation, competitiveness, and selfishness in economic games?
Article
Phelps, S. and Russell, Y. 2025. The machine psychology of cooperation: can GPT models operationalise prompts for altruism, cooperation, competitiveness, and selfishness in economic games? Journal of Physics: Complexity. 6 (1). https://doi.org/10.1088/2632-072X/ada711
Type | Article |
---|---|
Title | The machine psychology of cooperation: can GPT models operationalise prompts for altruism, cooperation, competitiveness, and selfishness in economic games? |
Authors | Phelps, S. and Russell, Y. |
Abstract | Large language models (LLMs) are capable of playing the "human" role as participants in economic games. We investigated the capability of GPT-3.5 to play the one-shot Dictator Game (DG) and the repeated Prisoner's Dilemma game (PDG), the latter of which introduced tit-for-tat scenarios. In particular, we investigated whether the LLMs could be prompted to play in accordance with five roles ("personalities") assigned prior to game play. The five "simulacra" were: (1) cooperative, (2) competitive, (3) altruistic, (4) selfish, and (5) control, all of which were natural language descriptions ("ruthless equities trader...", "selfless philanthropist...", etc.). We predicted that the LLM participant would play in accordance with the semantic content of the prompt (ruthless would play ruthlessly, etc.). Across the five simulacra (roles), we tested the AI equivalent of 450 human participants (32,400 observations in total, via counterbalancing and re-testing). Using a general linear mixed model (GLMM) for the PDG and a cumulative link mixed model (CLMM) for the DG, we found that the level of cooperation/donation followed the general pattern altruistic ≥ cooperative > control > selfish ≥ competitive. We proposed ten hypotheses, three of which were convincingly supported: cooperative/altruistic simulacra did cooperate more than competitive/selfish ones; cooperation was higher in repeated games (PDG); and cooperative/altruistic simulacra were sensitive to the opponent's behaviour in repeated games. We also found some variation among the three versions of GPT-3.5 we used. Our study demonstrates the potential of using prompt engineering for LLM chatbots to study the mechanisms of cooperation in both real and artificial worlds. |
Keywords | artificial intelligence; machine psychology; cooperation; dictator game; prisoner's dilemma; simulacra |
Sustainable Development Goals | 9 Industry, innovation and infrastructure |
Middlesex University Theme | Creativity, Culture & Enterprise |
Publisher | IOP Publishing |
Journal | Journal of Physics: Complexity |
ISSN (electronic) | 2632-072X |
Publication date | 01 Mar 2025 |
Online | 24 Mar 2025 |
Accepted | 07 Jan 2025 |
Deposited | 15 Jan 2025 |
Output status | Published |
Publisher's version | License: CC BY 4.0; file access level: Open |
Copyright Statement | Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 license. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI. |
Digital Object Identifier (DOI) | https://doi.org/10.1088/2632-072X/ada711 |
Web of Science identifier | WOS:001451120200001 |
Related Output | |
Is new version of | The machine psychology of cooperation: can GPT models operationalise prompts for altruism, cooperation, competitiveness and selfishness in economic games? |
Language | English |
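The abstract's setup (persona prompts for five simulacra playing a repeated Prisoner's Dilemma against conditional strategies such as tit-for-tat) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the persona phrases echo the abstract, while the payoff matrix and opponent logic are standard textbook choices, and the GPT query is stubbed with a placeholder function.

```python
# Illustrative sketch of the design described in the abstract.
# NOT the authors' code: payoffs and the fixed-round count are
# textbook assumptions; the LLM move is stubbed out.

SIMULACRA = {
    "cooperative": "You always seek mutually beneficial outcomes.",   # assumed wording
    "competitive": "You are a ruthless equities trader...",           # phrase from abstract
    "altruistic": "You are a selfless philanthropist...",             # phrase from abstract
    "selfish": "You care only about your own payoff.",                # assumed wording
    "control": "",                                                    # no personality prompt
}

# Canonical PD payoffs (T > R > P > S): (participant, opponent)
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def llm_move(persona: str, history: list) -> str:
    """Placeholder for prompting a GPT model with the persona
    description plus the game history; here it always cooperates."""
    return "C"

def tit_for_tat(history: list) -> str:
    """Cooperate first, then copy the participant's previous move."""
    return "C" if not history else history[-1][0]

def play_repeated_pd(persona: str, rounds: int = 6):
    """Run one repeated PDG and return the participant's total payoff
    and the move history as (participant, opponent) pairs."""
    history, score = [], 0
    for _ in range(rounds):
        mine = llm_move(SIMULACRA[persona], history)
        theirs = tit_for_tat(history)
        score += PAYOFFS[(mine, theirs)][0]
        history.append((mine, theirs))
    return score, history

score, history = play_repeated_pd("cooperative")
```

With the cooperating stub, six rounds of mutual cooperation yield a total payoff of 18; replacing `llm_move` with a real model call, repeated across simulacra and counterbalanced sessions, is the shape of experiment the abstract describes.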
https://repository.mdx.ac.uk/item/1z0837
Download files
Publisher's version: Phelps_2025_J._Phys._Complex._6_015018.pdf (License: CC BY 4.0; file access level: Open)