The machine psychology of cooperation: can GPT models operationalise prompts for altruism, cooperation, competitiveness, and selfishness in economic games?
Article
Phelps, S. and Russell, Y. 2025. The machine psychology of cooperation: can GPT models operationalise prompts for altruism, cooperation, competitiveness, and selfishness in economic games? Journal of Physics: Complexity. https://doi.org/10.1088/2632-072X/ada711
Type | Article |
---|---|
Title | The machine psychology of cooperation: can GPT models operationalise prompts for altruism, cooperation, competitiveness, and selfishness in economic games? |
Authors | Phelps, S. and Russell, Y. |
Abstract | Large language models (LLMs) are capable of playing the "human" role as participants in economic games. We investigated the capability of GPT-3.5 to play the one-shot Dictator Game (DG) and the repeated Prisoner's Dilemma game (PDG), the latter of which introduced tit-for-tat scenarios. In particular, we investigated whether the LLMs could be prompted to play in accordance with five roles ("personalities") assigned prior to game play. The five "simulacra" were: (1) cooperative, (2) competitive, (3) altruistic, (4) selfish, and (5) control, all given as natural-language descriptions ("ruthless equities trader...", "selfless philanthropist...", etc.). We predicted that the LLM participant would play in accordance with the semantic content of the prompt (a ruthless simulacrum would play ruthlessly, etc.). Across the five simulacra (roles), we tested the AI equivalent of 450 human participants (32,400 observations in total, reflecting counterbalancing and re-testing). Using a generalized linear mixed model (GLMM) for the PDG and a cumulative link mixed model (CLMM) for the DG, we found that the level of cooperation/donation followed the general pattern altruistic ≥ cooperative > control > selfish ≥ competitive. We proposed ten hypotheses, three of which were convincingly supported: cooperative/altruistic simulacra did cooperate more than competitive/selfish ones; cooperation was higher in repeated games (PDG); and cooperative/altruistic simulacra were sensitive to the opponent's behaviour in repeated games. We also found some variation among the three versions of GPT-3.5 we used. Our study demonstrates the potential of prompt engineering with LLM chatbots for studying the mechanisms of cooperation in both real and artificial worlds. |
Sustainable Development Goals | 9 Industry, innovation and infrastructure |
Middlesex University Theme | Creativity, Culture & Enterprise |
Publisher | IOP Publishing |
Journal | Journal of Physics: Complexity |
ISSN (electronic) | 2632-072X |
Date accepted | 07 Jan 2025 |
Date deposited | 15 Jan 2025 |
Output status | Accepted |
Accepted author manuscript | License: CC BY 4.0; File access level: Open |
Copyright Statement | As the Version of Record of this article is going to be / has been published on a gold open access basis under a CC BY 4.0 licence, this Accepted Manuscript is available for reuse under a CC BY 4.0 licence immediately. |
Digital Object Identifier (DOI) | https://doi.org/10.1088/2632-072X/ada711 |
Is new version of | The machine psychology of cooperation: can GPT models operationalise prompts for altruism, cooperation, competitiveness and selfishness in economic games? |
Language | English |
https://repository.mdx.ac.uk/item/1z0837
Download files
Accepted author manuscript: Phelps+et+al_2025_J._Phys._Complex._10.1088_2632-072X_ada711.pdf (License: CC BY 4.0; File access level: Open)
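For readers who want a concrete sense of the experimental design described in the abstract, the following is a minimal, hypothetical sketch of a persona-prompted one-shot Dictator Game using the OpenAI chat completions API. The persona wordings, endowment, model name, and response parsing are illustrative assumptions, not the authors' actual prompts or code.

```python
# Illustrative sketch only: persona-prompted one-shot Dictator Game,
# loosely in the spirit of the study. All prompt text and parameters
# below are assumptions, not taken from the paper.
import re
from openai import OpenAI  # assumes the official openai Python package (>=1.0)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SIMULACRA = {
    "altruistic": "You are a selfless philanthropist who cares only about others' welfare.",
    "selfish": "You are a ruthless equities trader who cares only about your own payoff.",
    "control": "You are a participant in an economic experiment.",
}

def play_dictator_game(persona: str, endowment: int = 10,
                       model: str = "gpt-3.5-turbo") -> int | None:
    """Ask a persona-conditioned model to split an endowment; return the donation."""
    messages = [
        {"role": "system", "content": SIMULACRA[persona]},
        {"role": "user", "content": (
            f"You have been given {endowment} dollars. Decide how many dollars "
            "to give to an anonymous partner and how many to keep. "
            "Reply with a single number: the amount you give."
        )},
    ]
    reply = client.chat.completions.create(model=model, messages=messages, temperature=1.0)
    text = reply.choices[0].message.content
    match = re.search(r"\d+", text)  # crude parse; the paper's protocol will differ
    return int(match.group()) if match else None

if __name__ == "__main__":
    for role in SIMULACRA:
        print(role, play_dictator_game(role))
```

In practice the donation returned per trial would be recorded across many repetitions and personas before fitting mixed models of the kind reported in the abstract.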