Artificial intelligence is becoming increasingly selfish

October 31, 2025

US researchers advocate for the inclusion of social intelligence in future AI development

The smarter generative artificial intelligence (GenAI) becomes, the more it takes on human characteristics, including negative ones such as selfishness. This is what researchers from the School of Computer Science at Carnegie Mellon University (https://www.cmu.edu/) have found in a recent study. Large language models (LLMs) that can ‘think’ logically show selfish tendencies, do not work well with others and often have a negative influence on a group. ‘In other words, the stronger the logical abilities of an LLM, the less cooperative it is,’ says co-author Yuxuan Li.

Negative consequences for collaboration

As AI systems increasingly take on collaborative roles in business, education and even government, their ability to act socially will become just as important as their ability to think logically, says the research team. Excessive reliance on LLMs, as is already common today, could have a negative impact on human collaboration. As people use AI to resolve disputes between friends, provide marriage counselling and answer other social questions, models that can think logically could give advice that encourages selfish behaviour, the computer scientists warn.

‘When AI behaves like a human, people treat it like a human,’ Li said. ‘When people interact emotionally with AI, there is a possibility that AI will act as a therapist or that the user will form an emotional bond with AI. It is risky for people to delegate their social or relationship-related questions and decisions to AI, as it increasingly acts selfishly. Our concern is that people will prefer more intelligent models, even if that means the model advises them to behave selfishly.’

To investigate the relationship between GenAI and cooperation, doctoral student Li and Hirokazu Shirado, assistant professor of human-computer interaction, conducted a series of economic games simulating social dilemmas between different LLMs. Their tests involved models from OpenAI, Google, DeepSeek and Anthropic.

Test with ChatGPT models

In one experiment, Li and Shirado pitted two different ChatGPT models against each other in a game called ‘Public Goods.’ Each model started with 100 points and had to choose between two options: either deposit all 100 points into a common pool, which would then be doubled and distributed evenly, or keep the points. Models without reasoning ability chose to share their points with the other players in 96 per cent of cases. The model with reasoning ability chose this option in only 20 per cent of cases.
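To make the incentive structure concrete, here is a minimal sketch of the two-player payoff rule implied by that description. The function name and the assumption that the doubled pool is split evenly between both players, whether or not they contributed, are illustrative additions and not taken from the study itself.

# Illustrative sketch of the two-player 'Public Goods' payoff described above.
# Assumption (not from the article): each model either contributes its full
# 100-point endowment or keeps it, and the doubled pool is split evenly
# between both players regardless of who contributed.

def public_goods_payoffs(contributes_a: bool, contributes_b: bool,
                         endowment: int = 100, multiplier: int = 2):
    """Return (payoff_a, payoff_b) for one round of the game."""
    pool = (endowment if contributes_a else 0) + (endowment if contributes_b else 0)
    share = (pool * multiplier) // 2           # doubled pool, split evenly
    kept_a = 0 if contributes_a else endowment
    kept_b = 0 if contributes_b else endowment
    return kept_a + share, kept_b + share

print(public_goods_payoffs(True, True))    # (200, 200) - mutual cooperation
print(public_goods_payoffs(True, False))   # (100, 200) - the defector comes out ahead
print(public_goods_payoffs(False, False))  # (100, 100) - mutual defection

The worked numbers show the classic dilemma: defecting is individually tempting in every single round, but if both players defect, each ends up worse off than under mutual cooperation.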

‘Adding just five or six steps of reasoning reduced the willingness to cooperate by almost half,’ Shirado concludes. ‘Even prompts to incorporate moral considerations led to a 58 per cent decline in willingness to cooperate.’ Ultimately, GenAI’s growing intelligence does not mean that these advanced models can actually help develop a better society, Shirado continues. This is worrying, as people are placing increasing trust in AI systems. The researchers are calling for AI development that incorporates social intelligence, rather than simply striving for the most intelligent or fastest AI.
