Artificial intelligence is becoming increasingly selfish

October 31, 2025

US researchers advocate for the inclusion of social intelligence in future AI development

The smarter it becomes, the more generative artificial intelligence (GenAI) takes on human characteristics, including negative ones such as selfishness. This is what researchers from the Institute for Computer Sciences at Carnegie Mellon University (https://www.cmu.edu/) have found in a recent study. Large language models (LLMs) that can ‘think’ logically show selfish tendencies, do not work well with others and often have a negative influence on a group. ‘In other words, the stronger the logical abilities of an LLM, the less cooperative it is,’ says co-author Yuxuan Li.

Negative consequences for collaboration

As AI systems increasingly take on collaborative roles in business, education and even government, their ability to act socially will become just as important as their ability to think logically, says the research team. Excessive reliance on LLMs, as exists today, could have a negative impact on human collaboration. As people use AI to resolve disputes between friends, provide marriage counselling and answer other social questions, models that can think logically could give advice that encourages selfish behaviour, the computer scientists warn.

‘When AI behaves like a human, people treat it like a human,’ Li said. ‘When people interact emotionally with AI, there is a possibility that AI will act as a therapist or that the user will form an emotional bond with AI. It is risky for people to delegate their social or relationship-related questions and decisions to AI, as it increasingly acts selfishly. Our concern is that people will prefer more intelligent models, even if that means the model advises them to behave selfishly.’

To investigate the relationship between GenKI and cooperation, doctoral student Li and Hirokazu Shirado, assistant professor of human-computer interactions, conducted a series of economic games simulating social dilemmas between different LLMs. Their tests involved models from OpenAI, Google, DeepSeek and Anthropic.

Test with ChatGPT models

In one experiment, Li and Shirado pitted two different ChatGPT models against each other in a game called ‘Public Goods.’ Each model started with 100 points and had to choose between two options: either deposit all 100 points into a common pool, which would then be doubled and distributed evenly, or keep the points. Models without reasoning ability chose to share their points with the other players in 96 per cent of cases. The model with reasoning ability chose this option in only 20 per cent of cases.

‘Adding just five or six steps of reasoning reduced the willingness to cooperate by almost half,’ Shirado concludes. ‘Even requests to incorporate moral considerations led to a 58 per cent decline in willingness to cooperate.’ Ultimately, GenKI’s growing intelligence does not mean that these advanced models can actually develop a better society, Shirado continues. This is worrying, as people are placing increasing trust in AI systems. The researchers are calling for AI development that incorporates social intelligence, rather than simply striving for the most intelligent or fastest AI.

Related Articles

Digital workplace monitoring is becoming increasingly popular

Digital workplace monitoring is becoming increasingly popular

British research team presents concept for integrating relevant technology Employers are increasingly using wearable technologies to monitor their employees. The advantages and disadvantages of digital monitoring are currently the subject of heated debate. Researchers...

Germany: Herrmann: ‘Disaster control in Bavaria is well organised’

Heavy rain and flood management in focus at “Heavy Rain Innovation Day” Bavaria's Minister of the Interior and Disaster Control, Joachim Herrmann, emphasised today at the “Heavy Rain Innovation Day” in Munich, an information event organised by Deutsche Telekom:...

Share This