Fatal errors: ChatGPT far from infallible

April 12, 2023

Statistically probable answers are sometimes wrong, according to a University of Southern California study

Texts generated by the artificial intelligence (AI) system ChatGPT are currently still highly error-prone, as a study by the University of Southern California (https://www.usc.edu) led by Mayank Kejriwal and engineering student Zhisheng Tang shows. The researchers tested ChatGPT and other AI-based systems for their ability to reason rationally.

Huge data sets as a basis

ChatGPT assembles its texts from existing building blocks. It “learns” from huge data sets spread across the internet and delivers whatever is statistically most likely to be correct. “Despite their impressive capabilities, large language models don’t really think. They tend to make elementary mistakes and even make things up. However, because they produce fluent language, people tend to believe that they can think,” says Kejriwal.
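A minimal, purely illustrative sketch of this idea (the candidate sentences and probabilities below are invented, not taken from the study): a language model ranks possible continuations by statistical likelihood and returns the most probable one, without ever checking whether it is true.

```python
# Illustrative sketch only: invented candidate sentences and probabilities.
# A language model picks the statistically most likely continuation;
# factual truth never enters the selection.
candidates = {
    "The article appeared in the Washington Post in March 2018.": 0.62,  # fluent, but may be false
    "No such article was ever published.": 0.31,
    "Article Post Washington the 2018 in appeared.": 0.07,               # unlikely, incoherent
}

# Selection criterion: maximum likelihood, not correctness.
most_likely = max(candidates, key=candidates.get)
print(most_likely)
```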

This, Kejriwal and Tang say, led them to investigate the supposed cognitive abilities of the models, work that has gained importance now that such text-generation models are widely available. They defined computational rationality as the ability, when presented with several possible solutions, to choose the one that is closest to the truth or exactly correct. In many cases, the researchers did not find this rationality in ChatGPT.
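As a rough illustration of that definition (an assumed formalization, not the researchers' own code or data): a computationally rational choice would pick, from several candidate answers, the one that exactly matches the truth or lies closest to it.

```python
# Illustrative sketch only: invented ground truth and candidate answers.
# "Computational rationality" in this sense means choosing the candidate
# that hits the truth exactly or comes closest to it.
ground_truth = 42.0
candidate_answers = [37.5, 41.8, 50.0]

# Rational choice: minimize the distance to the truth (an exact hit gives 0).
rational_choice = min(candidate_answers, key=lambda a: abs(a - ground_truth))
print(rational_choice)  # 41.8 is closest to 42.0
```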

Innocent professor pilloried

A case uncovered by the “Washington Post” is particularly glaring. As part of a research study, a lawyer in California had asked ChatGPT to compile a list of legal scholars who had sexually harassed someone. The name of law professor Jonathan Turley appeared on the list: according to ChatGPT, he had made sexually suggestive comments and attempted to touch a student inappropriately during a class trip to Alaska. As its source, the AI “cited” a March 2018 article in the Washington Post. But no such article exists, and the class trip it referred to never took place. Where ChatGPT got this information could not be reconstructed.

“It’s a very specific combination of facts and falsehoods that makes these systems quite dangerous,” says Kate Crawford, a professor at the University of Southern California who has herself been affected. She was recently contacted by a journalist who was using ChatGPT to research sources for a story, she says. The bot suggested Crawford and offered examples of her relevant work, including an article title, a publication date and citations. Everything sounded plausible, and everything was fake.
