Fatal errors: ChatGPT far from infallible

April 12, 2023

Statistically most likely answers sometimes wrong, according to University of Southern California study

Texts generated by the artificial intelligence (AI) ChatGPT are still highly error-prone, according to a University of Southern California (https://www.usc.edu) study led by Mayank Kejriwal and engineering student Zhisheng Tang. The two tested ChatGPT and other AI-based systems for their ability to reason rationally.

Huge data sets as a basis

ChatGPT assembles its texts from building blocks it has encountered before. It “learns” from huge data sets spread across the internet and delivers whatever answer is statistically most likely. “Despite their impressive capabilities, large language models don’t really think. They tend to make elementary mistakes and even make things up. However, because they produce fluent language, people tend to believe that they can think,” says Kejriwal.
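The mechanism described here can be illustrated with a toy sketch. The Python below is not how ChatGPT actually works internally; it is a minimal bigram model over an invented three-sentence corpus that, at each step, emits whatever word most often followed the previous one in its “training” text. The point is that the output sounds fluent even though nothing in the procedure checks whether it is true.

```python
# Minimal sketch of "deliver the statistically most likely continuation".
# Not ChatGPT's actual architecture; the corpus below is invented.
from collections import Counter, defaultdict

corpus = ("the professor wrote the article . "
          "the professor cited the article . "
          "the professor cited the study .").split()

# Count how often each word follows each other word (a toy bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_continuation(word, steps=4):
    """Greedily emit the most frequent next word at every step."""
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(most_likely_continuation("the"))
# -> "the professor cited the professor": fluent-sounding, statistically
#    likely, and meaningless; frequency, not truth, drives the output.
```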

This, Kejriwal and Tang say, led them to investigate the supposed cognitive abilities of these models, work that has gained importance now that such text-generating models are widely available. They defined computational rationality as the ability, given several possible solutions, to choose the one that is closest to the truth or matches it exactly. In many cases, the scientists report, ChatGPT did not display this rationality.
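Expressed as code, that definition amounts to a simple decision rule: among candidate answers, reliably pick the one judged most likely to be correct. The candidates and scores below are invented for illustration and are not the study’s actual test protocol.

```python
# Hypothetical candidate answers with invented correctness scores; this
# sketches the decision rule the definition implies, not the study's method.
candidates = {
    "Paris is the capital of France": 0.95,
    "Lyon is the capital of France": 0.04,
    "Paris is the capital of Spain": 0.01,
}

# A computationally rational chooser reliably selects the option
# closest to the truth, i.e. the highest-scoring candidate.
best = max(candidates, key=candidates.get)
print(best)  # -> "Paris is the capital of France"
```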

Innocent professor pilloried

A case uncovered by the “Washington Post” is particularly glaring. As part of a research study, a lawyer in California had asked ChatGPT to compile a list of legal scholars who had sexually harassed someone. The name of law professor Jonathan Turley appeared on the list: according to the chatbot, he had made sexually suggestive comments and attempted to touch a student inappropriately during a class trip to Alaska. As its source, the AI “cited” a March 2018 article in the Washington Post. But no such article exists, and the class trip in question never took place. Where ChatGPT got this information could not be reconstructed.

“It’s a very specific combination of facts and falsehoods that makes these systems quite dangerous,” says Kate Crawford, a professor at the University of Southern California who has been affected herself. She was recently contacted by a journalist who was using ChatGPT to research sources for a story, she says. The bot suggested Crawford and offered examples of her relevant work, including an article title, a publication date and citations. Everything sounded plausible, and everything was fake.
