I-Team

Fake News? ChatGPT Has a Knack for Making Up Phony Anonymous Sources

Worries about the potential use of artificial intelligence to disseminate fake news and misinformation are just one area of concern surrounding ChatGPT; educators also fear students will use it to plagiarize or cheat on certain assignments


Could artificially intelligent chatbots pump out phony articles with false facts? The I-Team's Chris Glorioso tested ChatGPT, one of the most sophisticated AI chatbots available, and found the platform can produce convincing news stories in seconds. But it also has a knack for inventing phony anonymous sources.

It appears ChatGPT needs a refresher on the lessons of Journalism 101. 

In a recent test, the I-Team asked the artificially intelligent chatbot to write a "news" article describing Michael Bloomberg’s activities since finishing his third term as mayor of New York City. 

The first text output from ChatGPT reads like a convincing summary of Bloomberg's post-mayoral philanthropic activities, complete with a quote from Bloomberg himself. But the I-Team could find no record of the former mayor ever uttering those words.

When the chatbot was reminded to include commentary from Bloomberg's critics, ChatGPT responded with entirely fabricated quotes attributed to phony anonymous sources. Those fake sources skewer the former mayor for using his wealth to influence public policy.


In one passage, the bot writes:

"'It’s not about giving back, it’s about buying influence,' says a political commentator who asked not to be named. 'Bloomberg is using his wealth to advance his own agenda and promote himself as a leader on the national stage. It’s a classic case of wealth talking, and the rest of us being ignored.'"

OpenAI, the company behind ChatGPT, declined to answer questions from the I-Team, but a spokesperson for the firm sent a fact sheet listing the technology's limitations, including occasionally providing inaccurate responses, sometimes producing harmful or biased content, and having limited knowledge of events after 2021.

A disclaimer on the OpenAI website, under the heading "Truthfulness," also cautions that ChatGPT's text output "may fabricate source names, direct quotations, citations and other details."

"It’s really extraordinary what it can do but if you spend any time with it you realize that it has severe shortcomings," said Tara George, Associate Professor of Journalism at Montclair State University. "It’s getting harder and harder to tell the good stuff from the bad stuff, the fake news from the well reported journalism, and I think that AI is going to make that worse."

Worries about the potential use of artificial intelligence to disseminate fake news and misinformation are just one area of concern surrounding ChatGPT. New York City's Department of Education recently blocked access to the chatbot on most school networks and devices for fear students might use it to plagiarize or cheat on writing and math assignments.

But several education experts at Teachers College, Columbia University told the I-Team that blocking ChatGPT may miss an opportunity to shift academic emphasis from rote, formulaic thinking to deeper conceptual understanding, in much the same way the advent of calculators prompted teachers to delve deeper into mathematical theory.

"Just like the calculator has reduced mathematics down to – punch it in, you still need to understand something about when you want to punch something in," said Jin Kuwata, who coordinates the Teachers College Computing in Education Program. "Chat GPT might be the same thing in terms of shifting how teachers think of their roles in mediating this relationship between people and technology."

Lalitha Vasudevan, Vice Dean for Digital Innovation at Teachers College, acknowledged there are real risks that AI platforms could encourage "intellectual laziness," but she said that should prompt academia to become more innovative in the use of AI tools – rather than focusing so heavily on their risks.

"If we’re only concerned with the fact that students are using this to generate text, we are perhaps missing one possibility which is it might open up new ways for them to think about ideas," Vasudevan said. "Schools should have ChatGPT hack-a-thons that say who can come up with the best prompt to deliver the best version of this essay. I think it’s just trying to turn the heat down from 'Oh my gosh, this is going to make people cheat!' and instead turn up the volume on — now that it's in the water — how do we make sure this is an ethical, moral, and a responsible tool?”

Charles Lang, Director of the Teachers College Digital Futures Institute, suggested ChatGPT's problems with accuracy, phony quotes and fabricated anonymous sources are likely to be addressed by additional technological innovations developed to keep AI text generators honest.

"If the internet gets flooded with machine-generated text and that thing gets fed back into the machines, that’s a problem for Open AI. So they are probably motivated to figure out a detection system," Lang said. "There’s also a premium on truth and that makes a market for someone to come in and invent something and make money off of having verified information."

Some verification and transparency tools are already available to help highlight machine-generated content.

Edward Tian, a computer science and journalism student at Princeton University, recently designed an app called "GPTZero." The tool, which Tian wants to keep free for anyone to use, analyzes statistical characteristics of sentences and paragraphs, including how predictable the text is to a language model and how much that predictability varies, to estimate the likelihood that text came from ChatGPT.
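Tian has not published GPTZero's exact scoring, but he has described the two signals it relies on: "perplexity," or how predictable a passage is to a language model, and "burstiness," or how much that predictability varies from sentence to sentence. The sketch below illustrates that general idea, not GPTZero itself: it scores text with the open-source GPT-2 model through Hugging Face's transformers library, and the model choice and cutoff value are assumptions made only for demonstration.

```python
# Illustrative perplexity/burstiness scoring in the spirit of GPTZero.
# This is NOT GPTZero's code: the GPT-2 model and the 60.0 cutoff are
# assumptions chosen only to demonstrate the idea.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How predictable `text` is to GPT-2 (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing the input as labels makes the model report its average
        # next-token cross-entropy over the passage; exp() gives perplexity.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

def burstiness(text: str) -> float:
    """Spread of per-sentence perplexity; human prose tends to vary more."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    if len(sentences) < 2:
        return 0.0
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return (sum((x - mean) ** 2 for x in scores) / len(scores)) ** 0.5

def likely_machine_written(text: str, cutoff: float = 60.0) -> bool:
    # Machine-generated text tends to be uniformly predictable, so a low
    # perplexity score is treated here as evidence of AI authorship.
    return perplexity(text) < cutoff
```

In this toy version, a flat, highly predictable passage gets flagged as probable machine output, matching the intuition Tian describes: AI-generated text reads as uniformly predictable, while human writing is burstier.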

GPTZero accurately determined that the article about Michael Bloomberg was written by AI.

"Generative AI technologies are not coming out with anything original," said Tian. "If there are wrong facts in its training data, these facts will still be wrong in its output. If there are biases in training data, these biases will still remain in its output and we have to understand these limitations."


OpenAI has also developed a tool to detect AI-generated text. In January, the company said the tool, called the "AI Text Classifier," correctly labels only about 26 percent of AI-written text as likely written by AI.


When the I-Team ran the article about Michael Bloomberg through the AI Text Classifier, the tool incorrectly judged the text to have been written by a human. OpenAI did not say why its classification tool was unable to identify text that ChatGPT itself wrote.

The I-Team reached out to representatives for Michael Bloomberg for reaction to the ChatGPT article containing fake quotes but did not immediately hear back.
