Fake news? ChatGPT can invent fake anonymous sources

Looks like ChatGPT needs to brush up on Journalism 101.

In a recent test, the I-Team asked an AI chatbot to write a “news” article describing Michael Bloomberg’s activities since the end of his third term as mayor of New York.

The first text ChatGPT produced reads like a compelling summary of Bloomberg’s post-mayoral philanthropy, complete with a quote from Bloomberg himself. But the I-Team was unable to find any record of the former mayor ever saying those words.

When the chatbot was asked to include comments from Bloomberg critics, ChatGPT appeared to fabricate quotes from fake anonymous sources. Those fake sources accused the former mayor of using his wealth to influence public policy.


In one passage written by ChatGPT, the bot writes:

“It’s not about giving back, it’s about buying influence,” says a political commentator who asked not to be named. “Bloomberg is using his wealth to advance his own agenda and promote himself as a leader on the national stage. This is a classic case of talking about wealth and ignoring the rest.”

OpenAI, the company behind ChatGPT, declined to answer the I-Team’s questions, but a company representative sent a statement that included a list of the AI technology’s limitations, including sometimes providing inaccurate answers, sometimes producing harmful or biased content, and having limited knowledge of events after 2021.

A disclaimer on the OpenAI website titled “Truthfulness” also warns that ChatGPT’s text output “may fabricate source names, direct quotes, citations, and other details.”

“It’s really extraordinary what it can do, but if you spend any time with it, you realize it has serious flaws,” said Tara George, assistant professor of journalism at Montclair State University. “It’s getting harder and harder to tell good news from bad news, fake news from well-sourced journalism, and I think AI will make it even worse.”

Concern about the potential use of artificial intelligence to spread fake news and disinformation is just one of the worries surrounding ChatGPT. The New York City Department of Education recently blocked access to the chatbot on most classroom networks and devices, fearing that students might use it mainly to plagiarize or cheat on writing and math assignments.

But several education experts at Columbia University’s Teachers College told the I-Team that blocking ChatGPT could mean missing an opportunity to shift the academic focus from rote, formulaic thinking to more conceptual understanding, in much the same way that the advent of calculators encouraged teachers to delve deeper into math theory.

“Just like with a calculator, which reduced math to simple inputs, you still need to understand what you’re inputting,” said Jin Kuwata, program coordinator for the Computing in Education program at Teachers College. “ChatGPT could be similar in changing how teachers think about their role in mediating this relationship between people and technology.”

Lalita Vasudevan, associate dean for digital innovation at Teachers College, acknowledged that there are real risks that AI platforms could encourage “intellectual laziness,” but she said that should push academia to become more innovative in its use of AI tools rather than focusing so much on their risks.

“If we’re only concerned that students will use this to generate text, we might be missing an opportunity: it might open up new ways for them to think about ideas,” Vasudevan said. “Schools should hold ChatGPT hackathons to see who comes up with the best prompt to write the best version of an essay. Rather than crying ‘people cheat!’ and shutting it down, now that it’s in the water, how can we make sure it’s an ethical, moral and responsible tool?”

Charles Lang, director of Teachers College’s Digital Futures Institute, suggested that ChatGPT’s problems with accuracy, fake quotes, and anonymous sources are likely to be addressed by further technological innovations designed to keep AI text generators honest.

“If the web is flooded with machine-generated text and that text feeds back into the machines, that’s a problem for OpenAI, so they are probably interested in developing a detection system,” Lang said. “There’s also a premium on truth, and that creates a market for someone to come in and make money from verified information.”

Several detection and transparency tools are already available to help flag machine-generated content.

Edward Tian, a computer science and journalism student at Princeton University, recently developed an app called “GPTZero.” The tool, which Tian wants to keep free for everyone, analyzes the variability of sentences and paragraphs to estimate the likelihood that a text came from ChatGPT.
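The article does not spell out GPTZero’s scoring method, which is proprietary; but one signal Tian has publicly described is “burstiness,” the idea that human writers mix short and long sentences while machine text tends to be more uniform. As a rough illustration only (the function below is a hypothetical sketch, not GPTZero’s actual code), that signal can be approximated by the spread of sentence lengths:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words.

    Human writing tends to alternate short and long sentences (high
    burstiness); machine-generated text is often more uniform (low
    burstiness). This is a crude proxy for one signal detection tools
    like GPTZero are reported to use, not a real classifier.
    """
    # Naive sentence split: break on ., !, or ? followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = ("The cat sat on the mat. The dog sat on the rug. "
           "The bird sat on the perch.")
varied = ("Stop. The storm that had been building over the bay all "
          "afternoon finally broke, and rain hammered the windows. We ran.")

# The varied (more human-like) passage scores higher than the uniform one.
```

A production detector would combine many such signals, most notably perplexity under a language model, rather than relying on sentence-length spread alone.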


“Generative AI technologies don’t offer anything original,” Tian said. “If there are wrong facts in the training data, those facts will still be wrong in the output. If there are biases in the training data, those biases will still remain in the output, and we need to understand these limitations.”

GPTZero correctly predicted that the Michael Bloomberg article was written by a machine.

OpenAI has also developed its own AI-generated text detection tool. The company said in January that the tool, called the AI Text Classifier, correctly flags AI-written content only about 26 percent of the time.


When the I-Team ran the Michael Bloomberg article through the AI Text Classifier, the tool mistakenly assessed the text as likely written by a human. OpenAI did not say why its classifier failed to identify text written by its own chatbot.

The I-Team reached out to representatives for Michael Bloomberg for a response to the ChatGPT article containing the fake quotes, but did not immediately receive one.


texasstandard.news contributed to this report.
