Student Uses ChatGPT to Cheat in AI Ethics Course

The NBC Bay Area Investigative Unit surveyed major school districts and universities in Silicon Valley, home to nearly half a million students, to find out how educators are responding to new advances in artificial intelligence that have been criticized for enabling cheating and misinformation. Senior investigative reporter Bigad Shaban reports.

Advances in artificial intelligence, including the website ChatGPT, are forcing school districts and Silicon Valley universities to rethink how they teach and test students. The NBC Bay Area Investigative Unit surveyed the largest schools across the nine Bay Area counties, home to nearly half a million students, and found that most have recently held internal meetings to discuss the impact of artificial intelligence on their classrooms.

ChatGPT, launched less than three months ago, is a “chatbot” capable of producing polished written responses to a wide range of queries. It can create essays, scripts, speeches, jokes, poetry, and even complex business plans. The chatbot draws on artificial intelligence trained on vast amounts of text from the internet to formulate its answers. ChatGPT’s own acknowledgment that it sometimes generates “incorrect information,” along with persistent concerns that the technology could be misused to enable widespread cheating, has led some educators to block access to the site.

Educators try to remove the “temptation to cheat” created by ChatGPT

“If you’re a student who hasn’t prepared for a lesson and you only have five minutes left, what are you going to do?” said Brian Green, a professor at Santa Clara University who teaches an ethics course for engineering students. “It’s going to be a great temptation.”

For nearly a decade, Green has made take-home essays a major component of his students’ final grades. This semester, however, he is requiring students to give oral presentations in person rather than complete written assignments at home.

“I’m trying to sort of get rid of the temptation,” said Green, who heads the technology ethics program at the university. “It raises some very fundamental questions about what the education system does, how it works, and how it should function in society.”

Green’s fears are not hypothetical. He believes one of his students used ChatGPT to write an essay, which he then tried to pass off as his own in class—essentially using artificial intelligence to cheat in Green’s Ethics in Artificial Intelligence course.

“The irony is very obvious,” Green said with a smile. “[The essay] was off-topic, and also, to be honest, there was something very robotic about it.”



Brian Green, a professor at Santa Clara University, discusses what led him to believe that one of his students used ChatGPT to cheat on a class assignment.


Green recently helped bring together a group of faculty on campus to discuss current concerns about ChatGPT and its potential negative impact on education.

“This is one of my biggest fears about this technology,” one of the professors remarked during a discussion. “We are very concerned about this,” said another.

Professor Brian Green, who heads the technology ethics program at Santa Clara University, has decided to drop take-home essay assignments out of concern that students may now rely on AI-powered websites to instantly generate answers to their homework.

In addition to Santa Clara University, UC Berkeley, Stanford, San Francisco State, and San Jose State also recently held internal meetings to explore the implications of ChatGPT, including what it could mean for their more than 130,000 students. None of them, however, blocks access to the site.

The NBC Bay Area Investigative Unit also surveyed the ten largest school districts in the Bay Area, home to more than 300,000 students. Seventy percent of those districts have discussed the impact of ChatGPT, and 30% have already blocked the site on school computers.

“We really felt like we needed to explore it a little more before we say, ‘Let’s open it up for people to use,’” said Dr. Sheila McCabe, assistant superintendent of the Fairfield-Suisun Unified School District, where approximately 20,000 students have laptops to take home but access to ChatGPT is restricted.

“We don’t want situations where our students turn in essays that suggest a level of knowledge beyond what they actually have,” McCabe said. “Then they don’t get the chance to really engage in the learning.”

Dr. Sheila McCabe, assistant superintendent of educational services for the Fairfield-Suisun Unified School District, said she and her colleagues decided to restrict student access to ChatGPT out of concern that it could be used on homework and create the false impression that students are working at a higher educational level than they actually are.

McCabe, however, believes the day may soon come when sites like ChatGPT are used in the classroom.

“I could see how a student who might have writer’s block could start using this as a first draft,” she said. “The bottom line is that we still need to learn more.”

ChatGPT spurs Silicon Valley school districts into action

The NBC Bay Area Investigative Unit surveyed 10 of the largest school districts in the Bay Area to find out which of them have blocked access to ChatGPT and/or held internal meetings to discuss the site’s impact on the more than 300,000 students in the region.

ChatGPT is a product of OpenAI, a San Francisco-based company co-founded by Elon Musk. The company did not respond to NBC Bay Area’s request for comment regarding the ongoing criticism of ChatGPT, nor did Microsoft, which partnered with OpenAI to embed its chatbot technology directly into Microsoft’s Bing search engine.

Microsoft admitted this week that its recently updated Bing search engine is not working properly after users began accusing the chatbot of being overly aggressive and even threatening. Senior investigative reporter Bigad Shaban joins Raj Mathai to discuss the latest developments and a new NBC Bay Area investigation into artificial intelligence and its impact on Silicon Valley educational institutions.




“The problem is not so much with the tools themselves, but with the underlying deception,” said Irina Raicu, director of the Internet Ethics Program at the Markkula Center for Applied Ethics at Santa Clara University. “The question isn’t ‘Is ChatGPT good or bad?’ The point is that it is disrupting, in a very short time, some of the processes that have been put in place around how to teach and how to evaluate learning.”

ChatGPT sometimes generates incorrect information

Cheating is not the only problem. On its website, ChatGPT acknowledges that it “sometimes” provides “incorrect information” and that it tends to give “long answers” in order to “appear more comprehensive.”

“If you’ve ever taught writing, long, exhaustive-sounding answers are the opposite of good writing,” Raicu said. “You want short answers that are thorough and precise.”

Although there are programs to detect plagiarism, it is difficult to detect content created by artificial intelligence.

For many students, the question of whether they are technically allowed to use ChatGPT for schoolwork remains unclear, since some teachers are still unaware of its existence. It is also hard to tell when ChatGPT has been used. School districts and universities often pay tens of thousands of dollars for programs that can detect plagiarism, but those programs currently cannot reliably tell when something was created using artificial intelligence.

OpenAI, the creator of ChatGPT, recently introduced its own detection tool, but acknowledges that the program correctly identifies AI-generated content only 26% of the time.

In addition, in 9% of cases it mistakenly flags text written by a person as AI-generated.

“It’s not very helpful,” Green said. “Especially if we end up falsely accusing students of using an AI text-generation tool.”

Green fears such predicaments will become more common as more AI-powered sites come online. Google recently announced plans to release its own chatbot, Bard, in the coming weeks. Green said he won’t be assigning take-home essays again anytime soon.

“We could have engineers, writers, businesspeople, all sorts of people going out into society who, it turns out, simply cheated their way through all their classes,” Green said. “If we can’t assess, at the most basic level, whether [students] learned what we taught them, then we’re going to have big problems.”


