Digital Headlines

Latest Tech News at Your Fingertips

Thursday, January 5, 2023

As NYC public schools block ChatGPT, OpenAI says it’s working on ‘mitigations’ to help spot ChatGPT-generated text

New York City public schools have restricted access to ChatGPT, the AI system that can generate text on a range of subjects and in various styles, on school networks and devices. As widely reported this morning and confirmed to TechCrunch by a New York City Department of Education spokesperson, the restriction was implemented due to concerns about “[the] negative impacts on student learning” and “the safety and accuracy” of the content that ChatGPT produces.

“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success,” the spokesperson told TechCrunch via email, adding that the restricted access came in response to requests from schools.

It’s not a ban per se. The New York City public school system is using the same filter for ChatGPT that it uses to block other apps and websites — e.g. YouTube and Facebook — on school property. Individual schools can request to have ChatGPT unblocked, and the spokesperson said that the New York City Department of Education would “welcome” the opportunity to have a conversation with OpenAI, the startup behind ChatGPT, about how the tool could be adapted for education.

As for OpenAI, a company spokesperson, when reached for comment, said that it is developing “mitigations” to help anyone spot text generated by ChatGPT. That’s significant: TechCrunch recently reported that OpenAI was experimenting with a watermarking technique for AI-generated text, but this is the first time the company has confirmed that it’s working on tools specifically for identifying text that came from ChatGPT.
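
OpenAI hasn’t said how those “mitigations” will work. One publicly discussed idea for spotting machine-generated text is statistical watermarking, where the generator softly prefers a pseudorandomly chosen “green” subset of the vocabulary at each step and a detector checks whether green tokens show up more often than chance. The sketch below is a minimal, hypothetical illustration of that general idea only; it is not OpenAI’s method, and the names (is_green, watermark_z_score) and the GREEN_FRACTION parameter are assumptions made for exposition.

```python
# Toy illustration of "green list" statistical watermark detection.
# This is NOT OpenAI's (undisclosed) approach; all names and parameters
# here are hypothetical stand-ins used to show the shape of the idea.
import hashlib
import math

GREEN_FRACTION = 0.5  # assumed share of the vocabulary marked "green" at each step


def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to the green list, seeded by the previous token.

    A watermarking generator would softly prefer green tokens while sampling;
    a detector only needs this same deterministic rule to re-check each choice.
    """
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION


def watermark_z_score(tokens: list[str]) -> float:
    """How many standard deviations the green-token count sits above chance.

    Unwatermarked human text should hover near GREEN_FRACTION; watermarked
    text should score well above it, so a large z-score suggests machine output.
    """
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    hits = sum(is_green(prev, cur) for prev, cur in zip(tokens, tokens[1:]))
    expected = n * GREEN_FRACTION
    variance = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / math.sqrt(variance)


if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog".split()
    print(f"z-score: {watermark_z_score(sample):.2f}")  # near 0 for unwatermarked text
```

In practice such detectors would operate on a model’s own tokenizer output and much longer passages; a word-level toy like this only shows the kind of statistic a detector might compute.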

“We made ChatGPT available as a research preview to learn from real-world use, which we believe is a critical part of developing and deploying capable, safe AI systems. We are constantly incorporating feedback and lessons learned,” the OpenAI spokesperson said. “We’ve always called for transparency around the use of AI-generated text. Our policies require that users be up-front with their audience when using our API and creative tools … We look forward to working with educators on useful solutions, and other ways to help teachers and students benefit from AI.”

ChatGPT has an aptitude for answering questions on topics ranging from poetry to coding, but one of its biggest flaws is its tendency to give answers that sound convincing but aren’t factually true. That led Q&A coding site Stack Overflow to temporarily ban users from sharing content generated by the AI, saying that ChatGPT made it too easy for users to flood the platform with dubious answers. More recently, the International Conference on Machine Learning, one of the world’s largest AI and machine learning conferences, announced a prohibition on papers that include text generated by ChatGPT and similar AI systems for fear of “unanticipated consequences.”

In education, the debate has revolved largely around the potential for cheating. Perform a Google search for “ChatGPT to write school papers,” and you’ll find plenty of examples of educators, journalists and students testing the waters by wielding ChatGPT to complete homework assignments and standardized essay tests. Wall Street Journal columnist Joanna Stern used ChatGPT to write a passing AP English essay, while Forbes staffer Emma Whitford tapped it to finish two college essays in 20 minutes. Speaking to The Guardian, Arizona State University professor Dan Gillmor recalled how he gave ChatGPT one of the assignments he typically gives his students and found that the AI’s essay would’ve earned “a good grade.”

Plagiarism is another concern. Like other text-generating AI systems, ChatGPT — which is trained on public data, usually collected without consent — can sometimes regurgitate this information verbatim without citing any sources. That includes factual inaccuracies, as alluded to earlier, as well as biased — including blatantly racist and sexist — perspectives. OpenAI continues to introduce filters and techniques to prevent problematic text generations, but new workarounds pop up every day.

Despite those limitations and issues, some educators see pedagogical potential in ChatGPT and other generative AI technologies. In a recent piece for Stanford’s Graduate School of Education website, Victor Lee, associate professor of education at Stanford, noted that ChatGPT may help students “think in ways they currently do not,” for example by helping them discover and clarify their ideas. Teachers may benefit from ChatGPT as well, he posits, using it to generate many examples of a narrative for students in which the basic content stays the same but the style, syntax or grammar differs.

“ChatGPT may [allow] students to read, reflect and revise many times without the anguish or frustration that such processes often invoke, [while] teachers can use the tool as a way of generating many examples and nonexamples of a form or genre,” Lee said in a statement. “Obviously, teachers are less delighted about the computer doing a lot of legwork for students. And students still need to learn to write. But in what way, and what kinds of writing? A … side effect of this new medicine is that it requires all of us to ask those questions and probably make some substantive changes to the overarching goals and methods of our instruction.”

In any case, the New York City public schools policy, which appears to be the first of its kind in the country, will surely force the conversation at school districts elsewhere. As use of the tech grows — ChatGPT had over a million users as of December — independent researchers and companies have begun piloting tools to detect the use of AI-generated text in student submissions. Some educators might choose to embrace them, while others, like Lee, instead encourage the use of ChatGPT as an assistive writing tool.

By Kyle Wiggers, originally published on TechCrunch

