
University students offload critical thinking, other hard work to AI

2025-05-19 10:00:00

By Jill Barshay

Tech evangelists may be dazzled by the promise of AI, but two well-designed new studies — one in China and one by a leading AI company — signal trouble ahead. 

One study was conducted by an international team of researchers who examined how Chinese students used ChatGPT for help with English writing; the other was carried out by researchers at Anthropic, the company behind the AI chatbot Claude. Both reach a similar conclusion: Many students are letting AI do important brain work for them.

The first study was published online in the British Journal of Educational Technology in December 2024 and provocatively titled, “Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance.” Researchers in China and Australia set up a laboratory experiment in the summer of 2023 in which 117 Chinese students were asked to read several texts, write an essay on them and then revise the essay. All of the work was in English, which wasn’t the students’ native tongue.

The students were randomly assigned to one of four conditions while completing the work. One group was allowed to use ChatGPT. The second group could consult a human writing coach. A third group was given a writing toolkit with checklists of things to review. A fourth group had no extra help and had to complete the assignment alone. In addition to writing essays, all the students took tests to measure how much they learned and filled out surveys on their emotions.

To the researchers’ surprise, the students in the ChatGPT group improved their essays the most — even more than the group with human writing teachers. But the ChatGPT group didn’t learn more about the topic they read and wrote about, nor did the ChatGPT students feel more motivated to write and learn than students in the other three groups. Indeed, there were signs that the students who enjoyed the assignment the most and maintained interest were those who merely received the writing checklist but otherwise completed the assignment without AI or human handholding.

As the researchers analyzed how students completed their work on computers, they noticed that students who had access to AI or a human were less likely to refer to the reading materials. These two groups revised their essays primarily by interacting with ChatGPT or chatting with the human. Those with only the checklist spent the most time looking over their essays. 

The AI group spent less time evaluating their essays and making sure they understood what the assignment was asking them to do. The AI group was also prone to copying and pasting text that the bot had generated, even though researchers had prompted the bot not to write directly for the students. (It was apparently easy for the students to bypass this guardrail, even in the controlled laboratory.) Researchers mapped out all the cognitive processes involved in writing and saw that the AI students were most focused on interacting with ChatGPT. 

“This highlights a crucial issue in human-AI interaction,” the researchers wrote. “Potential metacognitive laziness.” By that, they mean a dependence on AI assistance: offloading thought processes to the bot rather than engaging directly in the synthesizing, analyzing and explaining the tasks require.

“Learners might become overly reliant on ChatGPT, using it to easily complete specific learning tasks without fully engaging in the learning,” the authors wrote.

The second study, by Anthropic, was released in April during the ASU+GSV education investor conference in San Diego. In this study, in-house researchers at Anthropic studied how university students actually interact with its AI bot, called Claude, a competitor to ChatGPT. That methodology is a big improvement over surveys of students who may not accurately remember exactly how they used AI.

Researchers began by collecting all the conversations over an 18-day period with people who had created Claude accounts using their university email addresses. (The description of the study says that the conversations were anonymized to protect student privacy.) Then, researchers filtered those conversations for signs that the person was likely a student seeking help with academics: school work, studying, learning a new concept or academic research. Researchers ended up with 574,740 conversations to analyze.

The results? Students primarily used Claude for creating things (40 percent of the conversations), such as creating a coding project, and analyzing (30 percent of the conversations), such as analyzing legal concepts.

[Chart: Creating and analyzing are the most popular tasks university students ask Claude to do for them. Source: Anthropic, April 2025]

Anthropic’s researchers noted that these were higher-order cognitive functions, not basic ones, according to a hierarchy of skills known as Bloom’s Taxonomy.

“This raises questions about ensuring students don’t offload critical cognitive tasks to AI systems,” the Anthropic researchers wrote. “There are legitimate worries that AI systems may provide a crutch for students, stifling the development of foundational skills needed to support higher-order thinking.”

Anthropic’s researchers also noticed that students were asking Claude for direct answers almost half the time with minimal back-and-forth engagement. Researchers described how even when students were engaging collaboratively with Claude, the conversations might not be helping students learn more. For example, a student would ask Claude to “solve probability and statistics homework problems with explanations.” That might spark “multiple conversational turns between AI and the student, but still offloads significant thinking to the AI,” the researchers wrote.

Anthropic was hesitant to say it saw direct evidence of cheating. Researchers wrote about an example of students asking for direct answers to multiple-choice questions, but Anthropic had no way of knowing if it was a take-home exam or a practice test. The researchers also found examples of students asking Claude to rewrite texts to avoid plagiarism detection.

The hope is that AI can improve learning by giving immediate feedback and personalizing instruction for each student. But these studies show that AI is also making it easier for students not to learn.

AI advocates say that educators need to redesign assignments so that students cannot complete them simply by asking AI, and to teach students how to use AI in ways that maximize learning. To me, this seems like wishful thinking. Real learning is hard, and if there are shortcuts, it’s human nature to take them.

Elizabeth Wardle, director of the Howe Center for Writing Excellence at Miami University, is worried both about writing and about human creativity.

“Writing is not correctness or avoiding error,” she posted on LinkedIn. “Writing is not just a product. The act of writing is a form of thinking and learning.” 

Wardle cautioned about the long-term effects of too much reliance on AI. “When people use AI for everything, they are not thinking or learning,” she said. “And then what? Who will build, create, and invent when we just rely on AI to do everything?”

It’s a warning we all should heed. 

Contact staff writer Jill Barshay at 212-678-3595, jillbarshay.35 on Signal, or barshay@hechingerreport.org.

This story about how students use AI was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.


