
Reliance on artificial intelligence is becoming ubiquitous. We are routinely amazed by its remarkable capabilities, and in many cases it has made work processes more efficient and increased productivity.
Recently in this publication, Chris Quinn, the editor, wrote that artificial intelligence tools had “significantly improved the quality of our content, increased the number of substantive stories we publish and expanded into new topic areas.”
I myself have advocated that we embrace generative AI in teaching and learning -- with some caveats.
Newsrooms the world over are deploying artificial intelligence for a variety of tasks: copy editing, generating news content, analysis, and personalizing how news is delivered to subscribers.
Yet some concerns about the negative cognitive effects on learners of large language model-based generative AI tools — such as ChatGPT, Grok, Claude, and Gemini — are becoming evident.
These concerns go beyond the widely acknowledged worry that exclusive reliance on AI may “threaten truth.” In media and education, we need to be clear-eyed about the negative epistemic and cognitive effects of GenAI.
For example, a recent study by the MIT Media Lab should give us pause. Over time, exclusive reliance on GenAI for writing may erode critical thinking, memory, and curiosity. Hence, as we implement GenAI in teaching and learning, we must consider its negative cognitive effects on young learners.
We should not repeat the mistake we made with social media. We now know that the long-term cognitive effects of social media and smartphones have fostered a mental health crisis among our youth, with cognitive issues and learning difficulties surging amid social isolation, anxiety, and attention deficits.
The MIT study found that relying exclusively on ChatGPT for writing has more downsides than upsides for learners. Researchers recruited 54 participants between the ages of 18 and 39 and divided them into three groups, each tasked with writing multiple essays in response to college-admissions-style prompts. The first group used ChatGPT, the second used Google’s search engine, and the third, “brain only,” group used no algorithmic tools. As the participants wrote their essays, their brain activity was recorded using EEG scans.
The ChatGPT group showed the least amount of brain activity as measured by EEG, and they “consistently underperformed at neural, linguistic, and behavioral levels.”
The group that relied exclusively on ChatGPT showed diminished thinking and memory. Most revealing: among those who used ChatGPT to write the essays, only 20% could recall what they had written, and 16% could not even recognize text they had produced.
As the ChatGPT users wrote more essays, they relied progressively more on copy-and-paste. Their essays converged: the language was wooden and the narratives homogeneous.
The brain-only group exhibited the highest neural activity across all bands — alpha, theta, and delta — suggesting new ideas, creative semantic use of language, and better memory formation. The Google Search group showed higher brain activity than the ChatGPT group, and when its members were permitted to use ChatGPT to rewrite their essays, they improved on the earlier versions.

Some have suggested bringing back blue-book exams and in-classroom writing assignments. I think, however, that educating learners and parents about the negative cognitive effects of exclusive reliance on GenAI tools should be the preferred approach. As educators, we have our work cut out for us. We need to be innovative in persuading students that they may use GenAI tools for research -- but not for first drafts.
Kumar is professor of communication in the Levin College of Public Affairs and Education at Cleveland State University.
Have something to say about this topic?
* Send a letter to the editor, which will be considered for print publication.
* Email general questions about our editorial board or comments or corrections on this opinion column to Elizabeth Sullivan, director of opinion, at esullivan@cleveland.com.