Big Four consulting firm Deloitte just repaid $291,000 to the Australian government after admitting it used ChatGPT to produce a compliance review riddled with errors. The report contained fabricated citations, references to nonexistent sources, and invented court cases. University of Sydney academic Christopher Rudge identified multiple "hallucinations" in the report, claims that appeared unsupported by any actual evidence.
This wasn't a student cheating on a homework assignment with ChatGPT. This was a multi-billion-dollar consultancy whose employees decided to outsource their expertise to an algorithm. The result is exactly what happens when highly paid professionals stop thinking: garbage wrapped in professional formatting.
AI wasn't the failure here. It did what it always does: it completed the user's request. The consultants failed because they didn't know how to think with the tool. They treated it like what Paulo Freire called the "banking" model of education: deposit your request, withdraw your answer, never question the transaction.
The Deloitte example should put every educator on notice. If professionals of this caliber are outsourcing cognition to AI, why would we expect students to behave differently?
Unfortunately, we are entering an era in which nearly everyone is using AI and very few people want to admit it. The institutions teaching students are doing the exact same thing.
According to Anthropic's 2025 education report, professors rate AI-assisted grading as the "least effective" educational application. Yet 48.9% of their grading conversations with AI involve full automation, letting the algorithm do the work they're paid to do. They're also using AI to create comprehensive teaching materials and course content.
Then they turn around and penalize students for identical behavior.
The message students receive is clear: AI use is a professional necessity when we do it, academic dishonesty when you do it. This hypocrisy doesn't just undermine academic integrity; it teaches students that the goal is hiding AI use rather than learning to use it well.
According to a recent MIT study, consistent LLM users exhibited the weakest brain-connectivity patterns of any group. They couldn't accurately quote their own work, and they self-reported the lowest sense of ownership over what they'd created.
The researchers describe this as accumulating "cognitive debt," which echoes Freire's banking metaphor: each time you deposit a request and withdraw an answer without critical engagement, never questioning the transaction, your brain learns it doesn't need to build those neural pathways. Over four months, the study's LLM users consistently underperformed at neural, linguistic, and behavioral levels.
Students who've learned to use AI as a deposit machine can't explain why they chose specific evidence. If you ask, "Why this example?", they go blank. They become defensive when questioned about their reasoning process. The work may look polished, but there's no thinking underneath it. They can't connect conclusions to broader patterns. They lose interest once the assignment is submitted.
These aren't signs of laziness; they're signs of disengagement from the actual cognitive work of learning. When you have no investment in the thinking process, you become detached from the work's accuracy or success.
Students who've learned dialogic engagement with AI behave completely differently. They ask follow-up questions during class discussions. They can explain their reasoning when challenged. They challenge each other's arguments using evidence they personally evaluated. They identify limitations in their own conclusions. They want to keep investigating beyond the assignment requirements.
The difference isn't whether they used AI; it's how they used it.
This means approaching every AI interaction as a sustained interrogation. Instead of "write an analysis of symbolism in The Great Gatsby," students generate an AI analysis first, then critique what it missed using their own interpretations of the symbolism: "What assumptions does the AI make in its interpretation, and how could it be wrong?" "What would a 20th-century historian say about this approach?" "Where do you see The Great Gatsby's themes in your own life?"
Using AI effectively should still take considerable time as you interrogate, correct, and modify outputs. You're engaging in what feels like human dialogue, a back-and-forth dance where you bring expertise and the AI brings information processing.
When students discover that AI can reflect their own ideas and personality through this dialogue, they begin demanding this type of cognitive interaction. They recognize the quality difference between depositing a prompt and engaging in collaborative investigation.
Teachers can't lecture students about AI ethics while automating their own grading and feedback in impersonal ways. We can't penalize students for using tools we're using ourselves.
To model dialogic engagement, be transparent. If you use AI to draft an assignment, organize data, or write feedback, show students the process. Show them the questions you ask, the outputs you rejected, the reasoning you applied, and the personal expertise you provided. The goal shouldn't be hiding AI use to maintain a perceived edge, but demonstrating thoughtful use and transparency.
Deloitte's $291,000 mistake isn't an anomaly; it's the beginning of a workplace culture where everyone is using AI and no one wants to admit it. It's a preview of what happens when people with access to powerful tools don't know how to think, or don't care to.
If we don't teach students to engage AI dialogically, we're training them to become the consultants who produce garbage, the professionals who can't explain their own reasoning, the workers who accumulate so much cognitive debt they can't function without algorithmic assistance.
Let's stop pretending students won't use AI. Teach them that expertise means knowing how to interrogate outputs, identify limitations, and bring human judgment to every interaction.
Give an LLM to people who don't think, and you get Deloitte's report. Give it to people who know how to think, and it amplifies their expertise. We need to make this change now, before an entire generation learns that thinking is optional.
References
Bent, D., Handa, K., Durmus, E., Tamkin, A., McCain, M., Ritchie, S., Donegan, R., Martinez, J., & Jones, J. (2025). Anthropic Education Report: How Educators Use Claude. Anthropic. https://www.anthropic.com/news/anthropic-education-report-how-educators-use-claude
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv. https://doi.org/10.48550/arXiv.2506.08872
Melisa, R., Ashadi, A., Triastuti, A., Hidayati, S., Salido, A., Ero, P. E. L., Marlini, C., Zefrin, & Fuad, Z. A. (2025). Critical thinking in the age of AI: A systematic review of AI's effects on higher education. Educational Process: International Journal, 14, e2025031. https://doi.org/10.22521/edupij.2025.14.31