Philosophy majors outperform every other major on verbal and logical reasoning tests. They develop curiosity and open-mindedness that other students typically don't. Yet, on paper, philosophy teaches no practical skills, promises no measurable outcomes, and asks questions with no clear answers.
So what's happening here?
Two recent studies suggest that the cognitive processes that make philosophy students better thinkers are exactly what we risk losing when students delegate their intellectual work to AI systems. Philosophy students spend years wrestling with questions that have no obvious solutions, the exact opposite of what AI excels at. Yet somehow, this apparently impractical activity produces measurably superior thinking abilities.
Current AI is like a brilliant student who has read every cookbook ever written but has never once set foot in a kitchen. That student can write a beautiful menu that looks perfect on paper, but doesn't know that you can't bake if the oven is broken.
This captures what Apple researchers discovered by testing AI reasoning models on puzzles of increasing complexity. On simple puzzles, the models performed well. At medium complexity, their step-by-step processing gave them an advantage. But at high complexity, performance collapsed completely rather than declining gradually.
Even when researchers provided explicit solutions (essentially perfect "recipes" for solving puzzles), performance didn't improve. The systems still failed at the same complexity threshold.
The limitation isn't knowledge or creativity. AI systems can describe perfect solutions but cannot execute them when complexity requires faithfully carrying out long chains of exact steps. Like the cookbook student, they lack the practical understanding needed when theoretical knowledge meets real-life constraints.
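One of the puzzles in the Apple study was the Tower of Hanoi, which makes the point concrete. Below is a minimal sketch (my own illustration using the standard recursive algorithm, not the researchers' code or prompts) of what such an explicit "recipe" looks like: the recipe itself fits in a few lines, but following it faithfully means producing an exponentially long chain of moves without a single slip.

    # A hypothetical sketch, not the Apple researchers' code: the standard recursive
    # "recipe" for the Tower of Hanoi, one of the puzzles in their suite.
    # The algorithm is flawless, yet executing it means emitting every one of the
    # 2**n - 1 moves in order, without a single slip.

    def hanoi(n, source, target, spare, moves):
        """Append the exact move sequence for n disks to `moves`."""
        if n == 0:
            return
        hanoi(n - 1, source, spare, target, moves)   # clear n-1 disks out of the way
        moves.append((source, target))               # move the largest remaining disk
        hanoi(n - 1, spare, target, source, moves)   # stack the n-1 disks back on top

    for n in (3, 7, 12):
        moves = []
        hanoi(n, "A", "C", "B", moves)
        print(f"{n} disks -> {len(moves)} moves")    # 7, 127, 4095: the chain explodes

Knowing this recipe is not the hard part; executing thousands of moves in order without losing track is, and that is where sustained, error-free performance gave out.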
Meanwhile, research from Scientific Reports confirms that AI "does not possess curiosity and imagination and cannot escape the boundaries of the known hypothesis and experimental spaces." Most importantly, "there is no aha moment for ChatGPT because all the experimental results are expected, and no anomalies are detected."
AI systems cannot experience intellectual surprise, and that capacity for surprise is the foundation of genuine learning.
This pattern reveals why students struggle when they delegate complex thinking to AI. Let's say we assign an open-ended investigation: exploring why plants grow differently in various school locations.
Students can handle simple questions with AI assistance: "What causes plant growth?" They can even manage moderate complexity: "Compare growth rates in different school locations." But present them with ambiguity—"Investigate why plants in identical conditions show different growth patterns, considering multiple interacting variables that you must identify yourself"—and both student thinking and AI assistance break down.
The crucial difference here is that humans can develop judgment about what matters most in this situation. A student might realize they need to focus on soil testing over temperature measurements, or decide that watering patterns reveal more than sunlight calculations. They may even fail a few times. But they learn to make strategic decisions when no single prescribed approach is sufficient.
AI cannot make these judgment calls. Neither can students who've learned to expect algorithmic solutions to complex problems. Research from Pakistan and China involving 285 university students found that AI dependency accounts for "68.9% of laziness in humans" and contributes significantly to the "loss of human decision-making."
Most concerning, when students rely too heavily on AI for ethical guidance, they risk losing the capacity for moral reasoning. They become consumers of pre-packaged analysis rather than developers of independent judgment.
Philosophy students, by contrast, spend years practicing exactly this kind of sustained inquiry. They learn to construct logical arguments, draw subtle distinctions, and follow ideas to conclusions that often surprise them. This is how thinking develops.
Philosophy trains students in something AI fundamentally cannot do—sit with genuine uncertainty. When philosophy students encounter questions about consciousness, free will, or justice, they cannot delegate the thinking to external systems. They must develop their own reasoning frameworks.
This tolerance for uncertainty translates into superior performance. Philosophy students develop what researchers call "habits of mind"—the disposition to "explore topics raised in class on their own" and "evaluate the reliability of information." These are precisely the capacities we stand to lose if we constantly choose AI-generated efficiency.
The Apple study confirms that reasoning models initially increase their "thinking" as problems get harder, but "near this collapse point, they begin reducing their reasoning effort as problem complexity increases." In other words, AI systems don't fail the way humans do. When humans struggle with a difficult problem, they typically try harder and think longer before giving up. These systems seem to "sense" when they're approaching their limits and reduce effort rather than persist.
It's too late to ban AI. But we should recognize what it cannot provide. Cognitive development comes from wrestling with questions that have no easy answers.
I've found success in deliberately creating cognitive dissonance: presenting conflicting expert opinions without revealing which is "correct," or assigning students to argue for positions they initially disagree with. Give students three different historical interpretations of the same event; instead of asking "Which is right?", ask "What different values and assumptions lead to these interpretations?"
This approach mirrors what Socratic questioning achieves in fostering genuine inquiry. Students cannot delegate this intellectual work to AI because the value lies in the thinking process itself, not the final answer.
Educational environments must create spaces where students practice sustained inquiry without algorithmic assistance. This means designing assignments that cannot be delegated to AI, not through technical restrictions, but through a focus on authentic reasoning that requires students to defend their thought processes in real time.
The ancient art of inquiry is the engine of cognitive development. When we outsource this work to systems that cannot experience intellectual surprise, we risk producing people who can think through algorithms but cannot think through problems.
Philosophy majors outperform other students precisely because they practice what AI can't do: thinking without knowing where the thinking will lead.
References
Vázquez, M., & Prinzing, M. (2025). Studying philosophy does make people better thinkers, according to new research on more than 600,000 college grads. The Conversation.
Shojaee, P., Mirzadeh, I., Alizadeh, K., Horton, M., Bengio, S., & Farajtabar, M. (2025). The illusion of thinking: Understanding the strengths and limitations of reasoning models via the lens of problem complexity. Apple Research.
Ding, A. W., & Li, S. (2025). Generative AI lacks the human creativity to achieve scientific discovery from scratch. Scientific Reports, 15, 9587.
Ahmad, S. F., Alam, M. M., & Rehmat, M. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Sciences Communications, 10, 311.
Andrade, M., & Mahinay, R. B. (2025). Socratic questioning in fostering inquiry and critical thinking in classrooms. Educational Research Journal.