According to Futurism, Scott Anthony, a Dartmouth College professor and former McKinsey analyst, says his students are showing “utter terror” toward using AI, not excitement. He specifically notes that their fear goes beyond cheating to a deeper anxiety about losing their critical thinking skills and even their humanity. This contrasts sharply with his tenured colleagues, who are eager to experiment with the latest large language model software. The report highlights an MIT study from earlier this summer that found participants using LLMs for tasks like essay writing incurred a “cognitive cost,” becoming less inclined to critically evaluate the AI’s output. The study’s “brain-only” group, who wrote without AI assistance, not only reported higher satisfaction but also demonstrated higher brain connectivity.
The Professor and the Precariat
Here’s the thing that really sticks out. Anthony points out this stark generational divide: secure professors are playing with the new toy, while students staring down a volatile job market are scared of it. And you can’t really blame them. For a tenured academic, AI is a fascinating tool or research subject. For a 20-year-old, it’s the potential agent of their own obsolescence. The fear isn’t just about getting a job, though. It’s about what happens to you if you let the machine do too much of the thinking. That’s a profoundly human concern, and it’s way more nuanced than just worrying about plagiarism.
What the Science Suggests
So, are the students just being paranoid? The MIT study Anthony references suggests maybe not. The study’s description of LLM use creating an “echo chamber moderated by AI” is chillingly accurate. When the machine hands you a polished, coherent answer, your brain’s incentive to poke holes in it or build a competing argument from scratch plummets. Why wrestle with a complex idea when ChatGPT can hand you a tidy summary? But that wrestling is where real learning and intellectual muscle are built. The study’s kicker? The “brain-only” group was happier with their work, and their brains were more active. There’s a deep satisfaction in genuine creation that outsourcing to an LLM simply can’t provide.
Losing the Taste for Struggle
Basically, this is about meta-cognition—thinking about thinking. The real risk isn’t that AI will write our essays for us. It’s that we’ll forget how to write an essay, or solve a problem, or craft an original thought, because the cognitive friction is gone. We’ll lose our taste for the productive struggle. And look, I use AI tools every day. They’re incredible. But I think the students are onto something crucial. If you don’t consciously guard against it, these tools can make you intellectually lazy. You stop being a critic and start being a curator of AI output. Is that really using technology, or is it letting technology use you?
A Messy but Necessary Conversation
Anthony is right that this period is “very messy.” We’re in the awkward phase where the technology is outpacing our social, educational, and psychological frameworks for dealing with it. The professors’ enthusiasm and the students’ terror are two sides of the same disruptive coin. The challenge for this generation won’t just be competing with AI for jobs. It’ll be competing with AI for their own attention, their own cognitive capacity, and their own sense of intellectual agency. Their fear isn’t a sign of technophobia; it might just be the first sign of a very healthy self-preservation instinct. The question is, what are we going to do about it?