AI ERASING HUMAN MINDS: The Cognitive Collapse of the ChatGPT Generation According to MIT

As students defer to computers, this groundbreaking study reveals why educators are genuinely terrified.

AI Bants
Jun 24, 2025
The New Performance Cheat Code

These days, teachers don’t mark essays—they vet them. “I spend more time detecting AI than teaching” is the new staff-room joke.

By submission five, the pattern is undeniably suspicious: flawless grammar, American spellings—from students spoon-fed British English since kindergarten—recycled adverbs, and a tone that reads more like a PhD student amped up on caffeine than a 15-year-old who thinks ‘syntax’ is a new type of protein supplement.

Fool me once? Maybe it’s rare talent. Fool me five times? That’s a new generation happy to fake it—even while their brains rot from lack of genuine neural activity.

Adios dopamine. Hola everlasting mediocrity. At least for as long as you’ve got ChatGPT at your fingertips.

“The Problem Is: Your Assignment’s Too Good”

Bog-standard plagiarism and paid essays used to be the problem. Today, it’s AI-generated perfection that has teachers questioning the very foundations of the education system.

Now student submissions arrive pre-formatted, curiously polished and with enough hallucinated howlers to leave even the best teachers doubting the future of learning. Because if students are satisfied to defer the very essence of schooling to technology, it begs the question—what’s the fucking point?


The Rapid Rise of AI Co-Dependency

AI adoption has surged dramatically in 2024: general business AI usage reached 65% according to a McKinsey survey, while Copyleaks reported a 76% year-over-year increase in AI-generated student content.

Experts warn we're heading toward what some are calling a 'cognitive dependency crisis.' Educational institutions worldwide are scrambling to rethink their methodologies, and to make learning meaningful enough again so that students don't become passive collaborators in the outsourcing of their own cognition.

So when an average student turns in something indistinguishable from professional-grade copy, the key question isn't how they did it—it's ‘why?’ Is it simply that ChatGPT has opened yet another door to the path of least resistance? At least this would explain the ‘stampede’ and ‘bottlenecks’ that choke OpenAI servers with increasing regularity. In something akin to a sly Faustian pact, the merely curious have graduated to lazy, and the truly lazy have shifted gears to out-and-out AI abuse.

What’s more, the MIT evidence supports these concerns in ways that should alarm every educator.

The Disappearing Act: Reading—and Remembering—on Autopilot

Randomly pick a few students to cross-check points in their recent essays, and a good percentage will give you a perplexed look, eyes glazing over as they futilely attempt to recall what they wrote. Because, for many, the process of ‘study’ and ‘work’ now boils down to this:

Prompt AI → Copy output → Paste output → Submit → Move on

Certainly efficient. And who can blame them? When a machine can whip up a polished answer in less time than it takes to flick to page 35, paragraph 2, why break a sweat digging into the weeds of a topic when there’s so much left to watch on Netflix?

With human thinking so obviously being outsourced, it’s time to assess what the true impacts of cognitive delegation are.

The results? Nothing short of jaw-dropping.

The Neural Evidence

MIT's groundbreaking study did something no one had attempted before—they studied how students think when using AI. Using EEG brain scans, researchers tracked 54 students across multiple writing sessions to see what happened when they relied on different levels of technological support.

Brain-only users: Showed the strongest neural activity—79 distinct brain connections firing simultaneously as students wrestled with ideas, recalled information, and formed arguments, as reported in coverage of the study.

Search engine users: Displayed moderate brain engagement as they supplemented their thinking with external research.

ChatGPT users: Exhibited the weakest brain activity—just 42 neural connections, as reported in coverage of the study.

The more external support students used, the less their brains actually worked. Think of it like muscle atrophy—when you stop using your thinking muscles, they weaken. Researchers observed what they called "cognitive debt"—while students gained immediate convenience, they were inadvertently undermining their long-term intellectual capacity, a bit like maxing out credit cards for instant rewards while bankrupting the future.

Educational Institutions Fight Back

Teachers are morphing from educators into detectives, armed with increasingly sophisticated methods to separate genuine learning from algorithmic shortcuts. The new arsenal includes impromptu oral presentations, real-time follow-up questions, and the deceptively simple challenge: “Walk me through your thinking here.”

Why?

Because nothing exposes cognitive freeloading quite like a student who can’t retrace the path to their own "brilliant" conclusions.

Bespoke prompts are gaining popularity too:

"Describe the mural beside the cafeteria and why Kafka would have hated it."

Try nailing that one without revealing you’re part of a genuine conspiracy, ChatGPT!

Because, while AI can fake a summary (sometimes hallucinating harder than Hunter S. Thompson in Fear and Loathing), it still trips over specifics that only a human observer—preferably one who's gotten lost in the school hallways—could answer.

The New Assessment Arsenal

If students want to play cat and mouse with AI, teachers are proving they're excellent hunters. Educators worldwide are developing ingenious methods to separate genuine learning from automated crutches:

  • Hyper-local prompts: References to specific campus locations, recent school events, or personal experiences.

  • Live presentations: Students must defend their work in real-time Q&As.

  • Process documentation: Requiring drafts, revision notes, and reflection essays.

  • Surprise pivots: Changing the topic mid-assignment to test genuine understanding.

Laugh So You Don’t Cry: Tales from the Modern Classroom

Teaching used to be about inspiring young minds. Now, in a world where education feels like it’s devolving into parody, teachers tired of ending up with the custard pie in their faces have turned detective, taking the most egregious student-AI transgressions into the staffroom for a laugh of their own. Gems like:

Did you hear about...

  • The student citing Shakespeare’s Lost Diaries in his essay?

  • The one who described flipping burgers as strategically facilitating culinary service delivery?

  • The student who left AI-generated footnotes intact—because nothing says “original work” like citing Perplexity?

  • An essay quoting DefinitelyNotFakeNews.com as “hard truth”?

  • The classic case of hypertext markup symbols left in the text—the student failed to delete them, possibly because she didn’t bother to read the output?

  • The one who referred to emptying bins as “executing waste management protocol”?

  • The one who dropped the phrase “paradigm shift” no fewer than five times in a 500-word essay about “a day in my life”?

Turning cognitive decline into moments of mirth might be the best medicine for teachers. But behind the punchlines lies a quiet despair. Not just over cheating—but over what’s really at stake when learning becomes optional and self-deception is just a few clicks away.

The Memory Blackout: When Students Forget Their Own Work

Perhaps the most alarming finding from MIT's research: over 83% of ChatGPT users couldn't accurately quote from essays they had written just minutes earlier. Compare that to brain-only users, where only 11% experienced similar recall difficulties.

This isn't just about essays. Every time we let AI do our creative or analytical work, our brains miss the chance to lay down new neural pathways—making it harder to learn, adapt, and grow.

Perhaps the deeper concern isn’t just that students use AI to repackage what's already online, but that many don’t even read what it spits out. The result is a dire lack of recall and a worrying dependence on machines to do all the work—unchecked and unchallenged. Until it falls under the eyes of a discerning teacher, that is!
