AI Is Making Us Faster Learners and Worse Thinkers at the Same Time

“You can outsource your thinking, but you cannot outsource your understanding.”

— quoted by Andrej Karpathy, Sequoia Ascent 2026 notes

I am an ML engineer. I use Claude every day, for code, for reading papers I would not otherwise have time for, for sanity-checking ideas before they reach a PR. So when I started noticing that AI-assisted understanding was not sticking, I had a problem.

The pattern was consistent. When I let AI explain something without first attempting it myself, the explanation felt complete in the moment and dissolved by the time I needed it. I went looking for research on this and found a well-documented phenomenon, not a personal failure.

The retention gap

A 2025 randomized controlled trial gave two student groups the same material. One studied traditionally; the other used ChatGPT throughout. Six weeks later, on a surprise retention test, the traditional group held a clear lead — about eleven percentage points.

During the sessions themselves, the ChatGPT group produced higher quality work. Faster problem solving. Better outputs. Every immediate metric pointed up. EDUCAUSE's 2025 framework named the effect "better results, worse thinking."

The pattern shows up across the literature. A 2025 Frontiers in Psychology synthesis finds a decline in cognitive abilities, lower retention, and increased cognitive offloading among regular AI users. An MIT Media Lab EEG study shows reduced neural connectivity in memory and creativity networks during ChatGPT-assisted work.

The other side is real too. A Harvard RCT in Scientific Reports (June 2025) found AI tutors outperformed in-class active learning in the short term — students with AI tutors solved novel problems faster and reported higher engagement. Both findings hold simultaneously. AI improves immediate output quality and degrades long-term retention.

Why this happens

Learning that sticks requires what cognitive scientists call desirable difficulties: effortful retrieval, spaced repetition, elaboration, interleaving. The struggle of trying to remember something you half-know, of working through a problem you do not fully understand, of connecting new material to what you already know. That effort is not the cost of learning. It is the learning.

AI removes the struggle. You ask, a fluent answer arrives in seconds, your brain registers the problem as solved, and the reward circuit fires. Nothing was retrieved, elaborated, or connected. You consumed.

The feeling of understanding that follows is the fluency illusion. Your brain mistakes ease of processing for depth of knowledge. It is convincing and almost entirely wrong.

This is not an argument against AI

AI tutoring produces real benefits: faster initial exposure, better personalization, higher short-term performance. The question is whether you are using it in a way that builds capability or one that rents a simulation of it.

What scale looks like

Almost every university student now uses AI for their work, and most US high schoolers do too. Turnitin reports that essays that are mostly AI-generated have multiplied several-fold in two years and now make up a meaningful share of all submissions.

Among working professionals the pattern is different. Millions of mid-career engineers, analysts, and managers are upskilling on AI through self-directed learning on platforms like edX, often because their employers do not provide structured training.

The distinction between students and professionals is not age. It is motivation structure. Students are optimizing for grades, and AI is good at producing grade-worthy work. Professionals are trying to use what they learn, so when AI-assisted understanding breaks down in production, they get a feedback signal and adjust. The student population is largely missing that signal.

What governments are betting on

The institutional response is fragmented.

South Korea committed $740 million to AI teacher training and digital textbook rollouts in schools. China has made AI a mandatory subject for primary and secondary students. Singapore is scaling national AI literacy programs to students and adult learners alike. These are infrastructure-level bets — treating AI as another competence the next generation has to master, not a compliance problem to police.

Other places have moved in the opposite direction. New York has floated legislation to ban most AI in classrooms before high school. The EU AI Act classifies AI that scores exams or steers learning as high-risk and will require mandatory human oversight when its rules take effect in 2027. The patchwork of US state-level K-12 guidance ranges from prohibition to encouragement, with no consistent standard.

Most universities have converged on instructor discretion plus mandatory disclosure. Harvard, Stanford, and Oxford explicitly name generative AI in their integrity policies and treat undisclosed use as cheating. Detection tools struggle with paraphrased and mixed text, so the universities adapting well are redesigning assessment instead: oral exams, observed work, iterative portfolios, post-submission reflections that make thinking visible.

Policy will stay fragmented for years. The cognitive science will not.

How I use AI now

What changed in my own practice is small but consistent.

Attempt before asking. Before bringing AI into a learning task, I spend a fixed amount of time, often 15 to 30 minutes, attempting it on my own. I write down my current understanding, try the problem, notice where I get stuck. Then I bring AI in. The retention difference is large because I am arriving with gaps to fill, not a blank slate to consume into.

AI as adversary, not narrator. "Explain transformer attention to me" produces borrowed understanding. "Here is my current mental model of transformer attention. Where am I wrong?" produces real learning. The prompts that consistently work for me ask the AI to find the strongest argument against an approach I just described, point at what I am probably missing in my understanding of something, give me a problem that would break my current mental model, or tell me what a senior practitioner would know that I do not. They keep the thinking on my side.
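
To make these prompts repeatable rather than improvised, I keep them as templates. A minimal sketch in Python; the template names and wording are my own paraphrase of the prompts above, not anything standardized:

```python
# Adversarial prompt templates: each one hands the AI my thinking and asks it
# to attack, so the retrieval and elaboration stay on my side.
ADVERSARIAL_PROMPTS = {
    "steelman_against": (
        "Here is an approach I just described:\n{text}\n"
        "Give me the strongest argument against it."
    ),
    "find_my_gaps": (
        "Here is my current mental model of {topic}:\n{text}\n"
        "Where am I wrong, and what am I probably missing?"
    ),
    "break_my_model": (
        "Given my mental model of {topic}:\n{text}\n"
        "Give me one problem that this model cannot explain or would get wrong."
    ),
    "senior_delta": (
        "I understand {topic} at the level described here:\n{text}\n"
        "What would a senior practitioner know about it that I do not?"
    ),
}

# Example: fill a template with my own attempt before sending it anywhere.
prompt = ADVERSARIAL_PROMPTS["find_my_gaps"].format(
    topic="transformer attention",
    text="Each token's query is dotted with every key, scaled, softmaxed, "
         "and the resulting weights mix the value vectors.",
)
```

The point of the structure is that `{text}` is always my attempt. If I have nothing to put there, I am not ready to open the conversation.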

Teach it back. After an AI-assisted session, I close the conversation and write what I learned in my own words, to no one. Then I open a new conversation and explain it to the AI from scratch and ask it to find what I got wrong. This is the retrieval-and-elaboration cycle, with AI as a feedback mechanism rather than a narrator.
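
A minimal sketch of that feedback step, assuming the Anthropic Python SDK (`pip install anthropic`) and an API key in the environment; the model name is a placeholder for whatever you have access to:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Written from memory, with the original conversation closed: the retrieval step.
my_explanation = (
    "Attention scores each token pair with a scaled dot product of query "
    "and key vectors, then uses the softmaxed scores to mix the values."
)

# A fresh conversation: the model sees only my explanation, not its earlier one.
response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; substitute any available model
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "I am explaining a concept from memory. List every error, "
            "omission, or imprecision before restating the correct version.\n\n"
            + my_explanation
        ),
    }],
)
print(response.content[0].text)
```

Starting a new conversation matters: in the old one, the model grades its own explanation back to me; in a fresh one, it grades mine.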

Match the tool to the stakes. AI use should decrease as stakes and transfer requirements increase. Where you actually need to perform, you need the capability, not the tool.

| Stage | How I use AI |
| --- | --- |
| First exposure | Freely, for fast mental-model building |
| Practice problems | Attempt independently first; AI only when stuck |
| Real production work | No crutch; this is where encoding happens |
| Retrospective review | Use AI to surface gaps after the fact |

Calibrated skepticism. The dangerous failure mode is not wrong answers, which you can usually check. It is right-sounding answers that are subtly incomplete or misframed for your context, which you cannot distinguish from correct ones unless you already know the material. After every significant AI explanation I ask how I would verify it. If I cannot answer that, I have not understood it. Find the primary source, run the experiment, build the thing.

I am not going to stop using AI. I do not think anyone realistically can or should. But I have stopped treating the feeling of understanding as evidence of understanding. The fluency illusion is strong. The test is whether I can recover, explain, or apply something a week later without help. If I cannot, I borrowed it.

The shortcut and the long game look the same from the outside until you need to use what you learned.

What matters

  1. AI improves short-term output and degrades long-term retention. Both are true at the same time.
  2. The mechanism is the removal of effortful retrieval, the cognitive struggle that builds durable memory.
  3. Attempt before asking. The few minutes of independent attempt before bringing AI in is what makes the explanation stick.
  4. Use AI as an adversary that challenges your understanding, not as a narrator that delivers it.
  5. The fluency illusion: ease of processing is not depth of knowledge. The test is whether you can recover it later without help.