On Thinking with (AI) Machines: Cognitive Debt, Cognitive Surrender and Symbiogenesis
I came across a paper today that I haven’t been able to put down (visual summary here). Published just a few weeks ago by Shaw and Nave from Wharton, it extends Kahneman’s dual-process theory to include AI as a third cognitive system, and the empirical results are unsettling in the best possible way.
Quick recap: Kahneman describes System 1 as fast, automatic, intuitive; the part of you that reads a face or blurts out “10 cents” before doing the algebra. System 2 is slow, deliberate, costly; the part you engage when you actually work through a hard problem, and the part humans systematically avoid out of what he calls “cognitive miserliness” (the tendency to take mental shortcuts and avoid effortful thinking whenever possible). Shaw and Nave argue that AI has become System 3: external to the brain, but functioning as a genuine cognitive prosthetic.
Their key finding is what they call “cognitive surrender”: not using AI as a tool you evaluate, but handing your reasoning over to it entirely. Across 1,372 participants, when the AI was deliberately fed wrong answers, 73.2% followed the error without questioning it; only 19.7% correctly overrode the AI. Worse, those with access to a faulty AI performed below the group with no AI at all (31.5% vs 45.8% accuracy), while reporting 11.7 points more confidence. Time pressure made it worse still, driving accuracy down to 12.1% on faulty trials. Even financial incentives barely moved the needle.
This builds directly on what the MIT Media Lab showed in June 2025: repeated LLM use accumulates “cognitive debt” in the form of measurably weaker neural connectivity, poorer memory retention of your own work, and effects that linger even after you stop using the tool.
Sam Altman himself flagged this at his first Senate hearing (May 2023): “I worry that as the models get better and better, the users can have less and less of their own discriminating thought process.” He framed it as one of his core concerns alongside disinformation and economic disruption. The governance dimension is perhaps the sharpest edge here: as AI systems demonstrably outperform humans on reasoning tasks, there is a real risk that institutions, democracies included, progressively defer to AI not because they are coerced into it, but because they have simply forgotten how to think otherwise. Cognitive surrender at the individual level scales into something much larger.
So is AI really a “System 3”? Without proper counterforces it can quickly become one, and that is precisely what makes it dangerous. By design and by habit, AI should function more like a “System 1.5”: fast and fluent on the surface, but one that a healthy System 2 continuously interrogates rather than passively absorbs. The problem Shaw and Nave document is that the default is the opposite, which immediately raises the harder question: how do we actually fix this? Through interface design, through education, through deliberate friction built into the tools themselves?
What I keep turning over is the deeper frame. I’ll be writing soon about What Is Intelligence? (MIT Press, September 2025) by Blaise Agüera y Arcas, Google’s CTO of Technology and Society. The book defines intelligence as fundamentally predictive: any system, biological or artificial, that models the world in order to anticipate it. But its most powerful argument centres on symbiogenesis: the idea, drawn from evolutionary biology, that complexity does not arise through competition alone but through the merging of formerly separate entities into new cooperative wholes. Mitochondria were once free-living bacteria. Multicellular organisms were once colonies. And now, Agüera y Arcas argues, we are living through the next major transition: human cognition merging with machine cognition into something that reproduces and evolves as a unit. This part of the argument is so self-contained and consequential that it has been published separately as a small book, What Is Life?. Not AI as an external tool, but as something we are metabolising, where the boundary between “me thinking” and “it thinking for me” dissolves progressively, and where, as with all prior symbiogenetic transitions, the individuals that refuse the merger may simply be left behind.
