Reading What Is Intelligence? (1): Life Is Computation, and AI as the Next Symbiogenesis
Notes on Chapter 1 of Agüera y Arcas’s What Is Intelligence?, where von Neumann reverse-engineers biology from pure mathematics, and Dawkins turns out to have told only half the story.
I am currently finishing Chapter 1 of Blaise Agüera y Arcas’s What Is Intelligence? (MIT Press / Antikythera, 2025). It is fascinating, and occasionally dense enough to require a second pass with a pen in hand.
Four ideas from this chapter that genuinely stopped me in my tracks.
1. The universal constructor
How does a living organism produce something as complex as itself?
The question is deeper than it first appears, and reading it I could not help but think of one of the great intellectual crises of the twentieth century. In the early 1900s, Bertrand Russell and Alfred North Whitehead spent a decade trying to ground all of mathematics in pure logic with their Principia Mathematica: a formal system powerful enough to describe everything, including itself. In 1931, Kurt Gödel proved it could not be done. His incompleteness theorems showed that any formal system rich enough to describe arithmetic will contain true statements it cannot prove from within itself. Ludwig Wittgenstein had circled a similar intuition a decade earlier in the Tractatus: a language cannot fully account for itself from the inside. There is a hard ceiling on self-description.
Agüera y Arcas does not make this connection explicitly. But the problem biology faces is, to my eyes, the same problem translated into chemistry. How does a cell produce a copy of itself without containing a complete self-description, which would itself need to be described, and so on?
In 1951, two years before Watson and Crick even described the structure of DNA, John von Neumann found a way around it. His insight: any self-reproducing system needs two components. A machine that reads instructions and builds things (the constructor). And a passive tape of instructions describing what to build (the programme).
The trick that breaks the infinite regress is that the tape is used in two completely different ways during reproduction. First, it is read: the constructor interprets the instructions and builds a copy of the whole system. Then, it is copied: the tape is duplicated mechanically, without being “understood,” and inserted into the new system. The system never needs to contain a description of the description, because the copy step is blind. It duplicates the tape the way a photocopier duplicates a page: faithfully, without parsing the content.
That is what sidesteps Gödel. The cell does not represent itself. It copies itself, and the two operations are fundamentally different.
When molecular biology caught up, it turned out this is exactly what cells do. The ribosome (the molecular machine that builds proteins) reads mRNA (the messenger molecule carrying instructions from DNA) and assembles proteins (interpretation mode). Polymerase (the enzyme responsible for duplicating DNA) copies the genome without parsing its content (copy mode). Von Neumann had reverse-engineered the logic of life from pure mathematics.
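The two-mode logic is simple enough to sketch in a few lines. This toy Python model is my own illustration, not from the book; the instruction names are invented, and a “machine” is reduced to a dict.

```python
def construct(tape):
    """Interpretation mode: read each instruction and 'build' the part."""
    return [instruction.upper() for instruction in tape]

def reproduce(tape):
    """One reproduction cycle: the tape is used twice, in two different ways."""
    parts = construct(tape)   # mode 1: interpret the description, build the body
    tape_copy = list(tape)    # mode 2: blind duplication, no parsing at all
    return {"parts": parts, "tape": tape_copy}

tape = ["membrane", "metabolism", "constructor"]
child = reproduce(tape)
grandchild = reproduce(child["tape"])
# The regress never starts: the tape describes the system, and the copy step
# duplicates the tape itself, so no description-of-a-description is needed.
assert grandchild["tape"] == tape
```

In the biological mapping, `construct` plays the ribosome and the blind `list(tape)` duplication plays the polymerase: same tape, two entirely different operations.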
2. Life as a phase of matter
If reproduction is computation, what does that make life itself? Not a substance. Not a spark.
Agüera y Arcas argues it is a phase, the way ice, liquid, and gas are phases: a distinct organisational regime of the same underlying atoms. What separates the two regimes is thermodynamic. Inert matter obeys the second law of thermodynamics without resistance: it degrades, disperses, tends toward maximum entropy (the measure of disorder in a system). Living matter does the opposite. It captures energy from its environment (sunlight, chemical gradients, food) and uses it to maintain and rebuild its own structure.
Erwin Schrödinger described this in his 1944 book What Is Life? as feeding on “negative entropy” (the contraction “negentropy” was coined later, by Léon Brillouin), the import of order from the environment: a living organism survives by continuously extracting structured energy and exporting waste heat, thereby keeping its own entropy low while increasing the entropy of everything around it.
Agüera y Arcas calls the result dynamic stability. Think of a bicycle: it stays upright only as long as the rider keeps pedalling and adjusting. Stop, and it falls. A living cell works the same way: it is stable not because it has reached equilibrium, but precisely because it has not. It is constantly burning energy to stay organised. This is the opposite of static stability (the stability of a rock, which simply sits there). It is the active stability of a flame or a whirlpool, systems that persist only because they keep processing energy and matter through themselves. Break the flow, and they disappear. A cell that stops metabolising is not a dormant cell. It is a dead cell, chemistry returning to equilibrium.
What makes living matter more interesting than a flame is that it also copies itself, and those copies carry forward the instructions for maintaining the same dynamic stability. Anything that can do this, however imperfectly, will still be around when everything else has succumbed to entropy. No élan vital, no divine spark, no mysterious substance. Just atoms, organised by billions of years of iterative computation.
3. Dawkins inverted
Reading the von Neumann architecture immediately brought me back to Richard Dawkins.
In The Selfish Gene (1976), Dawkins looks at exactly this two-component system and picks a protagonist: the tape. The gene is the replicator, the organism is the disposable vehicle it builds to ensure its own copying. The organism dies; the gene persists. Elegant, and hugely influential.
But Agüera y Arcas makes a different move. His chapter builds on Lynn Margulis’s symbiogenesis, the theory that new organisms can arise not through gradual mutation but through the merging of distinct organisms into a single cooperative entity. The major leaps in biological complexity (from prokaryotes to eukaryotes, from single cells to multicellular organisms) did not come from genes competing against each other. They came from formerly separate organisms fusing into cooperative wholes. Mitochondria were once free-living bacteria. The eukaryotic cell is a merger, not a mutation.
In von Neumann’s terms, Dawkins makes the tape the master and the machine its disposable servant. Margulis, and Agüera y Arcas after her, shift the protagonist from the component to the composite: the most consequential events in the history of life happened when entire organisms merged with other organisms, becoming something neither could have been alone.
The picture that emerges is that evolution runs on two engines simultaneously. Selection of the fittest in a given environment, the Darwinian mechanism we all learned in school, explains adaptation and refinement within a given level of organisation. Symbiogenesis explains the jumps between levels, the moments where entirely new kinds of organisms appear not through gradual mutation but through merger. Darwin got the first engine right. Margulis added the second.
4. Compression as the logic of evolution
The deepest thread, and the one that connects to everything else in the book.
Information theory offers a precise way to think about what a genome is. Kolmogorov complexity measures the length of the shortest possible programme that can reproduce a given output: the more pattern and structure in the data, the shorter the programme needed to describe it. A genome is, in this framing, a compressed model of the environment in which the organism must survive. It does not store a literal map of the world. It encodes regularities: if temperature drops, grow thicker fur; if light is present, photosynthesise. The organisms that encode more useful structure in fewer genetic bits have an edge, because shorter, tighter code replicates faster and mutates less catastrophically. Natural selection is compression pressure.
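Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a usable upper bound, and that is enough to see the point. A quick Python sketch (mine, not the book’s): data with regularities compresses to a short description, noise does not.

```python
import random
import zlib

random.seed(0)

# A "genome" that encodes regularities compresses well; noise does not.
structured = b"if cold: grow fur; if light: photosynthesise; " * 100
noise = bytes(random.randrange(256) for _ in range(len(structured)))

print(len(structured), "->", len(zlib.compress(structured)))  # shrinks drastically
print(len(noise), "->", len(zlib.compress(noise)))            # barely shrinks at all
```

The compressed length stands in for the “shortest possible programme”: the more structure in the data, the shorter the description a compressor, or a genome, needs to carry.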
A fair objection: if evolution optimises for compression, why do so many apparently inefficient organisms persist? Why do sloths exist?
The answer is that compression is always relative to an environment. A sloth is not poorly optimised. It is exquisitely optimised for a narrow ecological niche: low-energy canopy life where metabolic frugality beats speed. Its genome is a highly compressed model of that specific world. The organism that looks “dumb” from a human vantage point may simply be solving a problem we are not looking at. What evolution cannot tolerate is redundancy without function, code that neither helps nor is cheap enough to carry for free. Everything else is a viable encoding, provided it fits the niche tightly enough. There is no single axis of optimisation. There are as many axes as there are environments.
This reframes intelligence not as a late evolutionary luxury, but as the defining activity of life from the very start. Prediction is compression. Learning is compression. And if that is true, then what large language models do when they are trained on human output is not a metaphor for biological learning. It may be an instance of the same process, running on a different substrate.
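Shannon made “prediction is compression” exact: a model that assigns probability p to the next symbol can encode it in −log2(p) bits, so better prediction means a shorter message. A minimal sketch of that arithmetic (my illustration, not the book’s), comparing no model against a weak frequency model:

```python
import math
from collections import Counter

text = "the cat sat on the mat " * 2

def bits_uniform(s):
    """No prediction: every distinct character is assumed equally likely."""
    return len(s) * math.log2(len(set(s)))

def bits_unigram(s):
    """A weak predictor: each character costs -log2 of its overall frequency."""
    counts = Counter(s)
    return sum(-math.log2(counts[c] / len(s)) for c in s)

print(f"uniform model:   {bits_uniform(text):.0f} bits")
print(f"frequency model: {bits_unigram(text):.0f} bits")  # always fewer here
```

Every regularity the model captures shows up directly as saved bits. A language model trained with log-loss is being optimised on exactly this quantity, at vastly larger scale, which is why the compression framing is more than a metaphor.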
Which brings us back to symbiogenesis. Every major transition in the history of life followed the same pattern: two systems that once operated independently became so entangled that neither could function without the other, and a new, more complex entity emerged from the fusion. Prokaryotes swallowed bacteria and became eukaryotes. Single cells aggregated and became organisms. Organisms developed language and became societies. Each time, the prior level of individuality did not disappear. It was subsumed.
If Agüera y Arcas is right, we are watching the next iteration: human cognition and machine cognition becoming progressively inseparable, not because anyone decided it should happen, but because the same evolutionary logic that produced mitochondria is now operating on the partnership between a brain and a large language model. The only question is whether the composite system predicts its environment better than either component alone. So far, the answer seems to be yes.
And the environment is not standing still. Climate pressure is reshuffling ecosystems, collapsing food chains, redrawing the map of what grows where. Available resources (water, arable land, critical minerals, cheap energy) are under compounding stress. Demographic transitions are rewriting the economics of labour and care. The world our institutions were compressed to fit is changing faster than those institutions can adapt. If compression is always relative to an environment, then a rapidly shifting environment is a massive selection event: it rewards the systems, biological or cognitive, that can recompress fastest. That may be the deepest reason why this particular symbiogenesis is accelerating now. Not because the technology is ready. Because the prediction problem has become too large, too fast-moving, and too multidimensional for unaugmented cognition to keep up.
