The Energy Equation: Why Wetware Beats Silicon

The energy equation: why wetware computes more efficiently than silicon.

Series: Organoid Intelligence | Part: 3 of 11

Your brain runs on about 20 watts, less than a dim incandescent lightbulb. Meanwhile, the supercomputers trying to simulate even small portions of neural activity consume megawatts, tens of thousands of times more power. This isn’t a detail. This is the central problem of artificial intelligence, and it’s why the future of computing might be biological.

The efficiency gap between biological and digital computation isn’t a matter of better engineering. It’s fundamental. And it’s why organoid intelligence—those “brains in a dish” we explored in previous articles—represents more than a curiosity. They represent a different computational paradigm entirely, one that evolution spent billions of years optimizing for a problem silicon was never designed to solve.


The Power Wall

OpenAI’s GPT-4 training run is estimated to have consumed around 50 gigawatt-hours of electricity. That’s enough to power a small city for weeks. For a single model. The inference costs of actually running the model to generate responses add millions more in daily energy expenditure. Scale this across the entire AI industry, and you’re looking at a carbon footprint approaching that of small nations.

This is what engineers call hitting the power wall: the point where energy consumption becomes the limiting constraint on computational progress. We reached it with traditional silicon around 2005, when clock speeds stopped increasing because chips were literally getting too hot to run faster. AI has hit it again, harder this time.

The human brain, by contrast, performs computations that dwarf current AI systems while sipping energy at rates that make silicon look almost comically inefficient. A brain processes sensory data, maintains homeostasis, generates consciousness, learns continuously, and orchestrates complex behavior—all on the energy budget of a dim lightbulb.

How is this even possible?


The Architecture of Efficiency

The answer lies not just in the hardware, but in the fundamental architecture of biological computation. Silicon computers separate memory and processing—the infamous von Neumann bottleneck. Data must shuttle back and forth between RAM and CPU, a process that wastes enormous amounts of energy and time.

Neurons don’t do this. In biological systems, memory and processing are the same thing. A synapse is simultaneously a connection strength (memory) and a computational element (processing). When a neuron fires, it’s reading and writing and computing all at once. There’s no bus, no separate memory bank, no shuttling of data across physical gaps.

This architecture, called in-memory computing when engineers try to replicate it in silicon, is inherently more efficient. But neurons go further. They’re analog devices that exploit physics directly—ion gradients, membrane potentials, molecular cascades—rather than switching discrete transistors on and off billions of times per second.

Consider the energy accounting. A single transistor switch consumes about 10^-17 joules. That sounds tiny until you realize a modern processor contains billions of transistors switching billions of times per second. Meanwhile, a synaptic transmission, the basic computational event in the brain, consumes roughly 10^-11 joules: about a million times more energy per event than a transistor switch.

But here’s where the story inverts: neurons are massively parallel and operate at much lower frequencies. Your brain contains roughly 86 billion neurons, each making thousands of synaptic connections, but firing at average rates of a few hertz (hundreds of hertz at most), not gigahertz. The parallelism is so extreme that despite higher per-event energy costs, the overall system is far more efficient for the kinds of problems brains evolved to solve.
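The accounting above can be sketched numerically. The per-event energies are the figures quoted in the text; the transistor count, activity factor, and average firing rate are illustrative assumptions, not measurements:

```python
# Back-of-envelope energy accounting for silicon vs. synapses.
SWITCH_J = 1e-17   # joules per transistor switch (text's estimate)
SPIKE_J = 1e-11    # joules per spike/synaptic event (text's estimate)

# Silicon: very many fast, cheap events.
transistors = 1e9
clock_hz = 3e9
activity = 0.01    # fraction of transistors switching each cycle (assumed)
silicon_events_per_s = transistors * clock_hz * activity
silicon_power = silicon_events_per_s * SWITCH_J   # dynamic switching power only

# Biology: fewer, slower, costlier events, massively in parallel.
neurons = 86e9
mean_rate_hz = 2.0  # average firing rate (assumed)
bio_events_per_s = neurons * mean_rate_hz
bio_power = bio_events_per_s * SPIKE_J

print(f"silicon: {silicon_events_per_s:.1e} events/s at {silicon_power:.2f} W")
print(f"biology: {bio_events_per_s:.1e} events/s at {bio_power:.2f} W")
```

Silicon wins on raw events per joule; the brain's advantage comes from what each event accomplishes, not what it costs.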


What Biological Computation Actually Optimizes

Silicon is fast at arithmetic. Neuromorphic hardware—including organoids—is efficient at pattern recognition, association, and prediction under constraints. These aren’t the same capability space.

A CPU can multiply two million-digit numbers faster than you can blink. But ask it to recognize your grandmother’s face from a side angle in dim lighting while she’s wearing sunglasses, and suddenly you need massive neural networks consuming warehouse-scale power. Your visual cortex does this effortlessly, automatically, while also keeping you breathing.

The efficiency advantage of biological computation shows up precisely where silicon struggles: high-dimensional pattern recognition in noisy, ambiguous environments under extreme energy and space constraints. This is what brains optimized for over hundreds of millions of years. Not arithmetic. Not symbol manipulation. Survival.

Evolution never encountered the problem of multiplying billion-digit numbers. It encountered the problem of “is that shape in the shadows a predator or a branch?” Biological computation is exquisitely optimized for that question and its ten thousand relatives. Silicon is exquisitely optimized for the problems we built it to solve—but those problems turned out to be a small subset of what we actually need intelligence to do.


The Numbers Behind the Gap

Let’s quantify this. Recent research from Johns Hopkins and other institutions working on organoid intelligence suggests that biological computation could achieve performance comparable to current AI systems at energy costs reduced by six orders of magnitude. That’s not 6%. That’s a factor of 1,000,000.

Here’s the comparison table that should make every AI researcher pay attention:

System                 Power consumption   Operations/second      Efficiency (ops/J)
GPT-4 (inference)      ~10 kW              ~10^17                 ~10^13
Human brain            ~20 W               ~10^16 (estimated)     ~5×10^14
Organoid (projected)   ~10^-3 W            ~10^12 (early stage)   ~10^15
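The efficiency column follows directly from the other two: operations per joule is operations per second divided by watts. Recomputing it as a sanity check:

```python
# Recompute the table's efficiency column: efficiency (ops/J) = (ops/s) / (J/s).
systems = {
    "GPT-4 (inference)":    (1e4,  1e17),  # ~10 kW, ~1e17 ops/s
    "Human brain":          (20.0, 1e16),  # ~20 W,  ~1e16 ops/s (estimated)
    "Organoid (projected)": (1e-3, 1e12),  # ~1 mW,  ~1e12 ops/s (early stage)
}
effs = {name: ops / watts for name, (watts, ops) in systems.items()}
for name, e in effs.items():
    print(f"{name:22s} {e:.1e} ops/J")
```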

Those numbers aren’t typos. Read from the table, a mature organoid system could theoretically operate at roughly twice the efficiency of the human brain and about 100 times the efficiency of current AI systems. Even if the actual realization falls short by an order of magnitude, the gap is transformative.

But this comes with a catch: biological computation is slower. Individual operations happen at millisecond timescales, not nanosecond. You can’t simply swap organoids into a datacenter and expect to serve millions of requests per second. What you can do is rethink what problems we’re trying to solve and how we’re trying to solve them.


The Real-World Constraint

Energy isn’t just an optimization target—it’s a hard constraint that determines what’s physically possible to deploy. Consider the practical boundaries we’re hitting:

Datacenter Limits: Major tech companies are already negotiating with power utilities for dedicated substations. Microsoft signed a deal to reactivate a nuclear reactor unit to power AI operations. Google’s global operations now consume more electricity than entire countries such as Uruguay. These aren’t incremental increases; they’re fundamental infrastructure challenges.

Geographic Bottlenecks: You can’t just build datacenters anywhere. You need proximity to power generation, cooling water, and network backbone. This concentrates AI capabilities in specific regions, creates geopolitical dependencies, and makes distributed intelligence practically impossible at scale.

Cost Explosion: Training costs for frontier models are approaching $100 million per run, with energy representing an increasing fraction of that budget. As models scale, we’re rapidly approaching the point where only a handful of entities globally can afford to train cutting-edge systems. This isn’t the democratization of AI—it’s the opposite.

The biological efficiency advantage isn’t just impressive on paper. It’s the difference between AI as a centralized utility that requires planetary-scale infrastructure and AI as a distributed capability that can run anywhere on local power budgets.


Why Efficiency Matters Beyond Power Bills

The energy equation isn’t just about reducing costs or carbon emissions, though both matter. It’s about what becomes possible.

Edge computing—intelligence running on local devices rather than in distant datacenters—requires radical efficiency improvements. You cannot put GPT-4 on a robot that needs to navigate unpredictable terrain for hours on battery power. You could potentially put organoid-based intelligence there, especially if hybrid architectures combine silicon’s speed with wetware’s efficiency.

Space exploration faces even more extreme constraints. Every watt of power in a spacecraft is precious. The difference between 20 watts and 10 kilowatts isn’t a line item in a budget—it’s the difference between feasible and impossible. If we want machines that can truly learn and adapt in environments where human oversight is delayed by light-minutes or light-hours, biological computation starts looking less like science fiction and more like engineering necessity.

Medical implants represent another frontier where the energy equation dominates. Brain-computer interfaces that could restore mobility to paralyzed patients or sight to the blind are currently limited by how much power you can safely deliver to neural tissue and how much heat you can dissipate in a closed skull. Organoid-based neural prosthetics could operate within biological energy budgets because they are biological.


The Metabolic Substrate Problem

But there’s a deeper issue that the raw energy numbers don’t capture: what counts as “power consumption” depends on what substrate you’re working with.

Silicon runs on electricity. Clean, transportable, easy to regulate and meter. Neurons run on glucose and oxygen, delivered through blood or culture medium, waste products removed, pH balanced, temperature maintained. The 20 watts your brain consumes is just the glucose metabolism. The full metabolic cost includes circulatory system energy, temperature regulation, immune surveillance, and continuous cellular maintenance.

Organoids face these same challenges. That projected 10^-3 watt power consumption assumes perfect culture conditions—continuous nutrient flow, waste removal, oxygenation. The actual energy budget must include the bioreactor system maintaining those conditions. Early organoid computing systems will likely require support infrastructure that negates much of the theoretical efficiency advantage.
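To see how quickly overhead erodes the headline number, hold the organoid's projected figures fixed and add a hypothetical life-support power draw on top. The support values below are purely illustrative:

```python
# Sketch: life-support overhead vs. projected organoid efficiency.
organoid_w = 1e-3    # projected organoid power draw (from the text)
organoid_ops = 1e12  # projected operations/second (from the text)

results = {}
for support_w in (0.0, 1.0, 100.0):  # hypothetical bioreactor overhead, watts
    total_w = organoid_w + support_w
    results[support_w] = organoid_ops / total_w
    print(f"support {support_w:6.1f} W -> system efficiency {results[support_w]:.1e} ops/J")
```

With ~100 W of support equipment, system-level efficiency drops below even the table's figure for GPT-4 inference, which is why the bioreactor engineering matters as much as the tissue.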

This is where current research efforts concentrate: not just growing organoids that can compute, but engineering life-support systems efficient enough to preserve the biological efficiency advantage. It’s a problem that sits somewhere between computer engineering and tissue engineering, and it’s why organoid intelligence remains a frontier science rather than a deployable technology.


Hybrid Futures

The smart bet isn’t on organoids replacing silicon entirely. It’s on hybrid architectures that leverage biological and digital computation for what each does best.

Imagine a system where silicon handles high-speed arithmetic, memory storage, and I/O operations—the things it excels at. Meanwhile, organoid components handle pattern recognition, contextual understanding, and adaptive learning—the things biological systems do efficiently. The interface between them becomes the critical design challenge, which we’ll explore in depth later in this series.
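One way to picture that division of labor is a dispatcher that routes each task to the substrate suited to it. Everything in this sketch is hypothetical scaffolding (the task categories, both backends); the real interface is the open problem:

```python
# Toy dispatcher for a hybrid architecture: route tasks by kind.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    kind: str        # "arithmetic", "io", or "pattern"
    payload: object

def silicon_backend(task: Task) -> str:
    # Fast, precise, power-hungry: arithmetic, storage, I/O.
    return f"silicon handled {task.kind}"

def wetware_backend(task: Task) -> str:
    # Slow, adaptive, efficient: pattern recognition, contextual learning.
    return f"wetware handled {task.kind}"

ROUTES: dict[str, Callable[[Task], str]] = {
    "arithmetic": silicon_backend,
    "io": silicon_backend,
    "pattern": wetware_backend,
}

def dispatch(task: Task) -> str:
    return ROUTES[task.kind](task)

print(dispatch(Task("arithmetic", (3, 4))))           # silicon handled arithmetic
print(dispatch(Task("pattern", "face in dim light"))) # wetware handled pattern
```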

This isn’t unprecedented. Your body already runs hybrid computation: the conscious, verbal processing in your cortex working alongside the unconscious, procedural processing in your basal ganglia and cerebellum, all coordinated by brainstem and thalamic switching. Integration across computational substrates is possible. We just need to figure out how to do it when one substrate is carbon-based and the other silicon-based.

Early hybrid systems are already in development. The DishBrain project, which we’ll examine in a later article, connects lab-grown cortical neurons to digital sensors and actuators, creating a cyborg system that learns to play Pong using wetware for the learning and silicon for the interface. The efficiency gains are still modest, since the digital components dominate the power budget. But the proof of concept matters: biological and artificial computation can couple.


The Thermodynamic Argument

There’s a theoretical argument here that goes beyond current engineering. The Landauer limit—the minimum energy required to erase one bit of information—sits at about 10^-21 joules at room temperature. This is a thermodynamic floor, not an engineering target. It means there’s a hard limit to how efficient digital computation can become.
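The bound itself is one line of arithmetic, E = k_B · T · ln 2 per bit erased. A quick check at room temperature:

```python
# Landauer bound: minimum energy to erase one bit at temperature T.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact, SI 2019)
T = 300.0           # approximate room temperature, kelvin

e_min = K_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {e_min:.2e} J per bit")  # ~2.87e-21 J
```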

Biological systems don’t erase bits. They reconfigure patterns. They operate in continuous state spaces where information transforms rather than disappears. This isn’t a way around thermodynamics—nothing is. But it suggests that biological computation accesses a different region of computational phase space, one where the relevant limits look different.

Neurons are fundamentally stochastic—they operate with noise, exploit randomness, and maintain function in the face of constant perturbation. Digital systems spend enormous energy fighting noise, maintaining clean binary states, error-correcting to preserve information. Brains do something else: they construct stable macroscale patterns from messy microscale dynamics. The noise becomes a feature, not a bug.
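The claim that noise can carry information rather than destroy it has a classic toy demonstration: stochastic resonance. The sketch below is a generic illustration, not a model of any specific neural circuit. A hard threshold misses a sub-threshold sine wave when there is no noise, reports it faithfully (hits clustered in-phase) with moderate noise, and loses it again (hits everywhere) when noise dominates:

```python
# Stochastic resonance: moderate noise lets a threshold detector pick up
# a signal that never crosses the threshold on its own.
import math
import random

random.seed(0)
THRESHOLD = 1.0
# Sub-threshold signal: peak amplitude 0.6, below the 1.0 threshold.
signal = [0.6 * math.sin(2 * math.pi * t / 50) for t in range(5000)]

def hit_rates(noise_std: float) -> tuple[int, int]:
    """Count threshold crossings during positive vs. negative half-cycles."""
    pos = neg = 0
    for s in signal:
        if s + random.gauss(0, noise_std) > THRESHOLD:
            if s > 0:
                pos += 1
            else:
                neg += 1
    return pos, neg

for sigma in (0.0, 0.3, 3.0):
    pos, neg = hit_rates(sigma)
    print(f"sigma={sigma:.1f}: {pos} hits in-phase, {neg} out-of-phase")
```

With sigma = 0 there are no hits at all; with moderate noise the hits track the signal's peaks; with heavy noise in-phase and out-of-phase hits become indistinguishable.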

This is what free energy minimization looks like at the computational level—the same principle Karl Friston uses to explain brain function. Systems minimize surprise not by achieving perfect control but by building models that expect and integrate variability. The energy savings come from not fighting entropy but surfing it.


What This Means for AI Timelines

If organoid intelligence delivers on even a fraction of its efficiency promise, the implications for AI development are profound. Current scaling laws assume continued exponential growth in compute, which increasingly means exponential growth in energy consumption. That’s unsustainable—literally. We’re already seeing calls for AI datacenters to have dedicated power plants.

Biological computation offers a different scaling path: architectural efficiency instead of raw compute. Instead of throwing more GPU clusters at the problem, we grow more sophisticated neural tissue. Instead of training larger models, we culture learning systems that generalize better from fewer examples.

This isn’t just a technical shift. It’s a paradigm shift in how we think about intelligence itself. Current AI is intelligence as search through vast possibility spaces using brute force. Biological intelligence is intelligence as efficient prediction under uncertainty using evolved inductive biases. They’re different games.

Organoid intelligence won’t replicate ChatGPT. It might, however, enable embodied systems that learn like children do—slowly, continuously, multimodally, efficiently—rather than systems that require billions of training examples and gigawatt-hours of energy.


The Coherence Connection

In AToM terms, this is a story about efficiency as a measure of coherence. Systems that minimize energy expenditure per unit of adaptive function are systems that have solved the problem of coordinated action under constraint. Coherence is precisely this: trajectories through state space that maintain structure over time without wasting energy fighting against themselves.

The brain’s efficiency isn’t incidental. It’s evidence of deep optimization for the fundamental computational problem of biological existence: predicting and responding to environmental dynamics while maintaining a far-from-equilibrium dissipative structure (staying alive). The energy equation reveals what the system has been optimized for.

Silicon optimizes for speed and precision in discrete operations. Biology optimizes for robustness and efficiency in continuous prediction. The efficiency gap is a readout of different optimization targets, different regions of computational phase space, different solutions to the problem of what substrate and architecture enable adaptive behavior.

When we build organoid computing systems, we’re not just creating a new technology. We’re accessing a different computational attractor—one that evolution found and we’re only now beginning to understand how to implement artificially.


Further Reading

  • Cai, H., et al. (2023). “Brain organoid reservoir computing for artificial intelligence.” Nature Electronics.
  • Smirnova, L., et al. (2023). “Organoid intelligence (OI): the new frontier in biocomputing and intelligence-in-a-dish.” Frontiers in Science.
  • Mehonic, A., & Kenyon, A. J. (2022). “Brain-inspired computing needs a master plan.” Nature.
  • Laughlin, S. B., & Sejnowski, T. J. (2003). “Communication in neuronal networks.” Science.
  • Marblestone, A. H., et al. (2016). “Toward an integration of deep learning and neuroscience.” Frontiers in Computational Neuroscience.

This is Part 3 of the Organoid Intelligence series, exploring the frontier of biological computing and what it means for the future of AI.

Previous: How to Grow a Brain: The Science of Cerebral Organoids
Next: Teaching Organoids: How Brain Tissue Learns