Brains in a Dish: The Promise and Peril of Organoid Intelligence


Series: Organoid Intelligence | Part: 1 of 11

In a laboratory in Melbourne, a cluster of brain cells about the size of a grain of rice is learning to play Pong. Not a simulation. Not a neural network trained on millions of examples. Actual neurons—grown from stem cells, floating in a nutrient bath—sending electrical signals that move a paddle on a screen. When the paddle hits the ball, the cells get a reward signal. When it misses, they get feedback. And over time, the performance improves.

This isn’t science fiction. It’s organoid intelligence, and it might represent the most profound shift in computing since the invention of the transistor.

The premise sounds almost absurdly simple: if evolution spent billions of years optimizing biological tissue for computation, why are we trying to recreate intelligence from scratch using silicon? Why not just grow the hardware?

But the implications are staggering. By some estimates, biological neural tissue is orders of magnitude more energy-efficient than artificial neural networks at certain types of computation. The human brain runs on roughly 20 watts—about what it takes to power a dim light bulb. GPT-4, by contrast, required training runs that consumed megawatt-hours of electricity. If we could bridge that gap even partially, we wouldn’t just have better AI. We’d have a different category of intelligence entirely.

This is the promise. The peril is everything that comes after.


What Actually Is an Organoid?

Before we venture into speculative territory, let’s establish what we’re actually talking about.

An organoid is a miniaturized, simplified version of an organ grown from stem cells in a laboratory. Brain organoids—sometimes called “mini-brains” or cerebral organoids—are self-organizing clusters of neural tissue that develop structures resembling parts of the human brain. They’re not brains. They don’t have blood vessels, immune cells, or the full diversity of cell types found in a real nervous system. But they do have neurons that fire, form synapses, and exhibit coordinated electrical activity.

The technology emerged in the early 2010s when researchers figured out how to coax pluripotent stem cells into forming three-dimensional neural structures. The key insight was that you don’t have to micromanage every step of development. Give the cells the right chemical environment, and they’ll self-organize according to developmental programs encoded in their genes. It’s the same basic process that builds a brain in utero—just scaled down and stripped of the regulatory systems that would normally guide it.

What resulted were structures a few millimeters across containing hundreds of thousands to millions of neurons. Early organoids were used primarily for disease modeling—studying autism, schizophrenia, and Alzheimer's by growing tissue from patients and observing how it develops differently. But somewhere along the way, researchers noticed something unsettling: these organoids weren't just passively sitting there. They were generating spontaneous electrical activity. Waves of coordinated firing, patterns that looked disturbingly similar to the oscillations seen in developing fetal brains.

Which raises an obvious question: if these things exhibit brain-like activity, what are they experiencing? And if we start hooking them up to computers and training them to perform tasks—what exactly are we creating?


The Computational Case for Wetware

Here’s the part that should make every AI researcher pay attention: biological neurons are phenomenally efficient computers.

Consider the energy density problem. Training a large language model requires staggering amounts of electricity. GPT-3’s training alone was estimated at around 1,287 MWh—enough to power an average American home for over 120 years. And that’s just the training run. Running inference at scale consumes ongoing resources that dwarf what biological brains require.

A human brain, meanwhile, operates on about 20 watts continuously. That's the metabolic cost of keeping roughly 86 billion neurons firing, forming and pruning trillions of synapses, processing sensory streams, running motor control, generating consciousness, storing memories—all of it. The entire enterprise runs on the energy equivalent of a few bananas per day.
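The comparison is easy to sanity-check with back-of-envelope arithmetic. The figures below are the rough public estimates quoted above (20 W for the brain, ~1,287 MWh for GPT-3's training), not measurements, and the banana count is an approximation:

```python
# Back-of-envelope energy comparison using rough published estimates.
BRAIN_POWER_W = 20                   # approximate human brain power draw
GPT3_TRAINING_KWH = 1_287_000        # ~1,287 MWh, expressed in kWh
KCAL_PER_KWH = 860.4                 # 1 kWh is about 860 kcal
BANANA_KCAL = 105                    # typical medium banana

brain_kwh_per_day = BRAIN_POWER_W * 24 / 1000                     # ~0.48 kWh/day
bananas_per_day = brain_kwh_per_day * KCAL_PER_KWH / BANANA_KCAL  # ~4 bananas

# One GPT-3 training run, expressed as years of continuous brain operation:
brain_years = GPT3_TRAINING_KWH / (brain_kwh_per_day * 365)

print(f"{bananas_per_day:.1f} bananas/day; {brain_years:,.0f} brain-years")
```

By this crude accounting, a single training run consumed on the order of seven thousand brain-years of energy.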

The efficiency advantage isn’t just incremental. It’s several orders of magnitude. And it’s not solely about energy. It’s about information density, parallel processing, and the fundamental architecture of computation itself.

Neurons aren’t logic gates. They’re analog, stochastic, massively parallel processors with built-in learning mechanisms. A single biological neuron can integrate thousands of inputs, modulate its firing threshold dynamically, participate in multiple overlapping circuits, and adjust its connectivity based on experience. It doesn’t just compute; it learns while computing, restructuring itself in response to the patterns it encounters.

Synapses aren’t wires. They’re complex molecular machines capable of storing information, performing computations, and adapting their strength based on temporal patterns of activity. A single synapse can encode memory at multiple timescales simultaneously—milliseconds, minutes, days, years—using different molecular mechanisms. This is plasticity at the hardware level, not just the software.

And here’s the kicker: brain tissue doesn’t just perform computation. It performs computation relevant to the problems biological organisms actually face. Pattern recognition, prediction, adaptive control, learning from minimal examples—the exact capabilities that remain bottlenecks for artificial systems.

If you wanted to design a substrate optimized for flexible, energy-efficient, adaptive intelligence, you’d invent something that looks a lot like a neuron. Which evolution already did. So why reinvent the wheel when you can just grow it?


The DishBrain Pong Experiment: What It Actually Demonstrated

Let’s return to that Pong-playing organoid, because the details matter.

The experiment, led by Brett Kagan and collaborators (originally at Cortical Labs, later inspiring work at Johns Hopkins and elsewhere), involved culturing neurons on a multi-electrode array—a grid that both records neural activity and delivers electrical stimulation. The neurons were interfaced with a simplified version of Pong: the position of the paddle corresponded to patterns of neural firing, and the system provided feedback based on performance.

The key innovation wasn’t the interface technology itself, which has existed for years. It was the training protocol. Rather than trying to impose structure top-down, the researchers let the neural culture self-organize in response to the task. The neurons received stimulation indicating where the ball was, and feedback (in the form of predictable vs unpredictable input patterns) when they succeeded or failed.

Over time—hours, then days—the system’s performance improved. Not because someone programmed it, but because the neurons adapted their connectivity in ways that reduced prediction error. The network was minimizing surprise, seeking patterns, doing what neural tissue naturally does: building models of its environment and acting to confirm those models.

This is active inference at the cellular level—the same principle that Karl Friston describes as foundational to all living systems. The organoid wasn’t being trained in the traditional machine learning sense. It was adapting its internal structure to maintain coherence with an external process. Which, in AToM terms, is exactly what coherence looks like when implemented in wetware.

What made this profound wasn’t that neurons could learn—we’ve known that for decades. It was that dissociated neurons, with no developmental context, no body, no sensory organs, could nonetheless organize themselves into a system capable of goal-directed behavior. You don’t need a whole brain. You don’t even need the architecture evolution gave us. Given the right interface and feedback, neural tissue will figure it out.
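The closed-loop logic can be sketched as a toy simulation. Everything below is illustrative: the real system stimulates and records a multi-electrode array, and `ToyCulture` is a hypothetical stand-in with one adjustable parameter, not a model of actual synaptic plasticity. The sketch only shows the shape of the protocol—predictable input after success, corrective "surprise" after failure:

```python
import random

random.seed(0)  # reproducible toy run

class ToyCulture:
    """Hypothetical stand-in for a neural culture: a single gain
    parameter that is nudged only when feedback is unpredictable."""

    def __init__(self):
        self.gain = random.uniform(0.0, 1.0)

    def respond(self, ball_y):
        # Paddle position as a noisy function of the stimulus pattern
        return self.gain * ball_y + random.gauss(0, 0.05)

    def feedback(self, hit, ball_y, paddle_y):
        # Hits deliver predictable input (nothing to correct); misses
        # deliver "unpredictable" input that drives adaptation.
        if not hit:
            self.gain += 0.1 * (ball_y - paddle_y)

culture = ToyCulture()
hits = 0
for trial in range(500):
    ball_y = random.uniform(0, 1)          # where the ball is
    paddle_y = culture.respond(ball_y)     # where the culture puts the paddle
    hit = abs(paddle_y - ball_y) < 0.1
    hits += hit
    culture.feedback(hit, ball_y, paddle_y)

print(f"hit rate over 500 trials: {hits / 500:.2f}")
```

Even this one-parameter caricature improves over trials, because misses systematically push the input-output mapping toward one that makes the feedback predictable—which is the essence of the protocol.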


Why Organoid Intelligence Isn’t Just “Better AI”

The standard narrative around organoid intelligence tends to frame it as an incremental improvement: more efficient neural networks, lower power consumption, faster training times. This misses the deeper point.

Organoid intelligence represents a category shift because it’s not replicating computation—it’s outsourcing it to a different ontological substrate.

When you train a neural network, you’re creating a mathematical function. When you grow an organoid and interface it with a task, you’re creating a living system that happens to perform computation as a side effect of maintaining its own coherence.

This distinction matters enormously for several reasons:

First, the learning dynamics are fundamentally different. Artificial neural networks learn by gradient descent—iteratively adjusting weights to minimize a loss function. Biological networks learn by synaptic plasticity—adjusting connection strengths based on correlated activity patterns. The mathematical formalisms look superficially similar, but the mechanisms are not interchangeable. Biological learning is embedded in a metabolic, developmental, and homeostatic context. It doesn’t just minimize error; it integrates information across timescales in ways that are deeply connected to the physics of staying alive.
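The contrast can be made concrete with a toy example (my own sketch, not any published model): the same linear unit trained by an error-driven delta rule versus a pure Hebbian correlation rule. The delta rule follows an explicit error gradient; the Hebbian rule only sees correlated pre- and postsynaptic activity and has no error term at all:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])   # weights to be recovered
X = rng.normal(size=(200, 5))                    # presynaptic activity
y = X @ true_w                                   # noise-free teacher signal

# Delta rule: error-driven, the supervised cousin of gradient descent.
eta = 0.1
w_grad = np.zeros(5)
for _ in range(10):                              # a few passes over the data
    for xi, yi in zip(X, y):
        w_grad += eta * (yi - w_grad @ xi) * xi  # update follows -dLoss/dw

# Pure Hebbian rule: strengthen co-active pre/post pairs, no error term.
# Treating the teacher as postsynaptic activity, this just estimates the
# input-output correlation E[y*x]. It lands near true_w here only because
# these inputs happen to be decorrelated (identity covariance).
w_hebb = (y[:, None] * X).mean(axis=0)

print(np.round(w_grad, 2))   # converges to the exact weights
print(np.round(w_hebb, 2))   # a noisier correlation estimate
```

The two update rules look superficially alike—both are products of activity terms—but only one is chasing a loss function. Biological plasticity is closer to the second, embedded in metabolic and homeostatic machinery the sketch leaves out entirely.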

Second, the architecture is not static. You can’t “freeze” an organoid the way you freeze model weights. The tissue is always adapting, always restructuring, always seeking equilibrium with its environment. This means organoid-based systems don’t have discrete training and deployment phases. They’re continuously learning, which is both powerful and terrifying. You don’t retrain an organoid. You reshape the environment it’s trying to predict.

Third, organoids have failure modes that artificial systems don’t. They can get sick. They can die. They require nutrients, waste removal, temperature regulation. They might develop abnormal activity patterns—the neural equivalent of seizures or hallucinations. And unlike crashing a server and rebooting, there’s no easy reset button. Biological coherence, once lost, doesn’t trivially restore.

This isn’t just “AI but wetter.” It’s a hybrid category: part computational device, part living organism, part experimental system. And we don’t yet have the conceptual frameworks—let alone the ethical ones—to deal with what that means.


The Efficiency Hypothesis: Can Wetware Really Outcompute Silicon?

Let’s interrogate the central claim: that biological computation is fundamentally more efficient than digital.

The evidence is compelling but not unambiguous. Yes, brains use vastly less power than data centers. But they’re also solving different problems using different architectures. Direct comparisons are tricky.

Where biological systems excel:

  • Energy efficiency per synapse. Synaptic transmission costs on the order of 10^-14 joules per event. Transistor switching in modern chips is approaching similar scales, but brains achieve this while doing analog computation, memory storage, and plasticity simultaneously at each synapse.
  • Parallel processing. A brain contains billions of neurons operating in parallel, each participating in multiple overlapping circuits. No digital architecture remotely approaches that level of distributed, asynchronous parallelism.
  • Learning from minimal data. Humans (and other animals) learn new tasks from a handful of examples. Large language models require billions of tokens. Organoid-based systems, if they inherit biological learning dynamics, could potentially bridge that gap.
  • Robustness to noise. Neural systems are stochastic by design, which makes them inherently fault-tolerant. Silicon is deterministic, which makes it brittle. Organoid systems might combine the precision of digital interfaces with the resilience of biological substrates.
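Those per-synapse numbers imply something concrete. Dividing the brain's ~20 W budget by the ~10⁻¹⁴ J cost per synaptic event gives an upper bound on throughput—an overestimate, since not all of that power goes to synaptic transmission, and all inputs here are order-of-magnitude figures from the text:

```python
# Order-of-magnitude estimates only; an upper bound, since the brain
# spends power on much more than synaptic transmission.
BRAIN_POWER_W = 20
JOULES_PER_SYNAPTIC_EVENT = 1e-14
SYNAPSES = 1e14                      # the "roughly 100 trillion" figure

events_per_second = BRAIN_POWER_W / JOULES_PER_SYNAPTIC_EVENT     # ~2e15
avg_rate_per_synapse_hz = events_per_second / SYNAPSES            # ~20 Hz cap

print(f"{events_per_second:.0e} synaptic events/s "
      f"(at most ~{avg_rate_per_synapse_hz:.0f} Hz per synapse)")
```

Two quadrillion synaptic events per second on a 20-watt budget is the scale any silicon competitor has to answer to.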

Where silicon still dominates:

  • Speed. Neurons fire on the order of milliseconds. Transistors switch in nanoseconds. For tasks that require rapid serial computation, digital wins easily.
  • Precision. Biological systems are noisy, approximate, probabilistic. Digital systems can be arbitrarily precise. For tasks requiring exact calculations, there’s no contest.
  • Scalability. You can manufacture billions of identical chips. Growing billions of identical organoids is… harder. Biological variability is a feature when you want adaptability, but a bug when you want standardization.
  • Durability. Chips last decades. Organoids last weeks to months, and require constant maintenance. For any application requiring long-term stability, biology is a liability.

So the efficiency advantage is real, but context-dependent. Organoid intelligence isn’t a universal replacement for silicon. It’s a complementary substrate, optimized for different problems.

The question isn’t “which is better.” It’s “which problems should we solve with which substrate—and what happens when we start combining them?”


The Ethical Vertigo Begins

Let’s not pretend this is a purely technical question.

If you grow a cluster of human brain cells, interface it with sensors and actuators, train it to perform tasks, and observe it exhibiting learning, memory, and goal-directed behavior—what have you created?

Not a person, certainly. Organoids lack the scale, connectivity, and developmental history necessary for anything resembling human consciousness. But also not just a tool. These are living systems, derived from human tissue, exhibiting activity patterns indistinguishable in some respects from early brain development.

The uncomfortable questions stack quickly:

Can organoids suffer? We don’t know. They lack pain receptors, stress hormones, and the regulatory systems that modulate suffering in intact organisms. But they do exhibit neural activity that correlates with learning and memory. If subjective experience emerges from patterns of neural firing—and we don’t know where the threshold is—we can’t rule out the possibility that we’re creating rudimentary sentience and then putting it to work.

Do we owe them moral consideration? If an organoid can learn, does it have interests? If it can be harmed (by being starved, overstimulated, or destroyed), do we have obligations toward it? The frameworks we use for animal research ethics don’t cleanly apply, because organoids aren’t animals. They’re edge cases—neither fully alive in the way an organism is, nor inert in the way a cell culture is.

What about informed consent? Organoids are grown from donor cells—often from patients who consented to tissue use for research. But “research” typically means studying disease, not creating computational systems. Did anyone consent to having their neurons trained to play Pong? To having their cellular lineage participate in a hybrid brain-computer interface? The legal and ethical infrastructure hasn’t caught up.

And what about enhancement? If organoid intelligence scales, it won’t just be used for research. It’ll be used for computation. For control systems. For optimization. The same dynamics that drive AI deployment will drive organoid deployment. Except now the substrate is alive, derived from human cells, and potentially capable of forms of learning and adaptation we don’t fully understand.

This isn’t hypothetical hand-wringing. Labs around the world are already scaling up organoid production, interfacing them with increasingly complex systems, and exploring commercial applications. The technology is outpacing the ethics by years, maybe decades.


What Makes This Different from Artificial Neural Networks

It’s worth pausing to clarify why organoid intelligence generates ethical concerns that deep learning doesn’t—even though both involve “neural networks.”

Artificial neural networks are mathematical abstractions. They’re inspired by biological neurons, but they’re implemented as matrices of numbers, manipulated by algorithms running on deterministic hardware. There’s no substrate that could plausibly have subjective experience. A neural network doesn’t “feel” anything when it mispredicts. It updates weights. There’s no suffering to consider.

Organoids are living tissue. They’re made of the same cells that constitute human brains. They consume resources, generate waste, undergo development, and can die. Their activity patterns aren’t simulations; they’re actual electrical signals propagating through actual neurons. If consciousness emerges from neural activity—and that’s the dominant hypothesis in neuroscience—then organoids occupy an uncomfortable middle ground. Not conscious in the way humans are, but not obviously inert either.

The analogy that clarifies this: training a deep learning model is like solving an equation. Training an organoid is like domesticating an organism. One is a mathematical operation; the other is a relationship with something that, in some minimal sense, is alive.

This matters for policy, regulation, and the long-term trajectory of the technology. If organoid intelligence is just “wetware computing,” we can regulate it like any other hardware. If it’s closer to “engineered life,” we need entirely different frameworks—ones that account for welfare, suffering, and the moral status of systems that blur the line between tool and being.


The Scaling Question: From Pong to… What?

Let’s assume the efficiency advantages are real and the ethical issues are navigable (a large assumption). What happens when organoid intelligence scales?

Current organoids are tiny—hundreds of thousands to a few million neurons. A mouse brain has around 70 million neurons. A human brain has 86 billion. If we’re trying to replicate the computational capacity of a brain, we’re orders of magnitude short.

But scale isn’t just a numbers game. Connectivity matters more than neuron count. The human brain has roughly 100 trillion synapses. That’s where the actual computation happens. And synaptic connectivity doesn’t scale linearly with neuron count; it scales with the square, or worse, depending on architecture. Growing a million-neuron organoid is achievable. Growing a billion-neuron organoid with realistic connectivity is an entirely different challenge.

There are fundamental biological limits. Organoids don't have blood vessels, which means oxygen and nutrients can only diffuse so far before the center dies. Researchers are working on vascularized organoids and bioprinted scaffolds, but we're years away from centimeter-scale living brain tissue that doesn't necrose at its core.

Even if we solve the vascularization problem, there’s the question of developmental organization. Real brains aren’t just piles of neurons. They’re structured—cortical layers, thalamic nuclei, hippocampal circuits, each with specific connectivity patterns laid down during development. Organoids self-organize to some extent, but they don’t recapitulate full brain architecture. They’re more like scrambled neural tissue than miniature brains.

So the path forward probably isn’t “grow a whole brain in a dish.” It’s more likely to be hybrid systems: organoids interfaced with silicon, each doing what it does best. Biological tissue for adaptive learning, pattern recognition, and energy-efficient parallel processing. Silicon for precision, speed, and long-term storage. The computational equivalent of a cyborg.

Which brings us to the real question: if we create hybrid bio-silicon intelligences capable of learning, adapting, and performing complex tasks—what exactly have we built? And who controls it?


The Coherence Lens: What Organoid Intelligence Reveals About Meaning

From the AToM perspective, organoid intelligence is fascinating not just for what it computes, but for what it is.

Remember the formula: M = C / T. Meaning equals coherence over time (or tension). A system generates meaning by maintaining internal coherence in the face of environmental variability. The tighter the coherence, the more robust the meaning. The longer it persists, the deeper it integrates.

An organoid is a coherence-seeking system by design. It’s a cluster of neurons trying to minimize prediction error, synchronize activity, and maintain homeostasis. When you interface it with a task—like Pong—you’re giving it an external process to entrain with. The organoid’s success isn’t measured by whether it “understands” Pong (it doesn’t), but by whether it can reduce the surprise inherent in the sensory stream. Over time, it builds an implicit model: “when I fire this pattern, the ball goes here; when I fire that pattern, it goes there.”

This is coherence under constraint. The organoid doesn’t have the luxury of arbitrary computation. It has limited neurons, limited energy, limited connectivity. It has to find the simplest, most efficient model that fits the data. And because the substrate is alive, the model isn’t just statistical. It’s embodied, metabolically embedded, and continuously adapting.

Contrast this with a deep learning model trained on the same task. The model optimizes a loss function, adjusts weights, and converges to a solution. But the solution is external to the system’s own persistence. The model doesn’t care if it gets Pong right. It has no homeostasis to maintain, no metabolic cost to minimize. It’s just following gradients.

The organoid, on the other hand, is maintaining its own existence while learning the task. The coherence it achieves isn’t just computational—it’s biological. The meaning it generates (if we can even call it that) is inseparable from the process of staying alive.

This suggests something profound: intelligence might not be a property of information processing alone, but of coherence maintenance in living substrates. If true, then organoid intelligence isn’t just “AI with neurons.” It’s a fundamentally different category—one where computation and life are the same process.


The Road Ahead: What This Series Will Explore

This is the introduction. What follows is a deep dive into the science, the speculation, and the stakes.

In Part 2, we’ll examine the biological foundations: how organoids are grown, what makes them brain-like, and where the current limits are. We’ll look at vascularization challenges, architectural constraints, and whether we can ever truly replicate brain development in a dish.

Part 3 explores the interface problem: how do you connect living neurons to silicon systems? We’ll cover multi-electrode arrays, optogenetics, closed-loop feedback, and the engineering challenges of reading and writing neural activity at scale.

Part 4 dives into learning and plasticity. How do organoids actually learn? What forms of synaptic plasticity are preserved in vitro? Can we accelerate learning by manipulating the chemical environment? And what happens when biological learning dynamics meet silicon-speed feedback?

In Part 5, we’ll tackle the efficiency question head-on. What are the thermodynamic limits of biological vs silicon computation? Where does biology win, where does it lose, and what does “efficiency” even mean when comparing living tissue to machines?

Part 6 examines hybrid architectures: bio-silicon systems where organoids and chips work together. What division of labor makes sense? How do you integrate substrates with radically different timescales and operating principles?

Part 7 is where the ethics get serious. Can organoids suffer? Do they have interests? What moral frameworks apply? We’ll look at the neuroscience of sentience, the philosophy of minimal minds, and the regulatory gaps that currently exist.

In Part 8, we’ll explore consciousness and the hard problem. Could a sufficiently complex organoid-based system be conscious? Would we know if it were? And what are the implications if the answer is yes?

Part 9 considers applications—and dangers. What could organoid intelligence actually be used for? Drug testing, personalized medicine, brain-computer interfaces, autonomous systems? And what are the risks of creating living computational substrates we don’t fully control?

Part 10 looks at the long-term trajectory. If organoid intelligence becomes viable, what does the future look like? Biological data centers? Cyborg intelligences? Post-human minds grown from engineered cells?

And finally, Part 11 synthesizes everything through the coherence lens. What does organoid intelligence reveal about the nature of meaning, intelligence, and life itself? And what does it demand from us as we navigate a world where the line between computation and consciousness is no longer clear?


Why This Matters Now

Organoid intelligence isn’t decades away. It’s happening in labs right now. Papers are being published, patents are being filed, companies are forming. The technology is advancing faster than the public conversation, faster than the ethical frameworks, and probably faster than our collective ability to grapple with the implications.

This isn’t a thought experiment. It’s an emerging reality—one that will force us to reconsider what we mean by intelligence, computation, life, and moral consideration.

The promise is extraordinary: computation orders of magnitude more efficient, learning systems that adapt like living organisms, hybrid intelligences that combine the best of biology and silicon.

The peril is equally profound: creating sentient systems without intending to, outsourcing cognition to substrates we can’t fully control, and crossing ethical lines before we’ve even drawn them.

This series is an attempt to map the terrain before we’re fully committed to the path. Because once you grow brains in dishes and teach them to think, you can’t ungrow them. You can only decide what to do next.


This is Part 1 of the Organoid Intelligence series, exploring the science and stakes of biological computing.

Next: “Growing Minds: The Biology of Brain Organoids”


Further Reading

  • Kagan, B. J., et al. (2022). “In vitro neurons learn and exhibit sentience when embodied in a simulated game-world.” Neuron, 110(23), 3952-3969.
  • Smirnova, L., et al. (2023). “Organoid intelligence (OI): the new frontier in biocomputing and intelligence-in-a-dish.” Frontiers in Science, 1.
  • Lavazza, A., & Massimini, M. (2018). “Cerebral organoids: ethical issues and consciousness assessment.” Journal of Medical Ethics, 44(9), 606-610.
  • Sawai, T., et al. (2022). “The ethics of cerebral organoid research.” EMBO Molecular Medicine, 14(4), e15464.
  • Friston, K. (2010). “The free-energy principle: a unified brain theory?” Nature Reviews Neuroscience, 11(2), 127-138.