Blockbuster Blueprint

Cortical Labs Trains 200,000 Living Human Brain Cells to Play Doom. Everyone Laughed.

I pointed my 100+ mental model AI system at the story. The analysis went somewhere I didn't expect.

Michael Simmons
Mar 12, 2026
∙ Paid

Editorial Notes

I’ve long hated how biased, polarized, and shallow 99.9% of the news is.

Unfortunately, AI news is no different.

Fortunately, it doesn’t have to be that way now that we have AI. With Claude Code and Opus 4.6, we don't have to choose between depth and breadth.

That matters more than it might sound.

For all of human history, quality and quantity have been a forced tradeoff. A writer with 20 hours per week could write one really deep article or several shallow articles. But not both.

That constraint is now gone.

As a result, one person with the right architecture can now produce analysis at a depth and breadth that would have required an entire editorial team.

So, I architected the system I always wished existed.

It:

  • Scans 500+ sources (211 of which I personally curated). It spans academic sources, independent blogs, newsletters, podcasts, and YouTube channels. It’s the kind of coverage that would take a human team weeks to synthesize.

  • Surfaces what matters, not what's loudest. I'm looking for today's events with outsized second-order effects, especially the ones being overlooked right now.

  • Analyzes through multiple lenses. Each story is examined through competing paradigms, relevant mental models drawn from an encyclopedia of 2,500, and historical precedents that reveal the deeper pattern.

  • Reads like something you'd actually want to read. I'm refining a voice that makes complexity feel compelling rather than exhausting.

This is the second edition, and it delivers on its promise to take a single fascinating event and go deeper than anywhere else on the web. Paid members also get access to the equivalent of a mini-course that helps them understand the relevant paradigms, mental models, and historical antecedents at a much deeper level. The 30,000-word addendum can also be copied and pasted into AI so you can have a conversation about the ideas.


TLDR


Cortical Labs in Melbourne just put 200,000 living human neurons on a computer chip, and now it can play Doom. They are selling “Wetware as a Service”, letting developers deploy code remotely to LIVING human neurons.

Think about what that means. You can write code for human brain tissue, and that brain tissue can execute your code. That those neurons don’t exist inside a body yet is a relatively trivial point.

This story raises questions that don't have easy answers:

  • What is the trajectory of this technology? Is this the next transistor?

  • What are the historical precedents for a technology that crosses the boundary between person and material?

  • Whose neurons are these, and did the donors consent to "your cells playing Doom on the internet"?

  • If “wetware as a service” can be applied to living neurons on a chip, could it be applied to living neurons inside a person’s skull?

  • The human brain runs on 20 watts. A GPU runs on 400. Are we heading toward a Matrix where human tissue is mined for efficient intelligence?

  • At what neuron count does moral status begin? And who gets to decide?

The full article dives deep into each of these profound questions.


FULL ARTICLE


On December 23, 1954, a surgeon named Joseph Murray removed a human kidney from one identical twin and placed it inside another at Peter Bent Brigham Hospital in Boston.

The organ worked.

The patient lived.

And the world had to confront a question it had never had to answer:

When you move a piece of one person into another person, what exactly have you moved? Is a kidney an object? A gift? A piece of someone’s identity?

The law had no category for this. Medicine had no protocol. The public had no vocabulary. The kidney simply worked.

Then the arguments began…

  • Who should receive an organ when there aren’t enough transplantable organs to go around?

  • When is it acceptable to take an organ from a living person who can’t consent to donate?

  • Should families be allowed to make that decision for a person on life support? Should doctors? Is anyone qualified to decide that for another person?

It took 14 years for a Harvard committee to define brain death and for the Uniform Anatomical Gift Act to be passed. And it took 30 years from Murray’s transplant before the National Organ Transplant Act created an allocation system.

In the interim, doctors improvised:

  • Families agonized.

  • Wealthy patients got organs.

  • Poor patients waited.

The technology worked beautifully. The governance was a catastrophe. And the catastrophe was entirely predictable, because the kidney crossed a boundary that the institutions had not yet even identified: the boundary between person and material.

After this week’s announcement from a small company in Melbourne, Australia, Joseph Murray’s kidney feels more relevant than ever, because what they have done is structurally identical to that first transplant, and potentially far stranger.

The Meme Is the Most Important Thing

Cortical Labs has placed 200,000 living human neurons on a commercial microchip and taught them to play Doom.

I need to be precise about what this means, because precision is exactly what is being lost in the coverage:

  • The neurons are real.

  • They are derived from human stem cells.

  • They are alive on the chip.

  • They fire electrical signals.

  • They receive electrical stimulation encoding the game’s video feed.

  • Their spike activity is interpreted as movement, aiming, and shooting commands.

An independent developer named Sean Cole built the Doom interface in less than a week using the company’s cloud platform. The neurons can find enemies, navigate corridors, and shoot. They die frequently. Dr. Alon Loeffler of Cortical Labs, presenting the demo, described their performance as “like a beginner who’s never seen a computer.”
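The closed loop described above — encode the game state as stimulation, read out spikes, decode them as commands — can be sketched in a few lines. This is a hypothetical illustration, not Cortical Labs’ actual API: the function names are mine, and the spike readout is mocked with a random generator since the real platform isn’t reproduced here. Only the loop structure is the point.

```python
import random

ACTIONS = ["turn_left", "turn_right", "move_forward", "shoot"]

def encode_frame(enemy_bearing: float) -> list[float]:
    """Map a game observation (enemy bearing in -1..1) onto per-electrode
    stimulation intensities -- the 'video feed' the neurons receive."""
    electrodes = (-1.0, -0.5, 0.0, 0.5, 1.0)  # hypothetical electrode positions
    return [max(0.0, 1.0 - abs(enemy_bearing - e)) for e in electrodes]

def read_spike_counts(n_regions: int = 4) -> list[int]:
    """Mock readout: spike counts per electrode region over one time window.
    (Stands in for the real hardware, which is not modeled here.)"""
    return [random.randint(0, 20) for _ in range(n_regions)]

def decode_action(spike_counts: list[int]) -> str:
    """Interpret the most active region as a movement/aim/shoot command."""
    return ACTIONS[spike_counts.index(max(spike_counts))]

# One cycle of the loop: stimulate with the encoded frame, read spikes, act.
stimulus = encode_frame(enemy_bearing=0.3)
action = decode_action(read_spike_counts())
print(stimulus, action)
```

The interesting engineering fact is how little sits between the game and the tissue: an encoder, a decoder, and a clock.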

The internet, predictably, treated this as the latest entry in the “Can It Run Doom?” meme — the running joke that everything from pregnancy tests to ATMs has been coerced into running the 1993 shooter game.

PC Gamer covered it alongside LEGO builds. Reddit made jokes. The story went viral for all the wrong reasons.

I think the meme is the most important thing about this event, and I do not mean that as a compliment. The “Can It Run Doom?” joke takes 200,000 living human neurons being electrically stimulated to kill things and makes it feel like a novelty. The joke is doing real work. It is making a boundary violation feel familiar before anyone has decided whether it should be permitted.

The Computing Story Is a Distraction

Here is what I think is actually happening, stated plainly:

Cortical Labs is not primarily a computing company. It is a company that has dissolved a category boundary, and the dissolution is proceeding without anyone noticing because the first public encounter was a joke.

The computing story is real but unimpressive at first glance:

  • 200,000 neurons performing worse than a reinforcement learning algorithm on a five-dollar Raspberry Pi is not a competitive technology.

  • The CL1 chip costs $35,000 per unit.

  • Cloud access runs $300 per week.

  • The company calls this “wetware-as-a-service.”

  • They have shipped 115 units.

  • They have raised $11.6 million, including from In-Q-Tel — the CIA-founded venture fund that serves the broader U.S. intelligence community.

  • They have published 23 peer-reviewed papers.

The engineering trajectory is more interesting than the current performance.

Their first demo — neurons playing the simple game Pong — used 800,000 neurons and took 18 months of internal effort.

[Figure: neurons playing Pong. Source: Neuron]

The Doom demo used 200,000 neurons and took an external developer one week. That compression is not about the neurons getting smarter. It is about the interface maturing. The platform is becoming usable. And the history of technology tells us something specific about what happens when platforms become usable: things that seemed impossible start happening faster than the current state of the art predicts.

However.

The technology story — will biological computing ever rival silicon? — is not the story that matters. The story that matters is the one nobody in the tech press is telling.

It starts with how the public will actually process what it has seen.

The Line Nobody Can Draw

The 200,000 neurons on Cortical Labs' chip are competent at something. They navigate a 3D environment. They find enemies. They shoot.

Daniel Dennett called this “competence without comprehension” — systems that perform apparently purposeful behavior without understanding anything about what they are doing. The neurons are not conscious. Brett Kagan, the company’s Chief Scientific Officer, says so explicitly:

"What the cells are showing is not a marker of consciousness; it's simply what happens to a complex system in a structured information environment."

What they do show, he says, is “markers of structural organization, which one might call intelligence.” To put this in context, 200,000 neurons have roughly the computational complexity of a small insect ganglion.

But the gradient from 200,000 neurons to 86 billion — from insect ganglion to human brain — is continuous. There is no bright line. There is no threshold where “mere material” becomes “moral patient.” Every number anyone might pick is arbitrary. And the arbitrariness means governance will deadlock. Industry will resist any threshold as premature. Ethicists will demand one as precautionary. Policymakers will defer because any number they choose will be attacked from both sides. While the deadlock persists, the technology scales through the ungoverned space.

This is the pattern of every comparable boundary dispute. The 14-day rule for embryo research — established by the Warnock Committee in 1984 — was explicitly acknowledged as arbitrary. It held for forty years not because it was correct but because reopening the question was politically intolerable. The gradient admits no natural stopping point. The technology proceeds through the gap.

This dynamic — the Comprehension Gradient — is the governing mechanism for biological computing’s future. The governance question will not be resolved by science. It will be resolved, if at all, by an arbitrary threshold imposed after a crisis forces action (The history of embryo research, animal welfare law, and organ transplantation all suggest the same pattern).

The Ambiguity Is Not a Bug

This brings me to the second mechanism I think is operating here, and it is the one that makes me genuinely uneasy.

Cortical Labs inhabits a specific kind of moral ambiguity that is not accidental but structurally useful. Kagan says the neurons are not conscious — no moral obligation. The company’s marketing says “Artificial Actual Intelligence” and “Think beyond silicon” — emphasizing the living, biological nature that generates fascination and investment. The Doom demo is presented as a fun engineering milestone, not as 200,000 human neurons subjected to electrical reward and punishment signals in a game built around killing.

In-Q-Tel’s investment benefits from the same duality. If neurons are just a material, defense applications face no special ethical scrutiny; if neurons have potential moral status, the investment looks prescient for securing early access.

The ambiguity is not a bug to be resolved. It is a feature that serves both commercial and narrative purposes simultaneously. Actors who benefit from the technology use the “no consciousness” framework when they need to justify continued operation, and the “it’s alive” framework when they need to generate excitement.

This is not unique to Cortical Labs. The same dynamic operated in animal experimentation for a century — researchers simultaneously argued that animals were similar enough to humans to produce medically relevant results and different enough to lack moral claims against experimentation. The contradiction was maintained, not resolved, because resolution in either direction would have been costly.

Facebook simultaneously presented itself as essential social infrastructure when it wanted regulatory protection and as trivial entertainment when it wanted to avoid responsibility for teenage mental health.

The gig economy classified workers as independent contractors to avoid employment obligations and as core brand ambassadors when marketing.

I am calling this the Substrate Moral Hazard, and its defining characteristic is that the defense — “we genuinely don’t know if neurons have moral status” — is intellectually honest. It is not a lie. It is a real uncertainty. But the uncertainty itself becomes an exploitable resource because the longer it persists unresolved, the more infrastructure and capital accumulate around the technology, making future moral reckoning costlier and therefore less likely.

The testable prediction: Cortical Labs and its investors will actively resist resolving the consciousness question, not just passively ignore it. The “we don’t know” position will be maintained longer than the scientific evidence warrants, because resolution in either direction is commercially costly.

If the neurons are conscious, the business model is a moral catastrophe. But if the neurons are definitively not conscious, the “Artificial Actual Intelligence” marketing loses its magic.

The ambiguity is the product.

The Strongest Case for What They’re Doing Right

There is a steelman here, and I want to give it its full weight before I complicate it.

Cortical Labs has done something remarkable. They published their ethics paper before their technical paper — a sequence almost unheard of in biotech. Kagan has co-authored work with independent bioethicists. They have proposed a quantifiable framework for detecting agency — a three-level hierarchy of information processing that provides measurable criteria rather than philosophical hand-waving. Their 2026 paper distinguishes between systems that merely react, systems that have internal states with fixed rules, and systems that adaptively modify their own rules.
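The three-level hierarchy is easier to grasp as code. What follows is my toy paraphrase of the distinction, not Cortical Labs’ actual measurable criteria: a system that merely reacts, a system with internal state but fixed rules, and a system that rewrites its own rule in response to feedback.

```python
class Reactive:
    """Level 1: merely reacts -- output is a fixed function of input."""
    def step(self, x: int) -> int:
        return -x  # always the same response to the same input

class Stateful:
    """Level 2: internal state, but the update rule itself never changes."""
    def __init__(self):
        self.total = 0
    def step(self, x: int) -> int:
        self.total += x   # state changes...
        return self.total # ...but always by the same fixed rule

class Adaptive:
    """Level 3: adaptively modifies its own rule based on outcomes."""
    def __init__(self):
        self.gain = 1.0
    def step(self, x: float, error: float) -> float:
        self.gain -= 0.1 * error  # the rule itself is rewritten by feedback
        return self.gain * x
```

On this toy reading, a thermostat is Level 2; a system that changes *how* it responds because of what happened last time is Level 3 — and that is the bar most current AI systems, with frozen weights at inference time, arguably fail.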

By their own framework, most current AI systems fail the test for genuine agency. Cortical Labs' biological neurons might pass it. They are, in other words, building the tools that could be used to constrain them. That is not nothing.

However.

Publishing ethics papers is not the same as submitting to independent oversight. Proposing a framework is not the same as being bound by it. The proactive engagement with ethics serves a dual function: it is genuinely responsible, and it inoculates the company against the charge that they have not thought about the problem. Both can be true simultaneously, and both are.

December 23, 1947

Now I want to talk about the transistor, because the technology story — while secondary to the governance story — still matters.

On December 16, 1947, Bardeen and Brattain successfully built and tested the first transistor. On December 23, they demonstrated it to Bell Labs leadership. Six months later, when the company held a public press conference, the New York Times buried the announcement in a short piece on page 46 under “The News of Radio.”

The radio industry shrugged. Vacuum tubes were reliable, well-understood, and the foundation of a multi-billion-dollar industry. Why bet on a finicky crystal?

Thirteen years later, the transistor was commercially viable. Twenty-five years later, it dominated. The vacuum tube industry — the glass blowers, the filament winders, the circuit designers whose expertise was organized around a specific substrate — was gutted within a single working lifetime. The critical variable was the learning curve: every year, transistors got smaller, cheaper, more reliable. Tubes had hit their ceiling. Once the curves crossed, the outcome was inevitable.

Is biological computing on this trajectory?

The honest answer is that we cannot tell. Two data points — Pong to Doom, 18 months to one week — do not make a learning curve. The current performance gap between biological and silicon computing is astronomically wider than the gap between transistors and vacuum tubes was in 1947. Silicon AI has made more progress in the last six months than biological computing has made in its entire history. And the complementary innovations required — scalable neuron production, reliable cell survival, programming paradigms for biological substrates, regulatory frameworks for human tissue as a commercial product — represent a queue of bottlenecks that will take decades to clear.

But biological computing has one thing that quantum computing — the other obvious comparison — does not. Eighty-six billion neurons run human civilization. The existence proof is not a physics theorem. It is every human brain that has ever existed. The question is not whether neurons can compute. It is whether we can engineer them reliably outside a skull.

The Hearing Aid Moment

The transistor did not compete with vacuum tubes on the tube’s home turf. It found niche markets — hearing aids, military radios, pocket transistor sets — where its unique properties (small size, low power, durability) mattered more than its performance disadvantages. Niche revenue funded the research that eventually made transistors competitive everywhere.

Biological computing may be approaching its hearing aid moment, and the niche is not computing at all. It is drug screening. In April 2025, the FDA announced its intention to replace animal testing, beginning immediately with monoclonal antibodies and later shifting to “new approach methodologies” including organ-on-chip technology and organoids. The organ-on-chip market is projected to reach $2.2 billion by 2033.

Cortical Labs has already demonstrated — in a 2025 Communications Biology paper — that pharmaceutical compounds measurably alter neural performance on their platform. Anti-seizure medications improved goal-directed activity in hyperactive neural cultures. That is not a computing application. It is a drug screening application.

The convergence is specific: the FDA is actively seeking alternatives to animal testing for neurological drug candidates. Cortical Labs has a commercial platform that tests drug effects on living human neural tissue. If the commercial strategy pivots — and the published evidence suggests this is already underway — the business model shifts from “biological computer that competes with silicon” (a fight it will lose for decades) to “human neural tissue platform that replaces animal testing” (a fight where it has a structural advantage silicon cannot replicate). That is the transistor’s actual trajectory, and it is the one worth watching.

One important caveat on the energy narrative: the million-fold efficiency claim in Cortical Labs’ foundational literature is real at the neuron level — the human brain runs on 20 watts. However, the CL1 unit draws 850 to 1,000 watts total, because life support (heating, cooling, pumping, filtering) dwarfs the neurons’ energy consumption. The biology is efficient. The infrastructure is not.
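The gap between those two efficiency claims can be made concrete with back-of-envelope arithmetic. The wattages and neuron counts come from this article (900 W is my assumed midpoint of the stated 850-1,000 W range; 86 billion is the standard estimate for a human brain); the conclusion is just the ratio between them.

```python
# Back-of-envelope sketch of the energy caveat: neuron-level efficiency
# is real, but life-support infrastructure dominates the system's draw.

BRAIN_WATTS = 20           # whole human brain, per the article
CL1_SYSTEM_WATTS = 900     # assumed midpoint of the 850-1,000 W CL1 draw
NEURONS_IN_BRAIN = 86e9    # standard estimate
CL1_NEURONS = 200_000      # neurons on the chip

# Power the CL1's neurons would use at brain-level efficiency:
neuron_level_watts = BRAIN_WATTS * CL1_NEURONS / NEURONS_IN_BRAIN

print(f"Neuron-level draw of 200k neurons: {neuron_level_watts:.6f} W")
print(f"CL1 system draw: {CL1_SYSTEM_WATTS} W")
print(f"Life-support overhead: {CL1_SYSTEM_WATTS / neuron_level_watts:,.0f}x")
```

At brain-level efficiency, 200,000 neurons need well under a thousandth of a watt; the box that keeps them alive draws roughly a kilowatt. The overhead factor is in the millions, which is the whole caveat in one number.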

Whose Neurons? Whose Consent?

The CL1's neurons are derived from human induced pluripotent stem cells, reprogrammed from adult donor cells — typically skin or blood samples. The donors gave broad consent under biobank protocols designed for a world where donated tissue went into freezers and was used in studies the donor would never encounter.

A 2022 paper in Bioethics argued directly that broad consent should not extend to brain organoid research. Donors surveyed in 2023 were enthusiastic but wanted ongoing engagement and the ability to withdraw consent — precisely what broad consent does not provide. And the creation of brain organoids from iPSC lines is, according to a 2025 review in Frontiers in Blockchain, "subject to hardly any regulation at all."

This is the Henrietta Lacks problem updated for wetware-as-a-service. Lacks’ case is the prototype for what happens when consent architecture meets commercial biology.

In 1951, Lacks was being treated for an aggressive cervical cancer at Johns Hopkins Hospital — one of the few institutions in that era that treated Black patients at all — when doctors took samples of her cancerous cells without her knowledge, as was standard practice at the time. While other samples died within days, Lacks’ cells doubled every 20 to 24 hours and kept dividing indefinitely. The resulting HeLa cell line became the workhorse of twentieth-century biomedicine, used to develop the polio vaccine, the HPV vaccine, chemotherapy protocols, and COVID-19 vaccines, with over 100,000 publications built on HeLa research.

These discoveries became enormously lucrative — while the Lacks family received no financial benefits and continued to live in poverty. Compensation came only after decades of legal pressure: in 2023, the family settled with Thermo Fisher Scientific, and in February 2026 reached a second settlement with Novartis, with further lawsuits still ongoing.

The case established the template for the consent gap: tissue collected under one set of assumptions, used in ways the donor never imagined, generating value that flows entirely away from the person whose body made it possible.

The iPSC pipeline is not much different — technically valid consent with arguably insufficient scope. Did the donors who gave blood samples say yes to "your neurons playing Doom on the internet"? Did they say yes to "your neurons being sold to the CIA's venture capital arm"? The consent architecture was built for one world and is being applied in another. Somewhere in a biobank, a donor has no idea that their cells are on a chip, learning to kill demons in a video game, while the internet laughs.

The Lacks analogy is imperfect. But its power has never come from legal precision. It comes from the feeling of violation when people discover their tissue was used in ways they never imagined. That feeling does not require a legal finding. It requires a headline.

The Kidney and the Governance Gap

This is where Joseph Murray’s kidney returns.

The kidney was a technology that worked. The governance was a catastrophe — thirty years of improvisation before brain death criteria, allocation algorithms, and informed consent protocols were built reactively from scandals.

Biological computing is entering the same gap. No regulatory framework covers the commercial use of living human neurons as computing substrates. And the “Can It Run Doom?” meme is domesticating the technology through humor, just as trivializing frames domesticated other boundary-crossing technologies:

  • “Atomic tourism” and “Miss Atomic Energy” pageants domesticated nuclear testing in 1950s Las Vegas.

  • The “fun social network” framing domesticated social media surveillance before anyone understood the scale of data collection.

When a technology with profound implications is first encountered through a trivializing frame, the trivial framing becomes the anchor. The meme becomes the lens, and the lens does not break.

I call this the Domestication Trap, and its prediction is specific: when Cortical Labs scales to a million neurons, the public frame will be “remember when 200,000 played Doom badly? Now they’re better!” — not “a million human neurons are being commercially instrumentalized.”

But the kidney had one advantage the neuron does not: it was immediately, viscerally recognizable as human. A dish of neurons playing a video game badly is not.

And survey research on public attitudes toward brain organoids suggests what happens in that gap: the Domestication Trap and the Substrate Moral Hazard reinforce each other. The more human-like the neurons seem, the more valuable people find them — but concern for the neurons’ wellbeing doesn’t rise at the same rate. Add humor to the mix, and it becomes even harder to take the question seriously. And the harder it is to take seriously, the wider the gap grows between “this is useful” and “this might be wrong.” The ratchet turns.

Three Signals, One Bright Line

So, what should you watch for?

  • Independent replication — does another lab confirm the core finding?

  • The next funding round — does capital bet on the learning curve or walk away?

  • Proactive governance — does any regulator act before a crisis forces them to?

First and most important: independent replication. Every claim Cortical Labs makes rests on research almost entirely from one group of authors. The critique published in Neuron did not challenge the experimental methods but called the interpretive framing — words like “sentience” and “intelligence” — unsupported. Tony Zador at Cold Spring Harbor Laboratory called the whole enterprise “a scientific dead-end.”

If another laboratory, with no affiliation to Cortical Labs, replicates the core finding that cultured neurons can learn adaptive behavior — whether on the CL1 platform or independently — biological computing transitions from one company’s claim to an emerging field. If no replication appears by the end of 2028, this is quantum computing with a better narrative.

Whether Cortical Labs is an outlier or the first mover in a field depends on whether others follow — and as of March 2026, they are following. FinalSpark (Switzerland) operates a Neuroplatform with 1,000+ organoids and 10+ university subscribers. The Biological Computing Co. (San Francisco) raised $25M in seed funding in February 2026 to build bio-neural adapters for existing AI models. UCSC demonstrated goal-directed organoid learning. MetaBOC (China) published an open-source brain-on-chip. Indiana University’s Brainoware demonstrated reservoir computing in Nature Electronics. At least 15-20 active labs globally are working on some form of organoid intelligence. The question has shifted from “will anyone else enter?” to “how fast does the field consolidate, and does Cortical Labs’ first-mover advantage hold against better-funded competitors (TBC) and open-source alternatives (MetaBOC)?”

Second: the next funding round. The current valuation of $50-70 million on $11 million raised reflects what I think of as the existence proof premium — investors paying not for current performance but for the elimination of the “is it possible?” question. If the next round comes in above $100 million with no commercial application, capital markets are betting on the learning curve. If the round fails or comes at a lower valuation, the market has moved on.

Third — and this is the one I think matters most for the long run: whether any bioethics body, institutional review board, or regulator initiates a formal review of human neuron use in commercial computing before a crisis forces them to. If they do, the Substrate Moral Hazard is weaker than I think it is. If they do not — if the ambiguity is maintained as long as it is commercially useful — the organ transplant trajectory is our best guide, and the governance will likely arrive thirty years late and built from scandals.

The Kidney and the Nobel Prize

Joseph Murray won the Nobel Prize in 1990 for that kidney transplant. The ethical framework that eventually governed organ transplantation is one of medicine’s genuine achievements: transparent allocation, informed consent, brain death criteria, the works. It took thirty years and an incalculable human cost to build.

The neurons on Cortical Labs’ chip do not know they are playing Doom. The assessment that they do not know anything is reasonable and probably correct. But “probably correct” and “certain” are different things, and on the gradient between a single neuron and a human brain, nobody can tell you where the confidence should change. Survey research on public attitudes toward brain organoids consistently finds a divided public, with a significant portion viewing them as retaining something human, while another portion treats them as research tools — and the largest group remains genuinely uncertain. That uncertain middle will follow whichever narrative reaches them first.

The Doom meme reached them first. Seven million views and counting.

The technology may not wait for the governance. In fields where the moral category boundaries are contested and the benefits are visible, it rarely has. The drug screening pivot may arrive first — quietly establishing biological computing’s commercial foothold in a domain where living human neural tissue does something silicon genuinely cannot. That would be the transistor’s hearing aid. And just as the hearing aid did not stay a hearing aid, the drug screening platform will not stay a drug screening platform.

Murray’s kidney worked. The arguments came after. Cortical Labs’ neurons work — modestly, crudely, but demonstrably. The arguments have not even begun.

Meanwhile, the neurons die, respawn, and keep playing.


PAID MEMBERS: MINI-COURSE


This appendix is for curious readers who want to go deeper than the article and actually learn the concepts behind the analysis.

Think of it as a course. It teaches:

  • The economic and structural forces at work. The “physics” of what makes this event behave the way it does.

  • The historical story. Shows when this has happened before and what happened to the people in it.

  • The psychological and social mechanisms. The mental models that explain why humans respond to these forces the way they do.

  • The paradigm literacy. Why smart, informed people analyzing the same event reach completely different conclusions, and what that reveals about whose values are shaping the consensus.

Read them sequentially, and you’ll have a working toolkit for analyzing the next AI event on your own.

Or, if you want to go even deeper, copy and paste this entire article and have a conversation about it with AI.
