A Future of Computing: CL1 and the Rise of Synthetic Biological Intelligence

Previously, I wrote a post about Quantum Computing and its implications for AI. Another medium science is looking to for creating better, faster, more energy-efficient machines for AI is neurons: human neurons. For science fiction geeks, the concept of biological computers inevitably calls to mind the Cylons from Battlestar Galactica (a series I often reference here in my blog writing). In the series, Cylons are synthetic beings originally created by humans, but they evolve into entities that blend machine precision with biological complexity, even achieving self-awareness and emotion. The parallels are striking: today’s wetware, like the Cortical Labs CL1, blurs the line between the organic and the artificial, raising questions about intelligence and even the ethical boundaries of AI processing technology.

As science ventures further into this hybrid frontier, the lessons, or should I say warnings, of Battlestar Galactica (and the short-lived series Caprica, for the uber nerd) feel more and more relevant with each new AI advancement or augmentation I read about.

Wetware

I’m not going to lie: human neurons merged onto a chip sounds like complete sci-fi, and yet, with today’s technology, it is entirely plausible. Why slow a computer down by making it learn to mimic the brain in code when it could leverage living human neurons to process information directly? The CL1, the world’s first commercial biological computer, developed by Australia’s Cortical Labs, has begun to bring that idea to market. This device merges networks of real brain cells with silicon chips and represents a new substrate for computing to run on, one that scientists and technologists have several names for. I tend to lean towards “Synthetic Biological Intelligence” or “wetware” (which sounds like a term that could have come out of the Altered Carbon book series).

Unlike traditional artificial neural networks, which run on silicon and require vast amounts of energy and data to learn, the CL1 leverages the adaptability, efficiency, and non-linear processing of actual neurons. These living cells are cultivated from stem cells, grown onto a substrate, and interlaced with chips. Once combined, they can be stimulated, trained, and even “rewarded” for successful tasks, such as playing the classic game Pong (a rough sketch of that closed loop follows below). And potentially, this substrate sidesteps the atomic limit that will eventually end Moore’s Law scaling for integrated circuits.
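To make that “stimulate, train, reward” loop concrete, here is a minimal Python sketch of how such a closed-loop training cycle could look. To be clear, this is my own illustration, not Cortical Labs’ actual interface: NeuronArrayStub and every method on it are invented for this post, and the reward scheme (predictable stimulation for a hit, random noise for a miss) is simply the general idea reported from the neurons-playing-Pong experiments.

```python
import random

class NeuronArrayStub:
    """Hypothetical stand-in for a multi-electrode array.

    This is NOT the real Cortical Labs API; every name here is
    invented purely to illustrate the closed-loop concept."""

    def stimulate(self, pattern):
        # Would deliver an electrical stimulation pattern to the
        # culture; here it is a no-op.
        pass

    def read_activity(self, region):
        # Would return spike counts from the electrodes under
        # `region`; here we fake activity with random numbers.
        return [random.random() for _ in range(8)]

def play_pong_round(array):
    # 1. Encode the game state (the ball's vertical position) as a
    #    stimulation pattern delivered to a "sensory" region.
    ball_y = random.uniform(0.0, 1.0)
    array.stimulate(("ball_position", ball_y))

    # 2. Decode the culture's response into a paddle move by
    #    comparing firing in two "motor" regions mapped to up/down.
    up = sum(array.read_activity("motor_up"))
    down = sum(array.read_activity("motor_down"))
    paddle_y = 0.5 + (0.1 if up > down else -0.1)

    # 3. Close the loop: a hit earns a predictable stimulus
    #    (reward), a miss gets unpredictable noise (punishment).
    if abs(paddle_y - ball_y) < 0.2:
        array.stimulate(("predictable_burst", 100))
    else:
        array.stimulate(("random_noise", None))

array = NeuronArrayStub()
for _ in range(500):
    play_pong_round(array)
```

What strikes me about this loop is that no gradient descent happens anywhere in the software; the feedback itself is reportedly what drives the culture to adapt, which is a very different kind of “training” than anything in today’s AI stacks.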

Wetware allegedly offers not only a dramatic reduction in energy consumption compared to today’s AI systems, but potentially also a leap in learning speed and adaptability. The CL1’s architecture hints at a future where biological and digital systems work in tandem, blurring the line in my mind (no pun intended) between machine and organism.

Of course, this technology is still in its infancy. Current wetware computers are limited in scale and complexity, their ethical implications are largely unexplored…to say nothing of the $35,000 starting price tag for a CL1. But as researchers push the boundaries of what’s possible, its trajectory echoes the questions posed throughout Battlestar Galactica: if we create machines with living intelligence, how long before the line between creator and creation, human and Cylon, begins to blur?

And, I argue, if this unlocks new untapped potential for faster and more efficient computing, to what end? How does all this processing power actually benefit us? The obvious answers are in science and healthcare, but what about the average consumer? Do we really need this? Do we really need our devices to literally be a second brain for us?