Meanwhile, within the corpse the fluids heat, the soft bones tepefy,
And creatures fashioned wonderfully appear;
First void of limbs, but soon awhir with wings,
They swarm…

— Virgil, Georgics

Introduction

Contemporary technology increasingly operates through the mutual contamination of natural and artificial realities. This contamination is both epistemological—manifested in the exchange of models, methods, and metaphors across biology and engineering—and ontological, resulting in hybrid entities that materially interweave, more and more deeply, the bodies of organisms and those of machines. From these entanglements arise two urgent concerns: first, how to navigate the ever more ambiguous machine/organism binary, and second, how to understand the role of substrates (that is, the material embodiment of our artifacts) and their impact on function in this emerging generation of hybrid technologies. Computer hardware is actively blending with cultures of living neurons, installing a material interface of unprecedented intimacy between the two substrates. In a time like this, old hierarchies between nature and technology no longer hold. Simultaneously, substrates—long treated as passive supports of universal functions—are beginning to emerge as powerful agents capable of shaping our technological future. For this reason, any structural misapprehension of the agency of substrates threatens to keep technologies bound to either an essentialized understanding of nature or a disembodied technical archetype.

A machine ontology capable of truly overcoming the binary of artifice and nature (without collapsing it into a metaphysical privilege of one over the other) should be founded on a recognition that substrates hold intrinsic agency and power. [1] To articulate this argument, I follow the trajectories of two hybrid technocultural beings, one very ancient and the other very new: the elusive homunculus of Western alchemical traditions and the neural computers of contemporary synthetic biology. These entities, one the offspring of premodern magic and the other a product of modern scientific practices, share the ambition to engage with substrates as active cocreators of technical reality. [2]

The word “substrate” generally refers to the material grounds supporting a specific technical function. Already in its etymology, the word substrate (sub stratum, that which lies below) conveys an expectation of subservience to a higher power, a condition of passivity largely reflected in conventional computer architectures and the epistemologies they sustain. Substrate and function—or, in the context of computing, hardware and software—are typically understood as conceptually and materially distinct. Ensuing from this design philosophy is the view that different substrates (for example, silicon and neurons) can perform the same universal operations with varying degrees of efficiency or speed. Critics of this approach often appeal to the embodied differences between biological and artificial substrates to argue for the intrinsic privilege of natural intelligence over AI, implicitly granting natural substrates a kind of metaphysical power simply by virtue of their origin. Yet the opposite stance, which treats substrates as interchangeable carriers of universal functions, is equally reductive. Both positions overlook the technical agency of substrates themselves, that is, their capacity to reshape function and, in doing so, expose any hierarchy between the natural and the artificial as arbitrary. Clearly, then, the demarcation between natural and artificial cannot easily be disentangled from that between substrate and function: any attempt to question one will inevitably lead to a reconsideration of the other. What follows shows that breaking this double bind means releasing substrates from metaphysical passivity and recognizing them as unruly companions of invention.

The Homunculus Problem

The obscure tenth-century Arabic text Liber vaccae (The book of the cow) contains one of the first detailed accounts of the magical procedure for creating a synthetic human. [3] According to the book, the artificial generation of a “rational animal” requires the deposition of human sperm into the womb of a cow, which must be sealed inside a windowless building and fed on a diet of animal blood until it gives birth to an entity entirely devoid of skin. The newborn creature must then rest in the dark, submerged in a powder of sunstone, sulfur, magnetite, and zinc oxide to allow its human skin to grow. Enclosed in a glass vessel and locked away from sunlight, the being begins moving and feeding on its mother’s milk and blood. The book claims that after a year has passed, the entity, now entirely human-like in appearance and intelligence, will begin uttering secrets to its maker, revealing “all things that are absent.” The creation of such artificial beings, which came to be known as homunculi in the European Middle Ages, was one of the most ambitious goals of Western alchemy, alongside the transmutation of metals into gold. The most widely cited recipe for creating a homunculus is recorded in a sixteenth-century occult text titled De Natura Rerum (On the Nature of Things), commonly attributed to the Swiss alchemist Paracelsus, although the attribution remains debated. According to this version, human sperm must be sealed in a glass vessel and incubated in horse dung until it acquires a human-like appearance. Fed on milk and blood, this being grows into an all-knowing creature capable of revealing hidden truths.

More than a systematic doctrine, Western alchemy was an eclectic set of theories and practices revolving around the manipulation of inorganic and organic matter. Although alchemy was closer to magic than to modern science, the philosophical debates surrounding its practices began to raise profound questions on the relationship between the natural and the artificial, questions that have only grown more relevant in the face of today’s emerging technologies. In his book Promethean Ambitions, historian William R. Newman (2005) frames Western alchemical practices as attempts to materially replicate, not merely imitate, the intimate workings of nature. Alchemy’s ambition of transmuting metals into gold rested on the assumption that geological processes occurring naturally within the womb of the earth over enormous spans of time could be reproduced, accelerated, and perfected in the alchemical laboratory. Beginning in late antiquity, alchemists’ accounts point to the search for a radical continuity between natural and technical realities and a deep questioning of the metaphysical prominence of nature over artificial processes. Alchemy, Newman argues, proposed a paradigm of technics that diverged profoundly from that of the “mechanical tradition,” which included the fabrication of early automata. Whereas mechanical technology aimed “to conquer nature by imitating it with contrary materials,” alchemy “occupied a privileged rank among its believers in its claim to alter the deep structure of matter in a way that was purely natural” (Newman 2005, 23). The ontological continuity between natural and artificial to which alchemists aspired, in other words, rested on the possibility of understanding and harnessing the same material processes unfolding spontaneously in the natural world.

Its antiquity notwithstanding, the homunculus problem was already a problem of artificial intelligence. Although the spontaneous generation of animals from decaying organic matter was, since Aristotle, widely accepted as uncontroversial, replicating the same process with human life was more complicated. This was due to the necessity of generating a “rational soul” alongside an artificial human body—or, in modern terms, its consciousness. As such, homunculi were not just naive proto-scientific experiments but speculative devices testing the limits of human technology. Those who claimed to have successfully produced them argued that even the most mysterious natural qualities, such as intelligence, could be recreated, accelerated, and perfected artificially. This goal, however, could not be achieved through mere mechanical mimicry but required working alongside the material complexity of natural processes.

By considering alchemists’ fantastical accounts without quickly disregarding them as bizarre products of unenlightened times, we can begin to see that many of the questions springing from alchemical lore are still profoundly relevant to today’s technological landscape. Being both ensouled and enfleshed, natural and artificial, the homunculus asks: Can cognitive processes be replicated entirely artificially, or is technology bound to reiterate a faulty simulation of nature? And what role does matter play in the artificial generation of intelligent beings? These ancient questions, which still haunt today’s debates on AI and synthetic biology, lead us to the heart of the intersection between materiality, intelligence, and technology, where machines come to terms with the question of the substrate.

Brains in a Dish

The ancient fantasy of cultivating human bodies in glass vessels is eerily reminiscent of the practices of contemporary biotechnology, from in-vitro gametogenesis to genetic engineering and tissue culture. If we were to draw a technocultural lineage connecting Western alchemy to today’s technologies, alchemy’s insistence on the exact material replication of natural processes would make it a likely ancestor to today’s synthetic biology. Specifically, we might consider homunculi as the first instance, albeit an entirely speculative one, of the technological intersection of synthetic biology and artificial cognition, which has grown into an increasingly significant field over the last five years. Today, over a thousand years after the earliest homunculus recipe was recorded, researchers are engineering the first computer processors that use networks of human neurons as their substrate.

The idea of connecting neurons and computers, constructing “neural interfaces,” originates in the development of electroencephalography (EEG) in the 1920s, when electrical signals resulting from brain activity were first measured. The physician Hans Berger measured these signals in a patient whose cranial injury had left part of his brain exposed, observing how electrical potentials correlated with cognitive activity. In the 1970s, scientists began speculating that electric signals from the brain might be used to communicate with computational systems and thus started experimenting with brain implants in animal models. [4] Today, neural interfaces are evolving in two complementary directions, one involving an “internalization of the computer” through more or less invasive in-vivo implants (“brain–computer interfaces,” or BCIs, such as the controversial Neuralink), and another involving an “externalization of the brain” through in-vitro cell cultures. [5] This essay will focus on the latter, in-vitro type of neural interface.

In 1997, a group of researchers from Caltech announced the construction of the first in-vitro neural interface, or “neurochip” (Maher et al. 1999). The development of this ambitious project, however, was stifled by the limitations of the biotechnological knowledge of the time. The construction of this device involved harvesting neurons from rat embryos and inserting them manually, one by one, into micromachined silicon wells, where they could survive for no more than two weeks. For a long time, in-vitro cell viability was limited by multiple factors, primarily the lack of stable techniques for growing specialized cells in artificial environments. While immortalized human cell lines like HeLa had been available since the 1950s, these were proliferative cancer cells, not neurons. Functional neurons, by contrast, are postmitotic: once harvested from embryonic or neonatal tissue, they do not divide and survive only briefly outside the body. These scale and time restrictions, which prevented Caltech’s “neurochip” from developing into any functional network, could be overcome only with the acceleration of advanced biotechnological techniques in the twenty-first century.

It was not until 2006, with the development of induced pluripotent stem cells or iPSCs, that researchers gained access to a scalable and renewable source of human neurons that could be grown in vitro, easily replenished, and maintained over long periods. iPSCs are obtained by reverting adult cells to their predifferentiated, “pluripotent” state through a genetically engineered virus. After being treated with this “molecular time machine,” iPSCs can be differentiated into virtually any cell type, including cortical neurons. [6] While before 2006 human brain cells could be harvested only by dissecting human embryos, iPSCs technically allow any adult human to cultivate neurons in a petri dish that are genetically identical to those in their brain. The development of other techniques alongside iPSCs further contributed to the increasing success of in-vitro neural cultures. These techniques include improved surface chemistry protocols, the introduction of microfluidic systems, and controlled incubation environments. In addition to cell cultures themselves, the electronic component of neural interfaces has dramatically evolved over the last two decades. Whereas the 1997 “neurochip” possessed only sixteen electrodes, today’s high-density multielectrode arrays, or HD-MEAs, pack tens of thousands of electrodes in just a few square millimeters, enabling real-time, spatially resolved, bidirectional interaction between biological and electronic substrates. By introducing unprecedented control of cell cultures and increasingly sophisticated electronic interfaces, these advances opened the door to forms of biological computation that were previously inconceivable.

As soon as new developments in iPSC technology, cell culture protocols, and HD-MEAs allowed it, the winter of biological computing quickly began to thaw. In 2022, the first specimen from a new generation of “neurochips” was announced in a paper in the neuroscience journal Neuron (Kagan et al. 2022). The paper, “In Vitro Neurons Learn and Exhibit Sentience When Embodied in a Simulated Game-World,” reported the construction and training of a biological computer obtained by cultivating a single layer of live human and mouse cortical neurons on a multielectrode array. The device, named DishBrain by its creators at the startup Cortical Labs, can both read electrical signals from the neurons in real time and deliver closed-loop electrical stimulation patterns in response. DishBrain is the first instance of what the team, led by neuroscientist Brett Kagan, calls synthetic biological intelligence (SBI), where live neurons are not only materially interfaced with electronics but also learn to act intelligently within their simulated environment. To demonstrate DishBrain’s capacity for active learning, the neural network was trained to play the 1972 Atari video game Pong. This was achieved by delivering, through the device’s electrodes, noninvasive “sensory” signals that conveyed information on the position of the paddle and the ball. Simultaneously, the neurons’ electrophysiological activity was recorded and utilized to control the in-game paddle. This created a closed-loop feedback system in which neural activity could influence the neurons’ stimulation in real time.
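
To make the architecture of this feedback loop concrete, the sketch below renders it in schematic Python. It is a minimal illustration of the closed-loop structure just described, not Cortical Labs’ actual software: all class and function names (SimulatedMEA, encode_game_state, decode_paddle_move) are hypothetical, and the recorded neural activity is stubbed with random numbers.

```python
# Schematic closed loop: game state -> stimulation pattern -> culture ->
# recorded activity -> paddle movement -> new game state. Hypothetical names.
import random


class SimulatedMEA:
    """Stand-in for a multielectrode array interfaced with a neural culture."""

    def stimulate(self, pattern):
        # A real system would deliver voltage pulses on the chosen electrodes.
        self.last_pattern = pattern

    def record_motor_activity(self):
        # A real system would return spike counts from designated "motor" regions;
        # here we return placeholder firing rates for two regions (up / down).
        return {"up": random.random(), "down": random.random()}


def encode_game_state(ball_y, paddle_y, n_electrodes=8):
    """Map ball and paddle positions (0.0-1.0) onto a coarse stimulation pattern."""
    pattern = [0] * n_electrodes
    half = n_electrodes // 2
    pattern[int(ball_y * (half - 1))] = 1               # ball position channel
    pattern[half + int(paddle_y * (half - 1))] = 1      # paddle position channel
    return pattern


def decode_paddle_move(activity):
    """Turn relative firing rates into a paddle command (+1 up, -1 down)."""
    return +1 if activity["up"] > activity["down"] else -1


def closed_loop_step(mea, ball_y, paddle_y):
    """One cycle of the loop: stimulate, record, decode, update the paddle."""
    mea.stimulate(encode_game_state(ball_y, paddle_y))
    move = decode_paddle_move(mea.record_motor_activity())
    return min(max(paddle_y + 0.1 * move, 0.0), 1.0)
```

The design point the sketch isolates is that the substrate sits inside the loop: what the culture does between stimulation and recording is not specified by the software at all.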

In a biological computer of this kind, countless parameters shape the system’s behavior, with some related to the materiality of the substrate and others to software and training. [7] SBI can differ widely depending on the kind of biological substrate, the systems employed to communicate information to the in-vitro neurons, the choice between open- and closed-loop stimulation, and the algorithms used to train the network. Neural cultures, in turn, can vary according to their morphology and source (either primary cultures, that is, harvested from embryos, or cultures derived from iPSCs). While DishBrain is a two-dimensional neural network composed of a single layer of cultured cells, other devices utilize three-dimensional cultures commonly known as cerebral organoids. Organoids are much more complex than monolayers, making them more similar to actual brains, but they are also much more difficult to interface with two-dimensional electrode arrays. [8]
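
Purely for illustration, this design space can be written down as a configuration structure. The field names below are my own shorthand for the parameters just listed (cell source, species, morphology, stimulation mode, feedback channel, training rule), not a parameterization used by Cortical Labs or any other SBI system.

```python
# An illustrative, hypothetical way of naming the SBI design space.
from dataclasses import dataclass
from typing import Literal


@dataclass
class SBIConfiguration:
    cell_source: Literal["primary", "iPSC-derived"]     # how the neurons are obtained
    species: Literal["human", "mouse"]                   # onto-/phylogenetic origin
    morphology: Literal["monolayer", "organoid"]         # 2D culture vs. 3D organoid
    stimulation: Literal["open-loop", "closed-loop"]     # does activity feed back?
    feedback_channel: Literal["electrical", "optogenetic", "neurotransmitter"]
    training_rule: str                                    # e.g. a free-energy-style protocol


# A DishBrain-like configuration, as described in the text above.
dishbrain_like = SBIConfiguration(
    cell_source="iPSC-derived", species="human", morphology="monolayer",
    stimulation="closed-loop", feedback_channel="electrical",
    training_rule="predictable vs. unpredictable stimulation",
)
```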

In SBI, software and training algorithms are also deeply shaped by the physiology of substrates, meaning that the kind of feedback provided during training must be biologically meaningful. Besides electrical stimulation, optogenetic signals and the controlled release of neurotransmitters (such as dopamine) have also been explored. In the specific case of DishBrain, the neural network’s training protocol resulted from the application of a neurocomputational theory known as the “free-energy principle,” first proposed by neuroscientist Karl Friston. [9] According to this theory, intelligent behavior emerges from a drive toward entropy minimization, meaning that a neural system has a natural tendency to reduce its exposure to unpredictable stimuli by either controlling its environment or adapting its internal predictive models to better prepare for future events. Leveraging this theory, the electrodes delivered predictable electrical stimulation patterns to the neurons whenever the paddle intercepted the ball and chaotic, unpredictable patterns whenever it missed. This protocol significantly improved gameplay within just a few minutes, far outperforming control conditions in which no feedback was provided to the neurons.
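
Schematically, the feedback rule amounts to a single branch: structured, predictable stimulation when the paddle intercepts the ball, and noisy, unpredictable stimulation when it does not. The sketch below is my own hypothetical rendering of that rule, not the published protocol of Kagan and colleagues; the point it illustrates is that the “reward” lies in predictability rather than in any scalar reward signal.

```python
# Hypothetical sketch of a free-energy-style feedback rule.
import random


def feedback_pattern(hit: bool, n_electrodes: int = 8) -> list:
    if hit:
        # "Reward": a fixed, predictable stimulation pattern (low surprise).
        return [1 if i % 2 == 0 else 0 for i in range(n_electrodes)]
    # "Punishment": random, unpredictable stimulation (high surprise / high entropy).
    return [random.randint(0, 1) for _ in range(n_electrodes)]


# feedback_pattern(hit=True) always returns [1, 0, 1, 0, 1, 0, 1, 0];
# feedback_pattern(hit=False) returns a different random pattern every time.
```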

From an engineering standpoint, biological computing differs significantly from digital systems: it may dramatically reduce energy consumption and training times, offering a sustainable alternative for specific applications. What makes SBI philosophically compelling, however, far exceeds these metrics. Unlike conventional, silicon-based AI, where neural connectivity is modeled digitally, SBI directly integrates living matter. Its plastic, embodied organization does not mimic cerebral cognitive processes but materially enacts them, albeit on a smaller scale. Neural interfaces challenge the rigid separation of substrate and function that has long shaped our machine ontology: they reveal that specific functions arise from the particular powers of each substrate, irrespective of whether it belongs to a natural or an artificial body.

Technomorphism and Biomimesis

Biological computers and homunculi share a condition of double ambiguity: they confound the boundary between natural and artificial, and they complicate the demarcation between substrate and function. As their cognitive capacities arise directly from the material conditions of their embodiment, they both frame substrates not as passive carriers of function but as decisive agents in their own right, even capable of overcoming the constraints of their own nature. The ambiguous condition of these entities points to a deeper question: how is our understanding of the relation between organisms and machines shaped by our assumptions about substrates? In the domain of AI, this question becomes especially significant. To frame machine intelligence as either a flawed imitation of its natural model or the expression of a universal process abstracted from embodiment leaves little room for the power of substrates to emerge. Set against silicon chips, the hybrid materiality of biological computers throws the substrate/function binary into a profound crisis. To grasp the scope of this crisis, we first need to excavate the dominant paradigms through which the relation between organisms and machines has been conceptualized historically, paying particular attention to the place substrates have been assigned within them.

One productive way to approach the entanglement of machines and organisms is by investigating the use of metaphors across engineering and biology. Machine metaphors circulate widely in contemporary technoscience and are commonly understood as purely descriptive statements, for example, “the brain is a computer,” “DNA is software,” or “the organism is a machine.” As simple as they may appear, these statements implicitly express, through their use of “being,” a wide spectrum of possible relations, ranging from loose functional analogy (“organisms and machines share similar functions”) to strict ontological equivalence (“organisms and machines are unequivocally the same thing”). The ambiguity of such metaphors has been the subject of intense debate in the philosophy of science, where their divergence from transparent representations of biological and technical realities is often emphasized. [10] What relation is truly expressed by statements like “the brain is a computer,” and what role must substrates play for such statements to hold? Although machine metaphors seem to suggest proximity between natural and artificial beings, they often entail an implicit hierarchy in which either the machine or the organism serves as the archetype for the other. As we will see, this hierarchy affects not only the ontological status of artifacts but also the relationship between technologies and their substrates.

In 1747, Julien Offray de La Mettrie published L’homme machine (Man—Machine), a provocative essay in which he proposes that all physical and mental faculties of human beings can be explained as the product of mechanistic processes. [11] In his provocation, he followed René Descartes’s conception of the bête-machine, which argues that animal bodies can be described through the same mechanistic principles governing inanimate matter. While Descartes limited the scope of his mechanistic conception to explaining physiological processes, thus preserving the spiritual nature of human rationality, La Mettrie saw no distinction between mental and biological phenomena, believing both to be ruled by the same mechanical principles. [12] Both Descartes’s and later La Mettrie’s mechanistic understandings of biology were undoubtedly influenced by the mechanical technologies of their time: L’homme machine abounds with references to springs, clocks, and perpetual motion, while Descartes was known to cultivate a profound personal fascination for automata. [13] Reading La Mettrie’s work today is instructive on the historical situatedness of technological metaphors in biology, as it provides a clear example of how specific technocultural paradigms often shape broader philosophical understandings of life and intelligence. Although strictly bound to its historical context, La Mettrie’s vision of the man–machine also serves as a paradigmatic example of a broader, still widespread, epistemic stance that I define as technomorphism.

Technomorphism approaches the machine/organism binary from a peculiar angle. Instead of understanding technology as shaped based on natural processes (as per the traditional Aristotelian dictum, “art imitates nature”), technical reality is raised to the status of a universal archetype capable of encompassing both natural and artificial beings. From a technomorphic standpoint, the statement “the organism is a machine” implies that the category of “machine” is broad and powerful enough to serve as an archetype for the organism. Importantly, another implication immediately follows: that the category of “machine,” insofar as it encompasses both nuts and bolts and blood and flesh, points to a reality conceived as independent of embodiment. This “substrate indifference,” which erases any role for substrates in shaping natural or artificial realities, is a common trait of technomorphic statements. Consider, as an example, the following passages from La Mettrie’s work:

"Let’s look in more detail at these springs of the human machine. All the body’s movements—vital, animal, natural, and automatic are carried out by them. Aren’t all these mechanical? . . . So the soul is only a principle of motion, a tangible material part of the brain that we can safely consider as a mainspring of the whole machine, which visibly influences all the other springs and seems indeed to have been made first; in which case all the others are a mere by-product of it. . . . I am not mistaken; the human body is a clock, a huge and complex and finely designed clock. (La Mettrie [1748] 2017, 23–29)"

Evidently, in these passages, the author is not implying that the human body contains literal cogs and springs. The technomorphic claim is more subtle, but also more radical: the knowledge and models derived from technical reality can subsume both machines and organisms under the same set of principles, so that “all the old problems about matter, life, and mind have been definitively solved” (La Mettrie [1748] 2017, 22). Significantly, in this context, words like “machine” and “clock” stop pointing to specific technical artifacts, becoming instead the placeholders for a greater, metaphysical idea of which individual machines and organisms are but particular instances. This operation installs a new hierarchy, in which the artificial—in the general sense discussed above—is elevated as the most archetypal form of the natural. Bound up in this hierarchy, as both its necessary premise and its inevitable consequence, is the powerlessness of substrates, whose materiality emerges as a purely indifferent support to the coming into being of a universal function.

Technomorphism has always appealed to universality, but the technical realities that serve as its models are in constant flux. La Mettrie’s statements on the human body’s equivalence to mechanical clocks may appear naive from the vantage point of contemporary science. However, although machine metaphors have shifted in content, the technomorphic stance has remained widespread. Over the second half of the twentieth century, the rise of cybernetics, information theory, and computer science on the one hand, and molecular biology and neuroscience on the other, produced fertile ground for an unprecedented proliferation of new machine metaphors in biology. Beginning in the 1950s, by bringing the newly discovered “genetic code” to the forefront of its research program, molecular biology once again proposed a view of the living organism as a machine—but it was a different machine from the one La Mettrie envisioned.

In her book Refiguring Life: Metaphors of Twentieth-Century Biology, Evelyn Fox Keller (1995) presents a detailed historical account of the emergence of new machine metaphors in the twentieth century, focusing specifically on the significance of the concept of information as a bridge between cybernetics and molecular biology. According to Keller, the introduction of this concept in biology, proposed by Watson and Crick themselves shortly after they had determined the double-helical structure of DNA, profoundly transformed our understanding of the organism. The notion of “genetical information” supported the idea of DNA as the primary agent in biology, a molecule uniquely capable of imbuing passive biological substrates with activity and structure—giving rise to, as Keller (1995, 3) calls it, “the discourse of gene action.” The discourse of gene action, which relegated substrates to the background, was foundational to the spread of technomorphic stances in the cybernetic age. Information theory first introduced computational terms such as “message,” “program,” and “instruction” into biological discourse. However, as Keller insists, these concepts cannot simply be transplanted from one field to another without introducing significant ambiguities. “If ‘genetical information’ is to have anything to do with life, it must involve meaning,” she observes. “A point mutation in the genetic code, a change in a single base pair of DNA, would make no difference to Shannon’s measure of information, but for an organism, it would almost certainly make just the difference that matters—the difference between life and death” (Keller 1995, 94). The import of metaphors from information science into biology reinforced the substrate indifference already present in the discourse of gene action, obscuring the embodied, substrate-specific reality of organisms. For Keller, information acquires biological meaning only when embedded in a living body.
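
Keller’s point about Shannon’s measure can be made concrete with a small worked example (mine, not hers). Under a uniform symbol model, in which each base is assigned the same probability, the self-information of a DNA string depends only on its length, so a single-base substitution leaves the total unchanged even when that substitution introduces a premature stop codon and truncates the encoded protein.

```python
# Minimal numerical illustration: identical Shannon self-information,
# radically different biological meaning.
import math

UNIFORM = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}


def shannon_bits(sequence, probs=UNIFORM):
    """Self-information of a sequence under an i.i.d. symbol model, in bits."""
    return sum(-math.log2(probs[base]) for base in sequence)


wild_type = "ATGGAGAAG"   # toy reading frame: ATG GAG AAG (Met-Glu-Lys)
mutant    = "ATGTAGAAG"   # one base changed: ATG TAG ... (TAG is a stop codon)

print(shannon_bits(wild_type))  # 18.0
print(shannon_bits(mutant))     # 18.0: same "information", truncated protein
```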

From the 1970s onward, computational metaphors in molecular biology began circulating widely, becoming so generally accepted that they made their way, largely unquestioned, into biology textbooks. Over time, several scholars have, knowingly or unknowingly, echoed La Mettrie’s confident technomorphic statements (“I am not mistaken; the human body is a clock, a huge and complex and finely designed clock”), revisiting them in computational terms. In doing so, they sustained a vision in which material substrates, whether artificial or natural, are framed as incidental to the technical functions they support. What do we really mean when we state that cells are computers, or that bodies are clocks? At their most literal, technomorphic statements enact a metaphysical flattening, where machines and organisms are related by strict ontological equivalence and substrates appear as marginal attributes of a higher, transcendent reality.

Among the most intriguing examples of technomorphic thinking in the information age are the writings of Efim Liberman, who, beginning in the mid-1970s, developed a theory of the cell as a “molecular computer” (Liberman 1979). Liberman, writing from the standpoint of Soviet cybernetics, argued that the cell (and, by extension, the brain) should be understood as a stochastic, parallel-successive molecular computer operating with “molecule-words” (DNA, RNA, proteins) and “molecular devices” (ligases, polymerases, ribosomes), whose interactions are governed by chemical programs inscribed in genetic sequences. In this stochastic computer, Brownian motion acts as a sort of “search engine,” binding complementary molecular addresses in a massively parallel computing substrate. For Liberman, the product of such massive computational effort within the cell is, essentially, the prediction of the future. “The molecular computer operates with word-molecules according to the programme, recorded in DNA,” he writes, “with the aim of predicting an outer situation in the next time-moment and selecting of a correct answer by synthesis of suitable proteins and other substances and also by macroscopic motion” (Liberman 1979, 111). While noting at multiple points the peculiarity of living cells vis-à-vis ordinary computational machines, Liberman offers a rigorous demonstration that “molecular cell computers” (which include both microscopic cells and macroscopic brains) are “universal computers,” entirely equivalent to any other universal computational machine irrespective of its embodiment or architecture. [14]

The notion of a “universal computer” serves as a powerful lever for several technomorphic positions whose content, while differing slightly in each specific elaboration, has remained substantially unchanged since the 1970s. Most recently, Blaise Agüera y Arcas (2024) has revived Liberman’s visionary ideas in his essay What Is Intelligence? He similarly proposes a reading of microbiology as fundamentally computational, evoking Alan Turing’s groundbreaking vision of a read–write machine that, given enough time and enough tape, could theoretically simulate any other computational machine. Demonstrating the possibility of such a computer, known today as a universal Turing machine, leads to the powerful conclusion “that computability is a universal concept, regardless of how it’s done” (Agüera y Arcas 2024). Agüera y Arcas leverages this universality to propose that living organisms are but one instance of a broader archetype, namely computation, which may occur indifferently across substrates and architectures. Much like Liberman, he also recognizes profound differences between cells (which he too defines as massively parallel and stochastic machines) and human-made computers, specifically focusing his comparison on Von Neumann architectures. [15] He also concludes that these differences, while making certain kinds of computational processes more efficient than others, are ontologically inconsequential, since “any computer can emulate any other one” (Agüera y Arcas 2024). Here, the relationship between machine and organism is not merely functional or discursive and far exceeds the boundaries of metaphor, as the author notes when he states that “it’s not a metaphor to call DNA a ‘program’—that is literally the case” (Agüera y Arcas 2024). In this radical technomorphism, natural processes become intelligible only insofar as they can be subsumed under a machine archetype whose validity transcends substrates, space, and time.
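
The leverage that universality lends to such arguments can be illustrated with a toy example (my own illustration, not Agüera y Arcas’s). In the sketch below, a Turing machine is nothing but a table of rules, that is, data, and a single generic interpreter can run any such table; the example machine flips the bits of its input tape and halts. This is a gesture toward the formal idea behind the claim that “any computer can emulate any other one,” which technomorphism then extends into an ontological thesis.

```python
# A toy Turing machine interpreter: the machine itself is just a rule table.
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """rules: {(state, symbol): (new_symbol, move, new_state)}; move is -1, 0 or +1."""
    tape = dict(enumerate(tape))                 # sparse tape; blank cells read "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape[i] for i in sorted(tape))


bit_flipper = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run_turing_machine(bit_flipper, "10110"))  # prints "01001_"
```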

An in-depth review of the history of machine metaphors in biology lies beyond the scope of this paper. Targeting technomorphic statements as “reductionist” is also not my intention: such a critique would demand much deeper engagement with an extensive and ongoing debate in the philosophy of science. [16] Ultimately, technomorphic positions are legitimate attempts to break down the rigid separation between the natural and the artificial, which has historically cast machines as imperfect imitations of natural models. In the face of increasingly lifelike technologies, the importance of this challenge is unquestionable, and technomorphism has addressed it convincingly. The problem lies in how technomorphic stances pursue this continuity: at the cost of a structural erasure of the agency of substrates, disregarding their capacity to shape not only how functions are performed but also what kinds of functions are possible in the first place. If biological flesh and hard silicon are nothing but indifferent media supporting the same universal technical function, then constructing devices that hybridize living and nonliving substrates would ultimately make no difference to the ontology of our machines.

At stake in this debate, then, is the failure of our dominant paradigms to account for the increasing hybridization of machines and organisms on the one hand and the agency of substrates on the other. While technomorphism pursues continuity through substrate indifference, those who uphold the opposite hierarchy preserve discontinuity by essentializing the natural. Evoking once again the Aristotelian notion that “art imitates nature,” we may define their stance as biomimesis: the assumption that nature may never be fully replicated artificially and that technical reality is but a faulty imitation of natural processes whose true essence lies beyond technology’s grasp. Biomimesis remains a profoundly influential stance, especially in debates surrounding the possibility of artificial consciousness, where it often goes hand in hand with the notion of intelligence as “substrate-bound.” This is the idea that mental functions are a specific product of the brain and the brain alone, implying that no other substrate can produce comparable effects. Perhaps the most exemplary of such positions has been articulated by John Searle (1990, 32), who emphasizes the distinction between true intelligence and its artificial mimicry:

"Brains are specific biological organs, and their specific biochemical properties enable them to cause consciousness and other sorts of mental phenomena. Computer simulations of brain processes provide models of the formal aspects of these processes. But the simulation should not be confused with duplication. The computational model of mental processes is no more real than the computational model of any other natural phenomenon. One can imagine a computer simulation of the action of peptides in the hypothalamus that is accurate down to the last synapse. But equally one can imagine a computer simulation of the oxidation of hydrocarbons in a car engine or the action of digestive processes in a stomach when it is digesting pizza. And the simulation is no more the real thing in the case of the brain than it is in the case of the car or the stomach. Barring miracles, you could not run your car by doing a computer simulation of the oxidation of gasoline, and you could not digest pizza by running the program that simulates such digestion. It seems obvious that a simulation of cognition will similarly not produce the effects of the neurobiology of cognition."

Such positions remain anchored to a fixed idea of nature, life, and intelligence, preventing us from imagining any possible contamination between technical and biological realities. They are incapable of accounting for the ontology of an increasing number of hybrid beings, such as biological computers, in which organisms and machines exist in growing proximity. The technomorphic solution to this problem, however, affirms a new hierarchy as it dissolves the old. It is now no longer an essentialized concept of nature but one of technology—be it in the form of a seventeenth-century automaton, a nineteenth-century telegraph, or a twentieth-century computer—to which embodied realities of both organisms and artifacts are made to conform. Each in its own way, both biomimesis and technomorphism erase substrate agency: the first by declaring the natural substrate fixed and unchanging, the second by rendering all substrates interchangeable. Neither of these inherited paradigms can address the messy entanglements of machines and organisms, substrates and functions that arise from contemporary technology.

Material Interfaces

The boundary between machine and organism is complex: both discursively, as demonstrated by the controversial history of machine metaphors, and materially, as the construction of biohybrid technologies, such as DishBrain, clearly illustrates. Two paths recur frequently in attempts to navigate this debate. The first, biomimesis, affirms that machines imperfectly imitate natural functions such as life and intelligence. The second, technomorphism, identifies technical reality as an overarching archetype for nature. By positing a hierarchy between the natural and the artificial, neither is equipped to deal with a hybrid reality in which the technical and the natural are not each other’s mirror but materially and discursively contaminate one another. Both technomorphic statements (“the cell is a computer”) and biomimetic statements (“computers are simulations of the brain”) ultimately prove inadequate in capturing the power of substrates to bring forth specific behaviors and functions: while technomorphism pushes a narrative of radical substrate indifference, biomimesis keeps substrates bound to their predetermined, natural expectations. In the face of this boundary, the power of substrates is, instead, one of unbinding: loosening matter from its subordination to function and undoing the divide between artifice and nature. The very possibility of inhabiting this interface, without resolving it into a superiority of either the natural or the technical, depends on a recognition of matter as an agent rather than a passive stage for the realization of abstract functions.

Consider “the homunculus problem”: the question of whether it is possible to replicate natural functions through technical means. The homunculus pursued neither a natural nor a technical archetype: it emerged not from a predetermined metaphysical hierarchy but from an ongoing negotiation between the alchemist and the lively plasticity of matter. Alchemy’s search for a path to the artificial generation of matter, life, and intelligence rested on the belief that a continuity between natural and artificial was indeed possible, not by overcoming nature and working against its forces but by harnessing nature’s material processes. Today, this question resurfaces at the core of contemporary technologies, from AI to synthetic biology and artificial life, where the boundary between the natural and the technical has grown increasingly ambiguous.

Rather than conceiving of the natural and the technical as rigidly split, devices such as DishBrain construct a productive space where those two preconceived categories materially intermix without any hierarchical prominence of one over the other. The notion of interface, which naturally emerges from a technical discussion of SBI, is especially helpful in excavating this idea. Conventionally understood, interfaces are zones of contact and exchange of matter, energy, and information between two distinct substrates, such as silicon and neurons. In materials science, it is often emphasized that the interfacial space can bring forth behaviors and properties radically different from those of each material in isolation. Contemporary readings of the interface interpret it as an active zone of co-construction rather than a passive space of contact: interfaces are not abstract surfaces but material, multidimensional zones of mutual contamination, where two substrates encounter each other and, as a consequence, have their properties redefined by that very encounter. [17] The paradoxical nature of the interface, therefore, is that while it is originally conceptualized as a union of two separate terms, the terms that constitute it ultimately emerge only as its product. How can we stay within the ambiguous materiality of the interfacial space without collapsing it into either an abstract technical form or an essentialized version of nature—both of which erase the agency of substrates?

In the case of the machine/organism binary, the interface unfolds on two distinct but interconnected levels. First, machines and organisms interface on the epistemological level through co-construction across biology and technology. Second, machines and organisms interface on the ontological level through the embodied integration of biological and technical materialities. Let us focus first on discursive and epistemological co-construction. In Refiguring Life, Keller (1995, x) argues that the language of science is “performative”: the definitions and models scientists use to describe biological reality actively shape the practices and tools we adopt to understand them. This notion of “performativity” highlights the mutual contamination between the embodied realities of machines and organisms and the epistemic models that represent them. If, for example, we understand the brain as a computer, the methods, theories, and instruments we use to investigate it will be molded on this computational metaphor, resulting in the brain becoming “more like” a computer in our eyes. Reciprocally, our understanding of computers, and how they are materially constructed, will be influenced by our knowledge of natural cognitive processes, resulting in computers becoming “more like” biological brains. [18]

“Can it be any surprise,” Keller (1995, 108) writes, “that in the bootstrap process of modeling organisms and machines, each upon the other, not only do organisms and machines come increasingly to resemble each other but that, as they do, the meaning of both terms undergoes some rather critical changes?” The relationship between organisms and machines is less like a mirror and more like a feedback loop, enabling the open-ended evolution of both our understanding of nature and our engineering practices. Machines and organisms, in other words, are bound not by a metaphysical hierarchy but by a dynamic reciprocity. This epistemological interface operates in the evolution of many contemporary technologies and is especially evident in the history of biological computers. The technical development of SBI was possible only through the investigation, beginning in the 1920s, of the brain’s electrical activity and its correlation with cognitive functions. Today, research in SBI promises to produce crucial insights into the behavior of the brain, enabling the in-depth study of neurological diseases and the development of novel drugs.

Interfaces, however, are not merely epistemic constructs. On the contrary, they easily rupture the boundary between epistemology and ontology, since their products are not simply new theoretical models for technology or nature but new bodies for new machines. By definition, interfaces act materially and horizontally: they hold the power to produce associations across substrates regardless of whether they are artificial or natural. The trajectory of this contamination is not shaped by some transcendent archetype of either organism or artifact but only by the substrates’ power to combine and transform. It is therefore not convergent, approximating an ideal biological or technical form, but contingent, differential, and unforeseeable. In her book Abstract Sex, Luciana Parisi (2004) articulates a similar proposal. Whereas cyberfeminists, chiefly Donna Haraway in her Cyborg Manifesto, have often emphasized the need to deconstruct the idea of nature as a metaphysical locus of purity and innocence, Parisi pushes this project one step further, proposing that in their most intimate, molecular mechanisms, substrates have always already been capable of overcoming their own nature. Parisi begins by recognizing that bacterial life exists in a state of perpetual self-engineering through processes such as cloning, horizontal gene transfer, and endosymbiosis—natural phenomena that have inspired, or even directly shaped, much of contemporary biotechnology. “If cloning has always been part of nature, as bacterial sex demonstrates, then isn’t it natural to clone humans?” Parisi asks. “Are the new bio-technologies of the body already part of nature?” (Parisi 2004, 13).

Parisi’s critique, which unfolds from biology to technology and gender politics, is highly relevant to our analysis of the agency of substrates within the organism/machine binary. In Abstract Sex, she argues against both the idea of nature as a superior metaphysical archetype (in my terminology, biomimesis) and the idea of technology as a disembodied function (or technomorphism), proposing an approach that replaces these two hierarchical models with a horizontal ontology. Unlike technomorphic positions, Parisi’s operation does not project technology onto nature, claiming that natural reality is essentially technical, but naturalizes technology, or—more precisely—proposes a space where technical and natural realities become continuous as a product of the inherent agency of matter. She identifies this space—“a machinic phylum of unnatural associations”—with the term “hypernature” (Parisi 2004, 36). In Parisi’s (2004, 37) words:

"Hypernature is not pre-programmed and is not produced by simulations. It is not more natural than nature as it never starts from the knowledge of nature, the primacy of representation over the processes of intensive conjunction between material flows and bodies. Hypernature subtracts nature from the transcendence of the material and the ideal."

Perhaps more than Haraway’s cyborg, the concept of “hypernature” can serve as a powerful tool to navigate the relationship between machines and organisms in the age of biohybrid technologies, in which natural and technical evolution become increasingly difficult to tell apart. [19] It helps us make sense of the growing number of artifacts, such as DishBrain, whose embodiments consist of a literal, material interface of living and synthetic components. Significantly, hypernature rests on a specific understanding of bodies not as passive carriers of natural or assigned identities but as actively capable of mutating and assembling. As Parisi, commenting on Spinoza, clarifies, “the essence of a body is not defined by its properties but by its power to connect or not with other bodies, to assemble to create a more powerful body, to merge to increase or decrease potential” (Parisi 2004, 38). Decoding the machine/organism relationship without reinstating a metaphysical hierarchy of one as an archetype for the other is possible only through an agential understanding of matter, in which substrates are no longer mere supports of function but the evolving, plastic conditions of function’s very existence. [20] Figuratively, agential materialism may be translated into the idea of “letting the substrate speak,” a proposal resonating with both the practices of contemporary synthetic biology and the alchemical cosmology of the homunculus myth.

Substrate Ontologies

Alchemical sources insist that before the homunculus could talk, revealing its intelligence to its creator, the alchemist had to wait for its human skin to grow. The process of fabricating homunculi, as previously discussed, was akin to an accelerated and technically enhanced birth, but this acceleration could not entirely circumvent the time and meticulous care required for the cultivation of a biological substrate. Homunculi, as we have seen, could be produced only through specific combinations of mineral, animal, and human materials, following precise proportions, conditions, and durations dictated not by the alchemist but by materiality itself. While alchemy expressed the possibility of bridging the gap between natural and artificial realities, this possibility was always rooted in the intrinsic intelligence of substrates and their hypernatural capacity for contamination, transformation, and emergence. The question of the substrate is fundamental to any philosophical articulation of the boundary between natural and artificial, especially in the context of AI. While those who affirm a metaphysical supremacy of nature over technology often appeal to the idea of functions as being strictly substrate-bound (for example, “mind is a product of the brain, and the brain alone”), technomorphism insists on the indifference of substrates and the universality of functions (for example, “any computational machine is equivalent to any other”). What sets the world of alchemy apart from both these paradigms, making it especially relevant for outlining a new machine ontology rooted in an agential understanding of matter, is that its substrates are neither immutable nor indifferent: they are open to artificial manipulation while remaining powerful in their own right.

In the context of technology, its models, and its practices, any design approach presupposes, implicitly or explicitly, a substrate ontology: a cultural and operational vision of how natural and artificial are divided and of the role that materiality plays in negotiating that boundary. To decode this relationship, we may—borrowing and to some degree repurposing a term first proposed by Yuk Hui (2024)—adopt a perspective of technodiversity: rather than prescribing a single framework, different substrate ontologies emerge from specific material configurations and processes, speaking to the broader cosmology in which our technical artifacts are immersed. [21] The homunculus, which embodied a view of substrates as agential participants in both natural and technical processes, is far from the only myth centered around the possibility of artificially generating life. To depict a different substrate ontology, it is worth briefly turning to another, widely influential technocultural figure, in which matter is instead framed as a passive carrier of function: the golem.

The golem is one of the most enduring myths of artificial life in European culture, emerging from the Jewish mystical and philosophical canon. According to this tradition, humans can imitate divine creation not by biological or chemical processes, as alchemists would, but through symbolic and linguistic operations. As William R. Newman explains in Promethean Ambitions, “the major goal of the golem was to demonstrate the power of Hebrew verbal magic, the same power that allowed the world to be created ex nihilo” (Newman 2005, 235). The golem myth is embedded in early Jewish mysticism, especially the Sefer Yetsirah (Book of Formation), which represents the act of divine creation as a process of permutating the letters of the Hebrew alphabet. Accounts of the golem’s creation depict it as a human-like creature formed from clay or mud, imbued with life solely by the inscription of a magical word—emeth, the Hebrew word for truth—on its forehead. Such creatures, infamous for turning all too often against their makers, would revert into formless dirt as soon as the first letter of the magical word was erased from their bodies, leaving behind the inscription meth, the Hebrew word for death. Newman articulates the differences between the homunculus and the golem, and their respective technocultural legacies, in the following terms:

"The golem . . . inhabits a different thought world from that of our other homunculi. If it were not rash to draw a modern comparison, perhaps one could say that the golem belongs to the realm of “hard” artificial life, the world of robotics, cybernetics, and artificial intelligence, where ordinary biological processes are obviated or simulated by nonbiological means. The homunculus proper is a child of the “wet” world of in vitro fertilization, cloning, and genetic engineering, where biology is not circumvented but altered. (Newman 2005, 187)"

The lore of the golem points to a specific substrate ontology, largely shared by today’s computational technologies, where the embodiment of the artificial being is entirely dependent on, and ruled by, symbolic logic. The formless mud of which the golem’s body is composed stands in stark contrast to the lively, festering, protean material out of which homunculi were thought to emerge. Just as we framed the homunculus myth as a model for an agential substrate ontology, we may understand the golem as a forerunner of the notion of substrate indifference, a paradigm in which matter is understood as a passive support for the creation and transmission of a disembodied technical “truth.” The significance of the golem myth for cybernetics was not lost on the latter’s founder, Norbert Wiener, who named his book God and Golem, Inc. after this same cultural figure (Wiener 1990). In his book, Wiener engages with the golem figure to reflect on the relationship between technology and life, speculating on the capacity of cybernetic machines to learn and replicate autonomously. In a thought experiment highlighting the subordinate role of substrates in early cybernetic understandings of life, Wiener (1990, 36) went so far as to suggest “that it is conceptually possible for a human being to be sent over a telegraph line.” Although Wiener’s proposal may sound outlandish today (primarily because of the outdated communication infrastructure it refers to), the idea of such radical substrate indifference has persisted and evolved. One of the most interesting recent descendants of this philosophy is the OpenWorm project, which, since 2011, has been pursuing the goal of creating a perfect digital twin of the nematode Caenorhabditis elegans, modeling its body with single-cell precision. The project aims to reconstruct C. elegans’s entire physiology to produce a functional model capable of exactly replicating and predicting its behavior in a digital environment. [22]

We have already seen how technomorphic positions rest on two core assumptions: that functions are universal and that their material instantiations are largely interchangeable. In this view, technical models (for example, “computer” or “machine”) run the risk of becoming metaphysical archetypes, while the specific materiality of substrates is treated as secondary to the functions they support. Technomorphism assumes that substrates, while making certain operations easier or faster than others, are functionally inconsequential, since any computational machine is equivalent to any other. Biological computers, however, evoke a vastly different substrate ontology than their silicon counterparts, just like the homunculus and the golem provide a vastly different answer to the same philosophical question. To articulate why that is, let us consider two aspects of biological computers that complicate the relationship between computation and its substrates: (1) nano-intentionality and (2) onto-/phylogenetic diversity.

The term nano-intentionality is borrowed from neuroscientist W. Tecumseh Fitch, who proposed a materialistic account of cognition based on the embodied plasticity of biological matter (Fitch 2008). According to Fitch, the capacity of brains to produce subjective experiential states (a power often referred to as “intrinsic intentionality”) can be traced back to the ability of all eukaryotic cells, from amoebas to the neurons in our brains, to perceive their surroundings and respond adaptively by modifying their morphology. Human-scale cognitive powers, therefore, are the result of an extensive stratification of nano-intentionality, which originated as a primordial evolutionary strategy in single-celled organisms. “An amoeba changes its structure, moving about ‘seeking’ nutrients, engulfing food particles, following chemical gradients in a purposive fashion,” Fitch (2008, 166) explains. “Such behaviour provides the prototypical example of what I’m calling nano-intentionality.” Fitch foregrounds this embodied plasticity in an attempt to identify the “missing link” between human cognition and silicon-based AI. While cells can adaptively modify their morphology in response to external circumstances, transistors are engineered as hard substrates, designed to remain morphologically unchanged. In questioning the cognitive equivalence of silicon and biological substrates, Fitch also advances a critique of technomorphic positions, arguing that while there is nothing mysteriously “special” about living organisms, there are profoundly consequential differences between machines as we know them and biological bodies.

"[T]he “cell = machine” truism obscures an equally important fact: that a eukaryotic cell is unlike any machine ever produced by humans. A cell has specific, causal powers, possessed by no currently available machine, and it is these powers I wish to bring into focus with the term “nano-intentionality.” Eukaryotic cells respond adaptively and independently to their environment, rearranging their molecules to suit their local conditions, based on past (individual and species) history. A transistor or a thermostat does not—nor do the most complex machines currently available. This is a practical difference between cells and machines (I can see no reason in principle that machines must lack such causal powers), but it is nonetheless a profound one. (Fitch 2008, 159)"

Biological computers such as DishBrain introduce an additional layer to this discussion, showing that nano-intentionality is not a prerogative of natural bodies, but that it is possible, in principle and in practice, to design “nano-intentional” devices. While silicon-based AI relies on its substrate remaining unchanged (and interfering as little as possible with its operations), SBI depends on the power of substrates to self-organize and form dynamic connections autonomously. In the case of DishBrain, researchers observed the emergence of functional connectivity among neurons even when they were spatially distant from one another, demonstrating that biological computers respond to environmental stimulation and training with nanoscale, substrate-level plasticity. An additional implication of nano-intentionality is that the plastic behavior of substrates, as Fitch notes, is “based on past (individual and species) history.” In the context of biological computers, this dependence on history is reflected in functional and behavioral differences based not only on training protocols but also on the different types of neural cultures employed. DishBrain was shown to perform differently depending on the cell culture’s origin, even when the electrical inputs and training procedure remained unchanged (Kagan et al. 2022). While mouse neurons initially performed better at the game, human neurons appeared to learn faster, seemingly engaging in more “explorative” in-game movements. The system’s intelligent behavior, in other words, preserves an inherent contingency, since it depends on the specific physiology of the biological neural network, which in turn is determined by ontogenetic and phylogenetic factors unique to each cell culture.

These substrate-dependent features of SBI are not just a matter of functional efficiency but embody a radically new engagement with substrates, profoundly transfiguring what a “computer” is and what it does. They evoke a machine ontology where substrates are neither indifferent nor powerless: through active reconfiguration of their material makeup, they can engender specific and differential forms of intelligence. This conclusion problematizes the two dominant substrate ontologies associated with technomorphism and biomimesis. In technomorphic positions, life and intelligence are treated as “substrate-indifferent,” as formally identical in function regardless of the body in which they are instantiated. Biomimesis, by contrast, assumes that such functions are “substrate-bound,” that is, the exclusive product of a specific biological embodiment: substrates serve as hosts for a function that is either present or absent but does not transform alongside its material support. The embodied reality of biological computers points to a third substrate ontology, unbound from both these metaphysical expectations, in which substrates actively give rise to parallel forms of life and intelligence, each shaped by embodied specificities and differences.

From the inert clay of the golem to the lively plasticity of the homunculus, myths of artificial life and intelligence have long staged competing substrate ontologies. The legacies of these approaches linger in both the architecture of our machines and our broader understanding of the boundary between technology and nature. This essay has followed the technocultural legacy of alchemical homunculi to explore the role of substrates in contemporary machine ontologies, focusing specifically on the most recent advances in biological computing. It has shown how our understanding of the boundary between biological and artificial reality tends to install a metaphysical hierarchy in which one—either the natural (as in biomimesis) or the technical (as in technomorphism)—serves as an archetype for the other. It has argued that neither of these models can ultimately account for the discursive and material entanglements of natural and artificial entities produced within today’s technoscientific practices. The possibility of replacing these hierarchies with a horizontal ontology, one where interfaces between the biological and the technical serve as active spaces of invention, depends on an agential understanding of substrates, where matter is neither the passive support for a universal function nor the preordained host for an inscrutable natural essence.