By Dr Llewellyn Cox, Principal, LieuLabs
Why Your Mind Will Never Be Uploadable.
The rise of computing and the exponential increase in processing speed pose intriguing questions for scientists: what happens when computers become smarter than their human masters — the so-called “Technological Singularity”? Will machines take over? Will they turn on humans and destroy us? Will we increasingly integrate computing modules into our bodies to cure disease or enhance our natural abilities? And what effect will all of this technology have on equality and power among human beings?
One of the popular philosophies attached to this futurist realm is the idea of transcendence (now a major motion picture!): the concept that a person’s mind could be digitally uploaded to a computer, thus “transcending” the limitations of the biological body to acquire immortality. It’s a relatively widespread idea that has been written about extensively in popular science literature, even making a cameo in an episode of “The Big Bang Theory”, and it is a major theme for followers of Transhumanist philosophy. As the world focuses more and more on the implications of our technological development, transcendence is becoming one of those concepts in popular science that are so widely known that they become accepted as inevitable, regardless of the state of current scientific knowledge about their actual feasibility. Fortunately, no one reading this in the early 21st Century will achieve immortality by uploading their mind to Amazon.
The Science of Transcendence
The concept that the human brain could be uploaded to create a digital self rests on some basic assumptions: first, that the human mind works like a computer, and thus can be reconfigured by and as one; second, that by completely mapping the activity of every neural circuit in the brain, we could create and save a copy of your thought patterns that would represent an exact model of your mind; and third, that if uploaded to an adequately powerful system, this process would effectively transfer your consciousness to the digital realm. However, none of these three assumptions is valid when critically assessed from a scientific standpoint.
The Human Computer
The human brain is often compared to a computer, and on many levels this is a fairly accurate metaphor for how it functions. Nerve cells (neurons) collect data about our environment — what we see, hear, feel, taste and smell — and transmit it to other neurons in the brain or spinal cord. In turn, several sets of neurons process the data, and some then send out signals instructing the body to respond to what it has just experienced. In this analogy, each neuron represents a computer chip, processing information and passing it along to the next series of chips in the mainframe for further action. The calculations that people have made for the arrival of the Technological Singularity — the point at which machines surpass human intelligence — are based on attempts to quantify the power of the human brain. This can be done by multiplying the number of neurons in the brain (about 86 billion), the average number of connections (about 6,000 connections, or “synapses”, per neuron), and their maximum firing rate (about 200 Hz — 200 signals per second). By comparing this estimate of processing power to the current rate of growth in computing speed, various futurists have predicted that the Singularity will occur sometime in the mid-21st Century.
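This back-of-envelope arithmetic can be sketched in a few lines; the figures are the rough estimates quoted above, not precise measurements:

```python
# Rough upper-bound estimate of the brain's raw signaling throughput,
# using the approximate figures quoted in the text.
NEURONS = 86e9               # ~86 billion neurons
SYNAPSES_PER_NEURON = 6e3    # ~6,000 synapses per neuron
MAX_FIRING_RATE_HZ = 200     # ~200 signals per second

# Total synaptic events per second if every synapse fired at its maximum rate
ops_per_second = NEURONS * SYNAPSES_PER_NEURON * MAX_FIRING_RATE_HZ
print(f"~{ops_per_second:.2e} synaptic operations per second")  # ~1.03e+17
```

By this crude measure, the brain performs on the order of 10^17 synaptic operations per second, which is the kind of figure futurists then compare against supercomputer benchmarks.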
Assumptions of Processing Growth
For the past few decades, processing speed and thus computing power have been increasing exponentially, according to what is known as Moore’s Law. Moore’s Law (named after Intel co-founder Gordon E. Moore) is the observation that the number of transistors on integrated circuits doubles approximately every two years. It isn’t actually a scientific law, inviolable like Newton’s Laws of Motion — it is simply an observation on rates of progress that has held true since Moore first described the trend in a 1965 paper. However, even Gordon Moore himself sees a limit to the trend: as more and more transistors are added to a circuit, they must become ever more miniaturized. Eventually, we run into a fundamental size limit, such as when each component reaches the size of a single atom and cannot be shrunk any further. A related problem arises as components approach the size of individual molecules or atoms: at this scale, noise from random quantum events begins to interfere with the ability to accurately process calculations.
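Moore’s observation can be written as a simple doubling law. The sketch below is purely illustrative; the 1971 Intel 4004 (about 2,300 transistors) is used as a convenient, commonly cited baseline:

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Transistor count predicted by a naive Moore's Law doubling model."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Twenty years of doubling every two years is a factor of 2**10 = 1024
print(round(transistors(1991)))  # 2300 * 1024 = 2355200
```

The model is exponential with no ceiling, which is exactly the point above: the physics of atoms and quantum noise eventually breaks the assumption that the doubling can continue.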
There is a lot of current research into how computing can account for quantum noise, which remains a key roadblock for the development of effective quantum computing technology. If scientists can overcome this problem, then quantum computing may hold the key to sustaining the rate of growth in computing power. However, even allowing for an imminent critical discovery, the application of quantum computing outside of research labs is still decades away. It is not impossible that quantum computing can enable Moore’s Law to hold well into the future, so in the style of Mythbusters, we will leave this particular question open as “plausible, but unlikely” for now.
Underestimation of the Brain’s Complexity
In 100 years’ time, today’s neuroscience may look much as phrenology does to us now. The analogy of the brain as a biological computer, where individual neurons represent chips that process data and send out appropriate signals in response, began to break down in the 1990s with the advent of molecular imaging and three-dimensional tissue modeling. Just like a processor chip, the nerve cell receives signals from other cells. This data is then passed electrically along the nerve cell, and leads to the cell sending out signals of its own to other neurons. However, in the brain, the connections between cells are not fixed.
It has been known for decades that each neuron constantly modifies the relative strength of each of its thousands of individual synapses. However, we now know that nerve cells also constantly create new synapses with their neighbors and destroy existing ones. We also know that even in adults, nerve cells are constantly dying and being replaced by new ones, each forming its own de novo connections to other cells. In addition to all this, there are hundreds of billions of non-neuronal cells (glia) in the brain, which have also been shown to play an active role in modifying the strength and number of connections between neurons. This astonishing cellular architecture poses fundamental problems for anyone hoping to recapitulate all of a person’s neural processes in non-biological systems.
The brain’s signaling functions are so intimately connected to its underlying structure, even on a subcellular level, that it is hard to imagine faithfully recreating the brain’s total activity without also preserving this underlying biological complexity.
Each one of the brain’s billions of neurons individually creates its own array of thousands of electrical connections, which are endlessly being added, re-tuned, and removed. Nerve cells are continually dying and being replaced. All of this happens constantly and seamlessly, in every part of the brain, without any effect whatsoever on its global functioning. Transcendence would therefore require more than a perfect map of the brain’s incredibly complex three-dimensional structure and its trillions of synapses: these local modifications are so widespread and so common, happening thousands of times per minute in every neuron, that the brain essentially has a unique four-dimensional structure. The global map of every connection in your brain is now thousands of times different from when you began reading this paragraph. Moreover, because of this incredible variation in your brain’s cellular structure, it will never take on that exact configuration again; your brain, right now, is unique in history. No other brain, not even your own, has ever been or will ever be constructed exactly the same. For a non-biological computer to reconstitute this complexity would require either a processor that could constantly and autonomously rebuild its connections to every other processor in the machine, or a proliferation of processors so massive that the signaling activity of every one of trillions of individual connections could be mapped across both space and time.
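To get a feel for the scale of this churn, here is a toy calculation using the figures quoted earlier (86 billion neurons, roughly 6,000 synapses each); the rate of one thousand modifications per neuron per minute is an illustrative assumption standing in for “thousands of times per minute”:

```python
# Toy illustration of synaptic churn; all figures are rough estimates.
NEURONS = 86e9                   # ~86 billion neurons
SYNAPSES_PER_NEURON = 6e3        # ~6,000 synapses per neuron
MODS_PER_NEURON_PER_MIN = 1e3    # illustrative assumption, not a measured rate

total_synapses = NEURONS * SYNAPSES_PER_NEURON        # ~5.2e14 connections
mods_per_minute = NEURONS * MODS_PER_NEURON_PER_MIN   # ~8.6e13 changes per minute

print(f"~{total_synapses:.1e} synapses, ~{mods_per_minute:.1e} modifications/minute")
```

Even with this conservative assumption, tens of trillions of connections change every minute, which is why a static wiring diagram of the brain would be out of date before the scan finished.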
Underestimating the Complexity of Neuronal Signaling
So far, we have indulged the analogy of neurons to computer chips in the sense that they process and pass along electrical signals from their neighbors. However, this is only part of the story about how the brain processes information.
In the mid-2000s, I worked in the lab of Dr Samie Jaffrey at Weill Cornell Medical College in New York City. We were studying the process of nerve cell development — specifically, how the long cellular extensions (axons and dendrites) that connect neurons to other cells find their way to their appropriate targets and make synapses. In short, the question we tried to answer was how the growing tip of an axon is able to modify itself in order to grow toward, and form a synapse with, its target cell. These processes require significant molecular and structural changes in the axon’s tip, which is located far from the main body of the cell that houses the genetic instructions and biological machinery necessary to effect many of these changes.
As the axon tip nears its target, it senses chemicals produced by the target tissue. In response to these chemical cues, molecular machines in the axon synthesize proteins that are trafficked all the way back to the cell’s nucleus, where they modify gene expression patterns. In this way, the cell “knows” it has found its target and initiates the necessary biochemical responses to create a synapse. In turn, cells that do not receive this signal in good time die off. The brain produces many more neurons than it will ever need, and only those that reach the correct targets during development survive — this prevents neurons making inappropriate connections early in development and underpins the integrity of our brains’ incredibly complex architecture.
This axon-guidance process is an example of a second, slower, and far more complex analogue signaling architecture that operates alongside the electrical signals passing between and through neurons, and that underpins each cell’s future processing of that electrical activity. When neurons receive signals at a synapse, in addition to firing off electrical action potentials, they also produce a range of signaling proteins inside the cell — some of which affect the local environment and strength of the synapse, while others are dispatched to the cell’s operating center in the nucleus to modify gene expression levels.
In this way, a slower (seconds to minutes, rather than milliseconds), more complex, and longer-lasting response is processed by each cell for every signal it receives. The concept of transcendence, which reduces the brain to a simple collection of 86 billion processors, gives little to no thought to the importance of this parallel analogue signaling cascade that accompanies and moderates all the electrical signaling throughout the brain. In order to effectively copy every signal in the brain to a computer, the machine would have to account for all of this chemical signaling inside cells, and faithfully model its effects, in four dimensions, for every neuronal connection in the body. We are still in the very early stages of understanding how all of the metabolic, protein synthesis and trafficking, and gene expression pathways in neurons interact to produce long-term modifications in the cell’s responses to signaling.
The Biological Interface
Beyond our inability to adequately model the complete electrochemical signaling of every nerve cell in the brain, the second problem for the concept of Transcendence is the question of upload itself. Popular science fiction in this area imagines advances in brain scanning leading to a complete picture of signaling in the brain that can be stored and reconstituted by a computer. While advances in our ability to image signaling activity have led to huge gains in our understanding of how the brain works, Transcendence would require accurate mapping of every single electrical signal down to the level of the individual synapse. As we have seen, it would also be necessary to map the subcellular architecture and metabolic profile of every single nerve cell in order to capture its internal analogue processing state.
It is not beyond the realm of feasibility that brain imaging could progress to this point within our lifetimes, although it is unlikely that internal cellular activity could be analyzed so accurately, across so many billions of cells simultaneously, without causing significant damage to the tissue from the intensity of radiation or magnetic polarization that would be necessary for such a task. How else could transcendence upload consciousness from the brain to a hypothetical computer? Amazing strides have been made in the last decade toward creating much more effective bio-mechanical interfaces; this is especially evident in the development of high-tech prosthetics and sensory replacement devices. However, if our aim is to upload a complete representation of the self, then such an interface would surely have to interact with every active circuit in the brain. Again, while not impossible to imagine, the technical barriers to doing this while leaving brain function and signaling fully intact seem insurmountable given our current knowledge of neuroscience.
The Problem of Consciousness
Ultimately, the goal of Transcendence is the transfer of the conscious mind from the brain to a machine. While most scientists agree that the higher reasoning powers and self-awareness we call the mind arise in the brain, we have very little idea of how the brain’s cellular architecture translates into a conscious sense of self. More accurate modeling of the brain’s signaling pathways may lead us to a greater understanding of how it generates self-awareness, but we are a long way from understanding how biological systems develop consciousness. Even if we were able to program a supercomputer to effectively emulate self-awareness, how would we know whether it was real, or an illusion created by the program? Computer pioneer Alan Turing addressed this dilemma in 1950, proposing what is now called the “Turing Test” (he didn’t name it that himself), a series of experiments to determine whether a computer can think for itself as an autonomous being. Since then, there has been much debate about whether, and when, a computer will be able to pass a Turing Test — including a $20,000 wager on whether it will be done by 2029! IBM’s supercomputer ‘Watson’ can evidently think — it beat human champions at Jeopardy! — but no one involved in developing its ‘better than human’ thinking skills would claim that Watson is in any way self-aware. It seems we have a long way to go in understanding what consciousness really is before we are ready to reconstitute it digitally.
A fundamental issue for the transfer of the mind in Transcendence is another aspect of consciousness — the concept of self, or ego. Even if we could overcome all the technological and conceptual problems of reproducing a human brain in digital form, the idea that consciousness could be transferred to a computer strains how we define the self. Proponents of transhumanism might claim that an exact copy of a person’s brain functions in digital memory would allow them to “transcend” the biological self, but what happens to the original self? Maintaining two distinct versions of the same mind at the same time would violate our concept of individuality, which is central to defining a self. If the original mind were destroyed during the upload, we could possibly escape this paradox — but who would be willing to have their brain biologically destroyed in order to find out whether it really is their mind in the machine, or just a very convincing illusion for the people who interact with it? If the original brain persists, the copy must remain a copy, and the original remains the original. In this case, the digital copy is simply a recreation of the state of the original at the moment of replication — an entity more akin to a twin than to the self. Identical biological twins begin life as exact copies of each other, yet become more distinct as they age, grow, and accumulate different experiences through life — and no one seriously doubts that they are, and always were, distinct individuals. In the same way, your computerized transcendent copy might begin life thinking and acting identically to you, but would change and adapt differently as it accumulates its own distinct experiences. In that case, did you create a twin, or an alternate-history version of yourself?
Your brain, your mind, your self are an amazing and unique creation, defining an individualized concept of You in the time and space of the Universe. No brain has ever been exactly the same as yours, nor ever will be again — not even your own. It is only natural that one of the earliest and most enduring desires of human experience is the dream of throwing off our biological constraints to live in immortal bliss — yet it is our very biological brain that gives us the self-aware mind to feel that desire.
Technological developments are rapidly allowing humans to overcome the limitations of their physical selves; cochlear implants and “smart prosthetics” are already allowing the deaf to hear and the paralyzed to walk. In the near future, it is not unimaginable that we will use many more devices to complement, augment, or replace our bodies’ biological functions. The rapid development of wearable technology and implantable devices gives good reason to think that we will all be cyborgs soon, at least to some degree. However, all of these technologies are many orders of magnitude simpler than the complete replication of an individual’s thought patterns and their organization into a digital self that would be indistinguishable from the original. It seems far more likely that your brain will be the last biological survivor in a cyborg body, where you simply replace old, worn-out parts, than that it will be digitally uploaded to the cloud like your vacation pictures. You might still get to live forever, but it won’t be as an algorithm.