Since the beginning of mankind, we have been fascinated by immortality. Many have tried, from the pharaohs of ancient Egypt to devout followers of man-made religions. Yet, eternal life remains elusive.
The Hollywood-sensationalized movie The Imitation Game, based on the life of Alan Turing, considered the father of theoretical computer science and artificial intelligence, explores the notion of living forever through an intelligent system. And as mortality nears, many baby boomers are looking to science.
Baby boomer Ray Kurzweil, author of The Singularity Is Near and Director of Engineering at Google, stated at the Global Future 2045 International Congress that by 2045 we can reach immortality: humans will be able to upload their entire brains to computers and become digitally immortal.
How are wearables and the Internet of Things accelerating the trend towards immortality?
A new generation of lifelogging cameras and drones are enabling first-person and aerial-view recordings that persist in the cloud. Autographer is a wearable camera capable of shooting up to 2,000 shots a day while worn around the neck or clipped onto clothing. Narrative shoots two photos a minute and tags the location using built-in GPS. Nixie, a wearable and flyable drone camera, unfolds to create a quadcopter that flies, takes photos or video, then comes back to you. Trace allows users to record a third-person view of themselves, hands-free. We are closer than ever to being able to record our entire lives from birth to death.
The visual narrative of your life can then be re-experienced vividly through a virtual reality headset for an immersive 3D experience.
Cloud-Based Social Services
Lifelogging apps such as LifeLog, Reporter, Day One, Saga, Narrato, Path, OptimizeMe, and HeyDay complement drones and wearable cameras by making digital autobiography effortless: they integrate your social networking updates and photos, sync with the cloud, and add automatic metadata such as location, weather, date, time, movements and/or music choices.
Even Facebook has embraced lifelogging with its Year in Review feature, which shows your “biggest moments” in the past year. Twitter, albeit cumbersome, now lets you download your archive of tweets and browse them by month.
Then there are wearable devices with built-in physiological sensors. We are still early in the sensor product development lifecycle, but we can anticipate sensors that measure just about everything, including interstitial body fluids, and that may in the future rival the best of home medical diagnostics. So what does this mean? It means that you can record a lifetime of physical responses, everything from EEG and EKG readings to heart rate and blood pressure, corresponding to every moment of your life.
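What might such a record look like in practice? A minimal sketch, assuming a hypothetical schema that pairs each camera frame with the physiological readings captured at the same instant (the class, field names, and values here are invented for illustration, not taken from any real lifelogging product):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class LifelogSample:
    """One synchronized moment of a lifelog: a reference to the camera
    frame plus the physiological readings captured at the same instant."""
    timestamp: datetime
    frame_uri: str                          # pointer to the stored camera frame
    heart_rate_bpm: Optional[float] = None
    systolic_mmhg: Optional[float] = None
    diastolic_mmhg: Optional[float] = None
    eeg_uv: List[float] = field(default_factory=list)  # raw EEG channel samples

# One sample, as a wearable hub might log it to the cloud
sample = LifelogSample(
    timestamp=datetime(2015, 2, 6, 9, 30, tzinfo=timezone.utc),
    frame_uri="cloud://lifelog/2015-02-06/frame-000123.jpg",
    heart_rate_bpm=72.0,
    systolic_mmhg=118.0,
    diastolic_mmhg=76.0,
)
print(sample.heart_rate_bpm)
```

A lifetime of such samples, streamed continuously, is exactly the data set the rest of this piece contemplates.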
In fact, the MIT Media Lab project Inside Out: Reflecting on Your Inner State pushed this very boundary by using a wearable system consisting of a biosensor and a smartphone camera to measure physiological responses and capture visual context. A timeline of mosaic visualizations linking the day's physiological and visual imagery is then transmitted to a digital mirror interface, enabling a person to reflect on his/her daily activities. We can extrapolate that a user could rewind to a special moment of the past and see (through an immersive 3D VR headset) and feel (by recreating physiological responses in the body) what s/he felt.
The potential of sensors is only just emerging. Examine Apple's iOS 8 HealthKit type identifiers. Though many sensors are simply not there yet, Apple has begun to create a framework to handle future health and fitness data across six categories:

1. Body measurements: body fat percentage, body mass index, height, lean body mass, and weight
2. Fitness: active calories, cycling distance, flights climbed, NikeFuel, resting calories, steps, and walking + running distance
3. Nutrition: biotin, caffeine, calcium, carbohydrates, chloride, chromium, copper, dietary calories, dietary cholesterol, fiber, folate, iodine, iron, magnesium, manganese, molybdenum, monounsaturated fat, niacin, pantothenic acid, phosphorus, polyunsaturated fat, potassium, protein, riboflavin, saturated fat, selenium, sodium, sugar, thiamin, total fat, vitamins A, B12, B6, C, D, E, and K, and zinc
4. Results: blood alcohol content, electrodermal activity, forced expiratory volume (1 second), forced vital capacity, inhaler usage, number of times fallen, oxygen saturation, peak expiratory flow rate, and peripheral perfusion index
5. Sleep analysis
6. Vitals: blood pressure, body temperature, heart rate, and respiratory rate
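The identifiers themselves are plain strings in Apple's framework. As a sketch, a small sample of real iOS 8 HealthKit identifiers can be grouped by category (in an actual app these would be `HKObjectType` lookups in Swift; this Python listing is illustrative and far from exhaustive):

```python
# A few of Apple's iOS 8 HealthKit type identifiers, grouped by the
# categories the framework organizes them into. Identifier strings are
# from Apple's documentation; the grouping structure here is just a
# plain dict for illustration.
HEALTHKIT_CATEGORIES = {
    "body measurements": [
        "HKQuantityTypeIdentifierBodyMassIndex",
        "HKQuantityTypeIdentifierHeight",
        "HKQuantityTypeIdentifierBodyMass",
    ],
    "fitness": [
        "HKQuantityTypeIdentifierStepCount",
        "HKQuantityTypeIdentifierDistanceCycling",
        "HKQuantityTypeIdentifierFlightsClimbed",
    ],
    "nutrition": [
        "HKQuantityTypeIdentifierDietaryCaffeine",
        "HKQuantityTypeIdentifierDietaryProtein",
    ],
    "results": [
        "HKQuantityTypeIdentifierOxygenSaturation",
        "HKQuantityTypeIdentifierBloodAlcoholContent",
    ],
    "sleep analysis": [
        "HKCategoryTypeIdentifierSleepAnalysis",
    ],
    "vitals": [
        "HKQuantityTypeIdentifierHeartRate",
        "HKQuantityTypeIdentifierBloodPressureSystolic",
        "HKQuantityTypeIdentifierBloodPressureDiastolic",
        "HKQuantityTypeIdentifierBodyTemperature",
        "HKQuantityTypeIdentifierRespiratoryRate",
    ],
}

def category_of(identifier):
    """Return the category a given HealthKit identifier belongs to."""
    for category, identifiers in HEALTHKIT_CATEGORIES.items():
        if identifier in identifiers:
            return category
    return None

print(category_of("HKQuantityTypeIdentifierHeartRate"))  # vitals
```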
One Step Closer to Immortality?
At some point in the future, with this exhaustive data set of your every moment, encounter, and physical response, science may be able to stitch the video stream together with the physiological data and transfer it to a new body, perhaps a human clone developed from your own DNA, or a robotic substrate. Does it sound far-fetched? Perhaps.
Some futurists and transhumanists believe that through mind uploading or whole brain emulation (WBE) one can achieve immortality. In theory, the brain can be disassociated from the body, thus no longer limited to the lifespan of a biological body.
The human brain contains an estimated 300 million pattern recognition modules and about 85 billion nerve cells in its neural network. Signals are transmitted across synapses, the junctures between neurons, by the release of neurotransmitters. Neuroscientists believe that functions such as learning, memory, and consciousness are due to purely physical and electrochemical processes in the brain.
Scientists propose two general methods for WBE: copy-and-transfer or gradual replacement of neurons. In the copy-and-transfer method, mind uploading would be achieved by scanning and mapping the brain, then copying, transferring, and storing the simulated mind in a computer system. The other approach is a gradual transfer of functions from the biological brain into an exocortex, evolving the human species into cyborgs or transhumans.
WBE relies on the idea of neural network emulation. Rather than having to understand the high-level psychological processes and large-scale structures of the brain, and model them using classical artificial intelligence methods and cognitive psychology models, the low-level structure of the underlying neural network is captured, mapped, and emulated with a computer system. In other words, rather than analyzing and reverse engineering the behavior of the algorithms and data structures that reside in the brain, a blueprint of its source code is translated into another programming language.
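To see what "emulating the low-level structure" means in miniature, consider a toy leaky integrate-and-fire neuron, a standard simplified neuron model: the emulator steps membrane dynamics forward without knowing anything about the cognition those dynamics implement. The parameters below are illustrative, not biological:

```python
# Toy leaky integrate-and-fire neuron. Emulation at this level simulates
# membrane potential and spiking directly, rather than modeling any
# high-level psychological process. All constants are illustrative.

def simulate_lif(inputs, v_rest=0.0, v_thresh=1.0, leak=0.9):
    """Step a single leaky integrate-and-fire neuron over a sequence of
    input currents; return the spike train (1 = fired, 0 = silent)."""
    v = v_rest
    spikes = []
    for i in inputs:
        v = leak * v + i          # potential leaks toward rest, then integrates input
        if v >= v_thresh:
            spikes.append(1)      # threshold crossed: the neuron fires
            v = v_rest            # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.5, 0.5, 0.5, 0.0, 1.2]))  # [0, 0, 1, 0, 1]
```

A whole-brain emulation would, in principle, run billions of far more detailed versions of this loop in parallel, wired together by a scanned connectome.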
Many theorists have presented models of the brain and have established a range of estimates of the amount of computing power needed for partial and complete simulations. Using these models, some have estimated that mind uploading may become possible within several decades if trends such as Moore’s Law continue.
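Such projections reduce to simple arithmetic: pick an estimate of the compute an emulation requires, an estimate of what is available today, and a doubling period. The sketch below uses purely illustrative numbers (a hypothetical 10^16 FLOPS target and 10^13 FLOPS baseline, doubling every two years), not figures from any particular model:

```python
import math

def years_until(target_flops, current_flops, doubling_years=2.0):
    """Years until compute reaches target_flops, assuming capacity
    doubles every doubling_years (a Moore's-Law-style extrapolation)."""
    doublings = math.log2(target_flops / current_flops)
    return doublings * doubling_years

# Hypothetical inputs: 1e16 FLOPS emulation target, 1e13 FLOPS today.
print(years_until(1e16, 1e13))  # roughly 20 years under these assumptions
```

The fragility of every such estimate is that all three inputs are contested; change the emulation target by a few orders of magnitude and "several decades" moves accordingly.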
Regardless of the techniques used to capture or recreate the function of a human mind, the processing demands are likely to be immense, due to the large number of neurons along with the considerable complexity of each neuron. Henry Markram, lead researcher of the Blue Brain Project, initially stated that emulation would be difficult because every molecule is a powerful computer, and neuroscientists would need to simulate the structure and function of trillions of molecules as well as all the rules that govern how they interact. In 2009, however, after successfully simulating part of a rat brain, Markram claimed that a detailed, functional artificial human brain could be built within the next 10 years.
Of course, mind uploading doesn't guarantee that a biological clone or humanoid robot, simply referred to as an “Upload,” will perceive the simulated mind as its own consciousness. Consciousness has been described as sentience, awareness, subjectivity, the ability to experience or to feel, having a sense of selfhood, and the executive control system of the mind.
Would an Upload possess self-awareness: the capacity for introspection and the ability to recognize oneself as an individual, separate from the environment and other individuals? Would it be able to examine its own conscious thoughts and feelings?
The philosopher and transhumanist Susan Schneider argues that uploading would create a creature that is a computational copy of the original biological mind, but that this doesn't mean we can upload and survive; it would only create the illusion that the original person is still alive. Schneider states that “it's implausible to think that one's consciousness would leave one's brain and travel to a remote location.” Are we to assume that an Upload is conscious if it displays behaviors that are highly indicative of consciousness? Are we to assume that an Upload is conscious if it verbally insists that it is conscious? The mystery of consciousness precludes a definitive answer. Numerous scientists strongly believe that determining whether a separate entity is conscious is fundamentally unknowable, since consciousness is inherently subjective.
What's potentially missing from the academic discourse is the inner voice, the verbal stream of consciousness in which we think in words. Our human experience is more than visual imagery, physical responses, and electrochemical processes in the brain. Accompanying our every action and thought is the internal monologue we carry on with ourselves at a conscious level.
For instance, your first date can't simply be recreated with video and physical stimuli. Your mind races over what to say. Should I say that s/he looks nice? How should I start the conversation? How should I act? How will I be perceived? What actually comes out, audibly, is “Hey, you must be hungry. Want to go grab some dinner?”
Does inner speech reside in our neural networks? Do the millions of pattern recognition modules in the brain accurately contain and persist inner dialogues as part of memories, or do they vanish as quickly as they come? If the latter, could an Upload really know what it is, or was, like to be you?
One argument for mind uploading is space travel. After all, Earth won't be habitable forever (1.75 to 3.25 billion years, according to some astrobiologists, assuming we don't destroy it first).
Interstellar distances are typically measured in hundreds of thousands of astronomical units; some 59 known stellar systems lie within 25 light-years of the Sun. Among the methods proposed for surviving interstellar travel are suspended animation, extended human lifespans, frozen embryos, mind uploading, or a combination. Assuming travel propelled by antimatter rockets or nuclear fusion rockets is feasible in the future, the embryo space colonization approach could be used to create a new human colony on a habitable terrestrial planet, raised by robotic nannies. It's also possible that some of the greatest intellectuals on Earth could live on via mind uploading, helping to bridge mankind's collective knowledge with the new colony.
Ethical and Legal Implications
Brain emulation surfaces unpleasant questions. Should this new generation of transhumans recall the past atrocities of their forefathers, the wars and genocides? Would deleting these memories in Uploads create a utopian society? I argue that by the very nature of humans, even in this new breed of transhumans, the sin nature of greed, pride, and envy lurks deep within the scanned and mapped simulated brain. No amount of data scrubbing and transformation before upload can change that.
Or worse: as Kurzweil predicts, strong artificial intelligence will supersede human intelligence, an event known as the technological singularity. What if these machines have the final say on what memories we are allowed to retain? What if smart machines, lacking a conscience, deduce that humans should be terminated because of our sinful and violent nature? Who will be the master then?
What about legal rights for Uploads? Should they have the same inalienable rights as biological humans, and therefore fall under the protection of the law?
The Big Question
The ramifications of wearables and the Internet of Things go far beyond enabling sensor data and communication between people, objects, and machines. They have the potential to extend life itself.
Perhaps the question shouldn’t be how can we live forever but rather should we live forever?
Originally published on Wired on February 6, 2015. Author: Scott Amyx.