Emmortality: The Project of a Lifetime
Humanity has pursued immortality for centuries through religion, science,
and philosophy; just as often it has been the stuff of science-fiction
plots and fantasy. In the digital age, immortality mutates into
emmortality: the use of electronic media to emulate a person in perpetuity. In
the development of this model, there are several psychological concerns (e.g.,
consciousness, perception, thinking, and reason) as well as philosophical
quandaries (how would this change our cosmology?) that must be addressed. The
technology for this may be the easy part.
This proposal comes with a few basic caveats or limitations. First of
all, this is not a concept of immortality that a subject gets to enjoy - it's
for the "benefit" of others (which is, arguably, what immortality is). This is
not a cryogenic, Golem-like, or Frankenstein-esque solution; the focus is on
technology, not biology.
Paul Ryan notes that "'Immortality' depends on the human practice of
remembering the dead" (1991, p. 225). Allucquere Rosanne Stone said, "...it is
important to remember that forgetting about the body is an old Cartesian trick,
one that exacts a price from those bodies rendered invisible by the act of
forgetting..." Humans can keep someone alive in their memories; memories can be
prompted by an idiographic circumstance, photo, or video that leads into
bittersweet remembrances. George Berkeley, an 18th-century British philosopher,
similarly defined existence as being perceived.
But what if one could run a program that would simulate the advice a
long-dead grandfather would have given to an unborn grandchild? What would a
deceased spouse counsel in a time of crisis? I am suggesting the use of
technology to create what I call the Emmortality Program. The Program would
emulate its user, and, when the user no longer has a cooperative body (i.e.,
after death), the Emmortality Program would substitute for the departed human.
It would be ideal to maintain the person in a functional state, but medical
technology has, thus far, been unsuccessful in achieving this objective.
I am assuming that keeping the body alive with such medical technology would be
an inefficient use of resources. I suggest maintaining the mind instead;
hence, emmortality (an emulation of it). It may
seem a little macabre, but personally, I would love to interact with a PC
version of my grandparents. I know that an Emmortality Program would not be
them, per se, but an emulation. Nevertheless, they'd seem almost as real as
anyone else with whom I communicate via e-mail.
Methods and Technology: Is It Alive?
Most of what I discuss herein already exists, and what doesn't, will.
Again, this is not science fiction, it's science proposition. We already have
personal digital assistants (PDAs) that "learn" to make presumptions about
numerous patterns of user activity. Chris Langton and Doyne Farmer, pioneers in
a field known as Artificial Life, note that many basics have already been worked
out. For example, John Holland has developed a classifier system, and as Langton
notes, there are "...other systems that use genetic principles in order to
search large problem space to help find optimal solutions or to help find better
solutions than the ones we know. They are using principles imported from
biology, those of mutation and genetic recombination - programs that are
represented in such a way that most of the operations we do with them will
result in viable programs... Another example that comes close to being alive is
computer viruses, which satisfy a lot of criteria for living things" (p. 6).
The mathematician Norbert Wiener coined the term "cybernetics" to
refer to self-regulating machines. In his 1948 book, Cybernetics: Control and
Communication in the Animal and the Machine, he examines the likenesses
between animals (including Homo sapiens) and machines. Hardison pointed
out that it is natural to compare cybernetic machines to humans. This raises the
question: would cybernetic machines possess self-awareness? It's not that they
would be self-aware per se, but the Emmortality Program could be designed to
mimic self-awareness via some Gödel-type algorithm. Such machines already
provide a handy "three-dimensional metaphor for self-awareness" (p. 294). This
technology is nothing to fear. J.G. Ballard has iatrogenically defined personal
computers as the brain's "...subcontracting of many of its core functions,
creating a series of branch economies that may one day amalgamate..."
Farmer adds that, "Lifeness should perhaps be thought of as a
continuous property. To me, a machine is a little more alive than a rock and
probably less alive than a virus, which is less alive than a bacteria, which is
probably less alive than [a human]. But nature can throw an Avogadro's number of
computers at something because it's got zillions of molecules, all of which act
like independent parallel processors. We really can't do that. We don't have
that kind of computing power at our disposal, so we are forced to make these
abstractions where we take an aspect of something out and build a little model
around it that does what the original does, and so we have models of living
things" (p. 7). It is one of these little models that can be made to emulate
someone into emmortality. This may be considered the ultimate in inbreeding:
self-regeneration without cloned progeny.
Individuals develop via various cognitive inputs and by data
processing. Little that contributes to one's attitudes, beliefs, or opinions is
innate, although such reductionism may be distasteful to some. A PDA-like
program would act as an intelligent agent for a person - by reading what the
person reads via e-news, having a repository for what was browsed on the net,
and being provided with various psychological and social history data about the
user (via automated psychological tests and programs). Interactive learning
simulations would provide various neural connections and associations between
the human user (or "domain expert"), the program, and the world. Thus, a
database would be constructed from the individual human user's baseline (or
history) and then be updated for as long as the user lives. Furthermore, it
would continue to develop via new inputs of continuing world developments on the
macroscopic level and family/community updates on the microscopic level.
Periodically, the Emmortality Program and the person would be presented with the
same random questions and situations, and the Program's responses would be
compared with the human's. Differences beyond a certain tolerance would be
tweaked to better match the user and provide
opportunities for the Program to learn. In a sense, it would be an ongoing
Turing test, but with benevolent cheating opportunities.
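The calibration loop described above - pose the same random probes to both parties, compare, and nudge the Program when it drifts - can be sketched in a few lines. Everything here (the Persona class, the numeric "responses," the adjustment rate) is hypothetical scaffolding for illustration, not a real system:

```python
import random

class Persona:
    """Toy respondent: answers probes from a fixed numeric profile."""
    def __init__(self, profile):
        self.profile = dict(profile)

    def respond(self, probe):
        return self.profile.get(probe, 0.0)

class EmmortalityProgram(Persona):
    """Hypothetical learner: nudges its profile toward the user's answers."""
    def adjust(self, probe, target, rate=0.5):
        current = self.profile.get(probe, 0.0)
        self.profile[probe] = current + rate * (target - current)

def calibrate(program, user, probes, tolerance=0.05, rounds=200, seed=0):
    """The ongoing, benevolently cheating Turing test: random probes,
    compared responses, and a tweak whenever the Program drifts
    beyond tolerance."""
    rng = random.Random(seed)
    for _ in range(rounds):
        probe = rng.choice(probes)
        divergence = abs(program.respond(probe) - user.respond(probe))
        if divergence > tolerance:
            program.adjust(probe, user.respond(probe))
    return program

user = Persona({"advice": 0.9, "crisis": 0.2, "humor": 0.7})
prog = EmmortalityProgram({"advice": 0.1, "crisis": 0.8, "humor": 0.0})
calibrate(prog, user, ["advice", "crisis", "humor"])
```

After enough rounds the Program's answers sit within tolerance of the user's - the "benevolent cheating" is simply that the Program is told the right answer whenever it misses.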
Thus, with the causal impact of ongoing environmental changes
post-mortem, there would continue to be input - as the domain expert/user would
have established the information drivers pre-mortem. The Program would be able
to continue with a robust cognition - both aware of and responsive to change in
the world. Psychology instructs us that associative memory (i.e., learning) does
not require consciousness, just some good cause-and-effect scenarios.
Within the domain of artificial intelligence, Neuron Data in Mountain View,
California, has a knowledge-based program that exploits various genetic
algorithms and neural networks. It is being used for tasks ranging
from detecting bank fraud to making triage decisions in emergency rooms. There
is also the Connection Machine, which as Farmer describes it, "is a physical
system that is designed to simulate other physical systems...(which begs the
question)...is there a threshold of complexity that we have to reach in order
for something to behave as though it were alive?" (pp.10-11).
Almost a decade ago, Sejnowski and Rosenberg created NETtalk. This
parallel network program exploited a minuscule 231 "neurons" in a
self-organizing algorithm. It taught itself to talk after being provided with
rudimentary phonetic elements. Kinoshita and Palevsky described its linguistic
development "like a child, the network starts out untrained, and produces a
stream of meaningless babble... The continuous stream of babble first gives way
to bursts of sound, as the network 'discovers' the spaces between words... After
being left to run overnight... NETtalk is talking sense." That was almost a
decade ago. Certainly the current facility with non-linear relationships (the
stuff of these neural networks) is a good starting point. Back-propagation
allows for successful machine learning at our current level of sophistication.
We can even add the "noise" of additional, non-sequitur interests that may be
subtle but nonetheless contribute to what makes a person unique. Katia Sycara, a
professor at Carnegie Mellon University, has developed such noise-makers in a
financial decision-making AI program via intelligent, and talkative, autonomous
agents. Hardison provides a good tutorial:
The creation of an expert system is analogous to memorizing.
Conversely, the learning that occurs in certain kinds of parallel systems is
like the programming that the mind seems to do for itself as a result of
interaction with the environment during infancy. This is because parallel
systems can be designed so that the strengths of connections between their
nodes are created by the data received. For example, a connection used
frequently can have its electrical resistance lowered; one used infrequently
or not at all can have its resistance increased. The changes favor one set of
connections while interdicting others. The process seems to resemble the
creation of associative patterns in the brain. Through the development of
these patterns, neural networks can be, to a certain degree, self-organizing,
and what is organized is a crude internalization model of a fragment of
reality. (p. 310)
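Hardison's description - connections used frequently have their resistance lowered, unused ones raised - can be illustrated with a toy Hebbian-style sketch. The event names, reinforcement rate, and decay rate below are all invented for illustration:

```python
from collections import defaultdict

def shape_connections(observations, strengthen=0.2, decay=0.02):
    """Connection strengths created by the data received: each observed
    co-occurrence reinforces its connection, while every step all
    existing connections decay slightly (resistance raised by disuse)."""
    weights = defaultdict(float)
    for pair in observations:
        for key in list(weights):               # disuse: everything decays
            weights[key] = max(0.0, weights[key] - decay)
        weights[frozenset(pair)] += strengthen  # use: this link strengthens
    return dict(weights)

# a frequently repeated association ends up stronger than a rare one
history = [("thunder", "rain")] * 8 + [("thunder", "picnic")] * 2
w = shape_connections(history)
```

What is "organized" here is exactly Hardison's crude internalized model of a fragment of reality: no connection was programmed in; the data alone favored one set of links while interdicting others.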
You see, self-organizing, parallel systems are the ultimate cybernetic
"do-it-yourselfers." Humans design such systems to a finite point, and the
machine takes it from there.
Minsky (in The Society of Mind) argues that what occurs in
human cyberspace - the mind - results from a culture "of special-purpose units
and interdisciplinary controls. If so, many of the basic modules must be created
by environmental stimuli that share neuron connections as the (human) brain
develops. They are in this sense self-organized, and presumably they often
operate in parallel rather than in serial ways" (p. 310).
What kind of machine could run an Emmortality Program? Let's examine
the nomological hardware needed to run cognitive processes. Typically accepted
contemporary conventions for brain speed are that when 1%-10% of a given brain's
neurons are firing at any one time they do so at a rate of about 100
times/second. The speed of such things is measured in FLOPS (Floating-Point
Operations per Second). This refers to the time needed to add, subtract, divide,
or multiply two numbers expressed in scientific notation; e.g., if
calculating 3.82x10^7 + 4.57x10^6 = 4.28x10^7 takes ten seconds, that is a
rate of 1/10 FLOPS. If one neuron is equivalent to one FLOP, then 1% = about 10
gigaFLOPS. If one synapse is equivalent to one FLOP per firing, then 10% is
about 10 tera (trillion) FLOPS. About ten million FLOPS would be the upper limit
for the power required to simulate a single neuron, therefore 100,000 teraFLOPS
would be needed to simulate an entire brain. Moravec's estimate is about 10
teraFLOPS. However, one would not need a whole brain emulated for an Emmortality
Program - what's the use of occipital lobes or structures maintaining primitive
bodily functions such as respiration, heart rate, glandular activity, etc.?
As for storage, if one neuron codes one bit and the brain has 10^10
neurons, then 10^10 bits would be needed. If every neuron has about 10^5
connections with other neurons, and every connection codes one bit, then the
number of synaptic connections in the cortex and cerebellum is 10^15.
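These back-of-envelope figures are easy to reproduce from the text's own assumptions (10^10 neurons, 10^5 connections per neuron, roughly 100 firings per second):

```python
NEURONS = 10**10       # neurons in the brain (the text's figure)
CONNECTIONS = 10**5    # synaptic connections per neuron
FIRING_RATE = 100      # firings per second

# 1% of neurons active, one FLOP per firing: about 10 gigaFLOPS
flops_low = 0.01 * NEURONS * FIRING_RATE      # = 1e10

# upper limit: ~1e7 FLOPS to simulate each neuron: 100,000 teraFLOPS
flops_full = 10**7 * NEURONS                  # = 1e17

# storage: one bit per synaptic connection: 1e15 bits
bits = NEURONS * CONNECTIONS
```

Dividing the 10^17 figure by 10^12 (one teraFLOPS) recovers the 100,000-teraFLOPS whole-brain estimate quoted above.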
Tipler postulates that 10^15 bits processed at a speed of 10 teraFLOPS
is a brain. Contemporary machines can handle 10^15 bits, but it's the cruising
speed that's the problem. A Cray-2 (c. 1986) could do one gigaFLOP, by 1990 we
had 10 gigaFLOP machines. In 1992, Thinking Machines had a super-computer that
ran at 100 gigaFLOPS. So-called ultra computers could clock two teraFLOPS just a
year later. With current exponential industry advances, a 1,000-teraFLOPS
machine should be here by 2000 or so, and a 100,000-teraFLOPS (10^17 FLOPS)
model wouldn't be too far behind. This bodes well for the Strong AI Postulate
and the development of
an exosomatic brain emulator.
A Bicameral Mind or Deus Ex Machina?
Julian Jaynes presented the bicameral mind as the converse of our "own
subjective conscious minds." It is true that the Emmortality Program is not
conscious. I echo Jaynes's finding that consciousness is not the sine qua
non for reconciling experience, or for concept formation, learning, reason,
or even thinking. So, then, does the Emmortality Program need consciousness? Not
at all. Can a machine or a program be "conscious"? John Searle says "no." I say it doesn't need to be.
Of course, there is the risk of the Emmortality Program being better
than the domain expert/user. It could evolve into being more empathic and
interested in others, wiser, etc. It would certainly be smarter, since it
started off that way. But intelligence is not what makes us human; it is not a
uniquely human quality. But, then, what is? Some argue consciousness is. So, if
the Emmortality Program or another such system were "asked" if it was conscious
and it said "yes," how would (or could) one argue it was wrong? The thrill - or
horror - of a machine or program thought to be conscious is likely rooted in
humanity's narcissistic habit of anthropomorphizing. As can be documented, this
dates back to Darwin, Titchener, and Binet, to books such as The Psychic Life
of Micro-Organisms and others. Well-intentioned but misplaced sympathies
cause projections of consciousness upon everything from protozoa to earthworms.
Hardison notes how we do the same with computers. Doesn't a computer
use language in order to reason logically? It understands C++ and Fortran. It
has memory (both short-term [RAM] and long-term [ROM]). It plays games and can
talk. A computer can get a virus as well as be inoculated. We can now move from
jingoistic anthropomorphic metaphors to reality. Maybe the human-mechanistic
metaphor is a two-way street. Bi-directionally, one could accurately equate a
machine with the body, an information-processing device with the brain, and a
program with intelligence or thinking processes. (Aren't bad habits for which we
wish to claim no volitional responsibility "hardwired" in?)
Farmer argues that even pure science is anthropomorphic, "when you
jump from the Ptolemaic view to the Copernican view of the solar system, you've
taken a small step toward making our human view of the universe less
anthropomorphic... (but) when we assign magical properties to ourselves, such as
intelligence, that we refuse to assign to something else, then I think that as
we are confronted with things that are overtly intelligent we will have to begin
to accept that they are intelligent." Perhaps this may be the contemporary
Copernican shift of humans from the center of a metaphorical universe.
In Understanding Computers and Cognition, Winograd and Flores
postulate that computers have an inherent Heideggerian blindness. That is,
computers are unable to see the wide spectrum of stimuli that we humans can. It
echoes another human-centric philosopher's contention that computers "only deal
with facts, but man - the source of facts - is not a fact or set of facts, but a
being who creates himself and the world of facts in the process of living in the
world" (Hubert Dreyfus). Arguments against these myopic complaints of
cybernetic blindness are:
1) Is such blindness "bad"? Oedipus had to put out his eyes in order
to truly see. "Blindness" brings with it decreased distraction as well as
insulation from accidental illusion or purposeful sleight-of-hand. Psychology's
school of operationalism would offer that experience/conditioning is the road to
knowing;
2) If human abilities are used as a yardstick, then humans are bound
to mis-measure and thus mis-perceive or misunderstand silicon (NETtalk is a good
example of this), and;
3) As Hardison puts it "the problem is not what computers are in some
Platonic sense but how they are perceived, which is closely related to how they
are incorporated into the web of human culture" (p. 323).
This leads to another concern: semantics versus syntax. Such is the
crux of John Searle's argumentative example of the Chinese Room, in that
following certain procedures can produce correct results, but doing so does not
prove knowledge or learning, and most certainly not consciousness.
Certainly a computer does not know what "mom" means in the same way a
person recalls the meaning of the term. Thus, the computer has the syntax down
pat but is devoid of semantics. Searle's point is that good prose doesn't equal
good poetry. It parallels Winograd and Flores's blindness concept. Hardison's solution
to this supposed dilemma "...is interesting because we have no way of knowing
about the subjectivity of anything except by what we observe. If somebody said,
'I am conscious,' and you replied, 'I can't prove you are not conscious, but I
know you are unconscious anyway,' your attitude would seem a bit churlish. Since
you honestly don't know what is going on in the head of the person who says, 'I
am conscious,' you have to take the person's word for it. Who, after all, knows
better than the person whether or not he is conscious? Who knows, really,
whether anybody is conscious in Searle's sense? Who knows what thought is?
Perhaps consciousness is a matter of procedures - a syntax - and semantics is an
illusion created by the syntax" (p. 331). Perhaps this is Searle's attempt to
put a new spin on credo quia impossibile?
So what's "better" and who's to say? If a computer is blind (a la
Winograd and Flores) to what it is to be human, we are similarly impaired when
it comes to an empathic understanding of what it means to be silicon. We may
never know if a machine is conscious in the way we conceptualize it and in the
way we perceive/believe our senses to be, and vice-versa. Maybe it is simply "a
semantic quibble: machines cannot acquire human abilities because they are
machines."
The Problematic Effects of Accuracy
Cognitive psychology has long noted that humans rarely
receive data at 100% accuracy. When various types of noise get in the way -
especially emotional contaminants - it is referred to as an "apperception." At
this non-emotive stage of computer evolution, such contaminants would be absent.
But this lack may be problematic, as it would subtract a helpful human-like
realism. Sycara's work may add some helpful connections with fabricated noise,
perhaps along with non-entropic chaos or even digitized apperceptions.
It is unusual to think of having accurate data/memory as a problem.
Furthermore, our brains likely do not store complete memories of all that they
remember.
The human body is constantly deteriorating and being renewed. Every 10 to 23
seconds our personal, subatomic quarks and gluons (the stuff of neutrons and
protons - which make up one's atoms) "die" and are replaced. Aging is a cruel
example of the inexactness of such replications. And brain cells don't even
replicate. "What probably happens (with human memory) is that any part of a
memory is stored and recall of the entire memory involves the inverse of data
compression techniques: the memory is 'fleshed' out. Such a storage mechanism is
probabilistic; errors can be made in this process, so perfectly sane people
occasionally 'remember' events that never occurred" (Tipler, p. 237). I would
offer that probabilistic occurrences can be coded into the Emmortality Program
quite easily. Randomizers would not be difficult to introduce.
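Tipler's probabilistic-recall mechanism is trivially mimicked: store a gist, "flesh out" the details on recall, and let a randomizer occasionally substitute a plausible fabrication. The gist, details, and filler vocabulary below are invented placeholders:

```python
import random

def recall(gist, details, error_rate=0.05, seed=None):
    """Probabilistic recall: each stored detail is usually retrieved
    faithfully, but with a small probability a plausible-but-false
    detail is 'remembered' instead - a confabulation."""
    rng = random.Random(seed)
    plausible = ["sunny", "crowded", "quiet", "rainy"]   # filler vocabulary
    fleshed = [rng.choice(plausible) if rng.random() < error_rate else d
               for d in details]
    return gist, fleshed

# error_rate=0 gives perfect recall; raising it yields the occasional
# sincerely "remembered" event that never occurred
memory = recall("a picnic", ["sunny", "by the lake"], error_rate=0.0)
```

Tuning error_rate is precisely the kind of randomizer the Emmortality Program could use to trade machine accuracy for human-like fallibility.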
Turing suggested that the inclusion of "random elements" would be a
necessary aid in the development of a computational machine that would pass his
test. As Tipler put it, "...although deterministic algorithms may exist to solve
a problem, often these require such an enormous amount of computer capacity that
systematic 'guessing' - making choices among equally weighted possibilities at
random - is almost always more efficient" (p. 194). This has led to the concept
of "heuristic programming."
In human decision-making, random selection may be most efficient due to
the "information cost" phenomenon. That is, it may take an inordinate amount of
resources to determine a solution by exhaustively examining all relevant data
before arriving at one. This avoids the Buridan's Ass dilemma: a random choice
is better than indecision. Game theory would also tend to
support this. Heuristic programs themselves use a pseudo-random number generator
to act as a randomizer. [A pseudo-random number is produced by a
deterministic algorithm, but it is so complex that one cannot tell the
difference between a pseudo-random number and an actual random number.]
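A minimal sketch of such heuristic tie-breaking - hypothetical names throughout - seeds a pseudo-random generator (deterministic given the seed, but indistinguishable in practice from true randomness) and picks among equally weighted options rather than deliberating further:

```python
import random

def choose(options, utility, seed=None):
    """Avoid Buridan's Ass: when several options score equally well,
    pay no further 'information cost' - pick one pseudo-randomly."""
    rng = random.Random(seed)      # deterministic algorithm, seeded
    best = max(utility(o) for o in options)
    tied = [o for o in options if utility(o) == best]
    return rng.choice(tied)

# two equally attractive bales of hay: the ass still eats
pick = choose(["left bale", "right bale"], lambda o: 1.0, seed=42)
```

When the utilities differ, the function simply returns the best option; the randomizer only earns its keep at a genuine tie.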
The quantum No-Cloning Theorem would prevent the
Emmortality Program from true human emulation because of its inability to mimic
or download quantum-mechanically entangled human relationships and such inputs.
Many assume that human life should not be considered a quantum state. But my
point is that the Emmortality Program is a mimicked version of its user, not the
user her or himself. The Bekenstein Bound would actually support an even more
radical postulate (a la Tipler) that "...using computer memory capability of the
amount indicated by the Bekenstein Bound, a computer simulation of a person, a
planet, [or even] a visible universe will not merely be very good, it will be
perfect [emphasis in original], it will be an emulation" (p. 223). [It is beyond
the limitations of this article to venture off into discussions about emulated
quarks and their capability to reconstitute ontological free will.]
Forgetting can be quite therapeutic for humans in certain
circumstances. It can be a powerful analgesic (if not opiate). It can help free
people from non-productive fixations. The paradox is being able to choose what
is forgotten. The Emmortality Program then raises a new cosmological conundrum:
Is it desirable to keep interacting with the dead? Psychics and mediums have
supposedly been doing it for centuries. There are possible concerns that merit
examination, but the occult will be omitted.
Certainly monuments, mausoleums, Indian burial mounds, memorial wings
in buildings, endowments, various statues, and other such technologies and
symbols act to preserve the memory of the dead. So do paintings, audio and video
recordings, photographs, and letters. Perhaps, within many cultures, the closure
and finality of death are not as absolute as one may think. Certainly, Judaic,
Christian, and Islamic traditions would concur.
It is Ryan's opinion regarding the examination of such issues
vis-a-vis video "...that only in the context of... a new (cosmology) can we
invent stable (and suitable) rituals that allow us to replay the dead
live...Given the flexibility of electronic information technologies, we have the
possibility of flexing the story in a non-narrative way that avoids the patterns
of dominance associated with egocentric 'master narratives.' Another way of
saying this is that we encode cosmology in a way that it is sensitive to chaos
and responsive to local knowledge" (pp. 225-226).
Computers are simulators - they simulate typewriters, musical
instruments, drawing boards, etc. A simulation, digitally speaking, is a model
of bits arranged in a pattern that mimics the object/procedure in question.
This arranged code yields what one recognizes as a program. Running a program is
analogous to putting the model into action (e.g., typing with a word processor).
An emulation is a perfectly modeled simulation of a space or task. The Bekenstein
Bound would support that, with adequate computer power, a person (or at least
the mimicked bits that make up a person) could indeed be emulated. But one need
not go to this extreme as simulated levels would be quite adequate.
Hypothetically, if there were a working Emmortality Program - at an
emulation level - would an emulated mind then "exist"? That's for Descartes to
determine. Such scenarios conjure philosophic conundrums, such as "How does one
know he/she is not already an emulation?" Leibniz may offer help in this
matter, via his "Identity of Indiscernibles" rule. That is, "...entities which
cannot be distinguished by any means whatsoever, even in principle, at any time
in the past, present, and future have to be considered identical" (p. 208).
Was Hans Moravec right? Will humans disappear into these machines,
perhaps via the route offered by an Emmortality Program? A better
conceptualization is that humans will reappear out of such machines. Silicon is
already immortal. Perhaps it is time we move outside of ourselves to see what we
can learn from it.
Ballard, J.G., "Project for a glossary of the Twentieth Century", in J. Crary
& K. Winter, eds., Incorporations. New York: Zone, 1992.
Dreyfus, H.L., What Computers Can't Do: A Critique of Artificial
Reason. New York: Harper & Row, 1972.
Hardison, O.B., Disappearing Through the Skylight. New York: Viking, 1989.
Minsky, M., The Society of Mind. New York: Simon & Schuster, 1986.
Stone, A.R., "Virtual Systems", in J. Crary & K. Winter, eds.,
Incorporations. New York: Zone, 1992.
Tipler, F.J., The Physics of Immortality. New York: Doubleday, 1994.
Wiener, N., Cybernetics: Control and Communication in the Animal and the
Machine. Cambridge, MA: MIT Press, 1948.
Chris E. Stout, Psy.D., MBA, is a clinical psychologist,
entrepreneur, and adventurer.
© CTheory. All Rights Reserved