
Uploading for Life Extension Will Be Valid


By Ben Hyink
Ethical Technology

Posted: Mar 30, 2010

While it may be impolitic for technoprogressives to focus on uploading right now, it is invaluable for advocates of radical life extension to have brief, compelling arguments in favor of the efficacy of such a process.

Uploading will be necessary if people are to live much longer than an average of roughly 150 years, the point at which, even with aging cured, death from the random accidents of a normal lifestyle becomes increasingly likely. In the past I have argued against the feasibility of some forms of uploading as a means of life extension. I would like to correct that error by offering a line of argument in favor of uploading as a life extension measure, though the life that is extended may change dramatically as the self is diluted into a larger group of minds.

Are Consciousness and the Self Bound to Biology?

Multiple types of scientific evidence point by consistent correlation to matter, and particularly matter in neurophysiological states, as the basis or substrate of cognitive processes and conscious experience. While in philosophy of mind we cannot completely rule out the possibility that such correlation is illusory or that the substrate of thought and experience is something other than the observable brain, there seems to be no compelling evidence-based reason to doubt that our observations of brain activity are observations of the locus of thought and experience.

A decent materialist description of consciousness, based on convergent lines of scientific evidence, is that its basis or substrate is matter disposed to bear meaningful relationships to other things by virtue of its orientations to other matter in a cognitive system. These orientations allow data to be interpreted and computations to be processed by algorithms (finite sets of well-defined instructions for completing a task). The meaningful interpretations and calculations - whether unconscious or accessible in consciousness - apply either to a cognitive system’s observable environment or to its models of internal system processes (“self-models”).

The meaning of such algorithmic “dispositional orientations” seems contextually determined by the neural networks where they occur and the weightings of connected neural networks through which signals are passed and interpretations of signals are performed. They also are transient in that brain activation cascades briefly across particular neural networks, that is, across matter in the system.

Clearly, even during states of wakefulness (ignoring the long lapses in consciousness during sleep), the subject of experience is not identifiable as specific matter that remains the physical locus of experience. The location and “physical identity” of the subject of experience appear to be constantly changing at the micro-biological level, without even considering the progressively mind-bending attributes of matter and energy at the molecular, atomic, and quantum levels of physical reality (as opposed to an unchanging set of atoms or quanta serving as the physical locus of experience).

The processing of the cognitive system as a whole is what enables holistic interpretations and perceptions to be experienced by transiently selected matter in the system. The actual mechanisms of conscious processing remain in dispute, and there are many questions for which we currently lack conclusive answers. However, the theories and evidence suggest that there exist no insurmountable obstacles to uploading relevant information, algorithms, and even neurophysiological states.

Some convergent lines of research point to co-activated areas of transiently stable oscillations. In connectionist terms, Gerard O’Brien and Jon Opie at the University of Adelaide have described the relationship of transiently stable neural networks to their environment as a “second order resemblance relation.” Second-order resemblance holds when the relations and characteristics of a represented thing are preserved, but not in the same format - as in a flat topographical map, or a transiently reverberating neural network interpreting part of a visual scene or parsing a sentence. The virtue of their perspective is that it describes specific neurophysiological states that could serve as the basis for dispositional orientations, explaining how neural interrelations can be meaningful based only on the activity of the “vehicles of consciousness” - the neural networks themselves. The problem with this perspective is that it currently seems impossible to test.
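The map analogy can be made concrete with a toy example. The sketch below (in Python, with invented landmark names and numbers; it is not drawn from O’Brien and Opie’s work) shows the sense in which a flat map “resembles” terrain at second order: the relations among its marks mirror the relations among the landmarks, even though the map shares neither their scale nor their medium. The connectionist claim is that transiently stable activation patterns relate to what they represent in this same relation-preserving way.

```python
import math

# A minimal, invented illustration of "second-order resemblance": a representation
# preserves the relations among the things it represents (here, which landmarks are
# nearer to or farther from one another) without sharing their format or scale.

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# The represented terrain: landmark positions in kilometres (hypothetical values).
terrain = {"peak": (0.0, 0.0), "lake": (12.0, 0.0), "village": (26.8, 13.4)}

# The representation: the same landmarks drawn on a flat map in centimetres,
# at a scale of 1 cm to 4 km. Different medium, different units, different format.
map_cm = {name: (x / 4.0, y / 4.0) for name, (x, y) in terrain.items()}

def distance_ranking(points):
    """Rank all landmark pairs from nearest to farthest."""
    pairs = [("peak", "lake"), ("peak", "village"), ("lake", "village")]
    return sorted(pairs, key=lambda pq: distance(points[pq[0]], points[pq[1]]))

# The map's paper-and-ink distances stand in the same ordinal relations as the
# terrain's physical distances, so relational claims ("the lake is closer to the
# peak than the village is") transfer across formats. That preserved relational
# structure, not any first-order likeness of material, is the resemblance.
assert distance_ranking(terrain) == distance_ranking(map_cm)
print(distance_ranking(map_cm))
```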

Dennett has criticized this “pure vehicle” view of consciousness for several reasons [1], including questioning the assumption that any transiently stable activation pattern must represent a perceptual experience. Yet even if we were forced to consider the whole brain at any given time as the subject of experience, the whole can still be reduced to a fluid mix of atoms or even quanta that “perceive” en masse based on their temporary existence within a cognitive system.

Whether or not the subject of experience can be located (transiently) in distinct areas of the brain, the “self” is an entity that emerges from the brain as a whole and over longer time spans than a moment of perception (and is certainly not the result of information converging to one point, which would severely constrain information-processing capacity). Daniel Dennett describes the self as a “narrative center of gravity,” reflecting its fuzzy nature. William Sims Bainbridge has even created a “personality capture system” [2] that models the personality without any information on neural structure and connectivity. Martine Rothblatt has done work to develop “mindfiles” [3], “mindware” [4], and “mindclones” [5], likewise without neurophysiological data.
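As a rough illustration of what such capture looks like in practice, here is a hypothetical sketch in Python of a questionnaire-style record of communicable traits and statements. The schema and field names are invented and do not correspond to Bainbridge’s software or Rothblatt’s tools; notably, nothing in it encodes neural structure or connectivity.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class MindfileEntry:
    prompt: str        # a questionnaire item or interview question
    response: str      # the person's answer, in their own words
    salience: float    # self-rated importance, 0.0-1.0

@dataclass
class Mindfile:
    person: str
    entries: list = field(default_factory=list)

    def add(self, prompt, response, salience=0.5):
        self.entries.append(MindfileEntry(prompt, response, salience))

    def to_json(self):
        """Serialize the captured personality data for storage or later use."""
        return json.dumps(asdict(self), indent=2)

# Usage: capture a few communicable dispositions. No neural data is involved,
# which is exactly the point made about these approaches in the article.
mf = Mindfile(person="example subject")
mf.add("What do you value most?", "Honest conversation and long walks.", salience=0.9)
mf.add("Favourite book?", "A dog-eared copy of 2001: A Space Odyssey.", salience=0.4)
print(mf.to_json())
```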

Given this potential for deconstruction of the self, maybe it isn’t so strange to think of radical life extension, with or without uploading, as keeping the world populated with people like ourselves [6] - the same motive behind having children to pass on genes or writing books to pass on “memes” (cultural artifacts, including ideas). Still, most people considering radical life extension would prefer an upload that resembles their mind as closely as possible, and that will involve detailed brain emulation.

Beta-Tests for an Uploading System

An uploading system can be tested using the original to verify the process, inasmuch as such a process can be verified by observation. It isn’t perfect, but it is about as good a verification method as we could hope to find. Three component processes would be involved in the test: integration, growth, and functionality shifting. [7]

  • Integration: connections are established to a simulated system that mimics brain structures and activities via a perfusion of wireless interface materials (the simulation might include things like hormone interactions or, alternatively, it might not attempt to model biology in a close way). In his book The Singularity is Near, Ray Kurzweil describes this step as a nano-neural network which could facilitate a robust virtual reality experience. A contemporary example of integration is the way current brain chips interface through biological material to neurons. Now imagine the chip as an interface to a functional simulation of the biological brain that is supported on what today is called a supercomputer (remember, the subject of conscious experience is just matter processing information in a cognitive system, regardless of whether or not the system is biological).
  • Growth: expansion of synaptic circuits into the new system in a functional way, i.e. new connections between the brain and model that serve new or existing functions. Think of the way neural networks adapt to a brain chip, only imagine the growth continuing on the other side of that brain chip interface - that the biological brain “grows into” or alters the neural firing and wiring patterns in the detailed simulation of it and affects the functioning of the simulated brain.
  • Functionality Shifting: a gradual shift toward reliance on new network portions in the model for exercising an increasing variety of activities and capacities. Visualize the biological brain gradually losing its status as the primary source of brain activity and becoming increasingly controlled by the simulated brain, to the point of shared control or even domination by the simulated brain. Domination would make sense given that the simulated brain could be altered to enhance processing speed and bandwidth, short- and long-term memory capacity, and general intelligence potential, and could be augmented with new modules that process software and literally download and transmit complex information, including knowledge or crystallized intelligence.

Sensory, motor, and eventually higher cognitive functions could be demonstrated to be effectively supported by the simulation through double-blind tests of observation, movement, and intellectual processing. In the first case, perception of a visual scene in a different room could serve as proof, as long as processing was done primarily with the simulated brain. In the second case, movement of a mechanized arm in another room could serve as proof under the same condition. Finally, higher mental processing could be demonstrated to occur primarily in the copied brain while the conclusions reached remained accessible to the original brain.
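Purely to make the structure of that test explicit, here is a toy sketch in Python. All function names are invented and nothing here models real neurophysiology; it only mirrors the integrate-grow-shift sequence and the verification idea: route a growing fraction of processing to a functionally equivalent simulation while checking every response against what the original would have produced.

```python
import random

# Toy abstraction of the three-phase beta test: integration (both systems see the
# same input), growth and functionality shifting (an increasing share of processing
# is handled by the simulation), with outputs verified against the original.

def original_process(stimulus):
    """Stand-in for the biological brain's response to a stimulus."""
    return stimulus * 2

def simulated_process(stimulus):
    """Stand-in for the emulated brain; functionally equivalent by construction."""
    return stimulus * 2

def blended_response(stimulus, shift):
    """Route a fraction `shift` (0.0-1.0) of processing to the simulation,
    analogous to gradual functionality shifting."""
    if random.random() < shift:
        return simulated_process(stimulus), "simulated"
    return original_process(stimulus), "original"

# Step the shift from 0 to 1 and verify behavior stays indistinguishable.
for shift in [0.0, 0.25, 0.5, 0.75, 1.0]:
    for stimulus in range(5):
        response, substrate = blended_response(stimulus, shift)
        assert response == original_process(stimulus)  # verification against the original
    print(f"shift={shift:.2f}: responses verified against the original")
```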

Once a prototype system is demonstrated to work, there would be no rational basis for claiming that hard uploading, or destructive uploading using the same process, would not “work” to transfer one’s information-theoretic and subjective identity to a new substrate.

Such a parallel-processing or dominating simulation of a brain would ensure that if the original brain were to die, an enhanced copy would live on from the point of death forward. Moreover, multiple parallel-processing units could operate at various locations to protect against threats such as natural disasters, and static back-up copies could also be saved regularly and stored in highly secure locations. Additionally, as minds become increasingly like software and less like neural networks, components of a person’s mind - and not just her ideas - could be stored in the minds of a multitude of other augmented uploads.

What Kind of Existence for Uploaded Minds?

In “If Uploads Come First,” a 1994 paper offering an in-depth analysis of uploading economics and culture, Robin Hanson explores the potential dynamics of uploading when uploads are individualistic entities in a highly competitive environment, as well as when they have an employee status relative to the person in the original body. [8] Yet with sufficiently advanced neurological implants it should become possible to communicate directly from one mind to another using signals from the internal vocalization and auditory processing areas of the brain. In the case of uploads there would be fewer barriers to such mental telepathy, especially given the relative ease of testing and adaptation. As minds communicate more fluidly and freely, collective minds may emerge.

With the emergence of collective minds, there is the possibility that a “hive mind” or “collective consciousness” might appear in which the values and opinions of the group trump those of the individual (only more thoroughly than in collectivistic cultures). In such a scenario personal identity could largely dissipate into group identity. Since such a pattern of association reduces the likelihood of vigorous debate, it may pose a challenge to the continued emergence of rational thought and some types of progress - at least from within any given hive mind. [9] However, hive minds might provide some measure of collective security against reckless or brutish behavior by super-powerful uploads if they cannot effectively be constrained by laws and weapons used against human bodies. The Borg of the Star Trek universe represented a dystopian cyborg version of a hive mind. However, as Hanson has pointed out on the “Overcoming Bias” blog, as people become more knowledgeable and intelligent we should expect to see them reach agreement on an increasing number of topics that have a more factually substantive basis than taste alone.

Finally, one or more hive minds that were sufficiently large and powerful could become god-like collectives. Such a scenario seemed to be explored with the creature toward the end of the film Star Trek V: The Final Frontier and with the “star-child” at the very end of the film 2001: A Space Odyssey. If and when such a being emerges, we can hope it will incorporate - or choose to adopt - a “friendliness” utility function (an elemental motivational drive, like pleasure, pain, and emotional biases) for the sake of non-uploaded people and for individualistic (and likely less powerful) uploads. The Singularity Institute for Artificial Intelligence is considering basing this feature on “coherent extrapolated volition,” an accurate estimation of what people would want if they had access to all relevant information and used rational methods to make assessments [10], though its utility function is being designed primarily for non-human artificial general intelligence systems that can bootstrap themselves to a super-powerful status. A friendliness utility function will not be an easy goal to achieve, but it is an eminently worthy one for humanity to strive to realize, and one which may determine the nature of the god-like minds of the future.

Men rarely (if ever) manage to dream up a god superior to themselves. Most gods have the manners and morals of a spoiled child. - Robert A. Heinlein


1. Dennett, D.C., and C.F. Westbury. Stability Is Not Intrinsic: Commentary on O’Brien and Opie, A Connectionist Theory of Phenomenal Experience.
2. Bainbridge, William Sims. Self-Analysis and Preservation Software for Windows. 23 Mar. 2010.
3. Rothblatt, Martine. What are Mindfiles? 2 April 2009. 11 December 2010.
4. Rothblatt, Martine. What is Mindware? 9 April 2009. 11 December 2010.
5. Rothblatt, Martine. What are Mindclones? 4 May 2009. 11 December 2010.
6. Thanks to Robin Hanson for that perspective.
7. Hyink, Ben. Toward Non-fatal Uploading: A New Framework. 8 August 2004. Online PowerPoint, Uploading and Immortality. 23 March 2010.
8. Hanson, Robin. If Uploads Come First: The Crack of a Future Dawn. 8 March 1994. Reprinted with permission from Extropy 6:2 (1994). 23 March 2010.
9. Wikipedia. Hive mind. 11 December 2010.
10. Singularity Institute for Artificial Intelligence. Research Area 3: AGI Ethical Issues. 2010. 23 March 2010.


Ben Hyink was a passionate transhumanist activist and an intern with the IEET. He helped organize and lead the Humanity+ Student Network (H+SN), co-wrote the “Humanity+ Student Leadership Guide,” and was the recipient of the 2007 JBS Haldane award for outstanding Transhumanist Student of the year.


COMMENTS


Great article Ben, thanks.

Re “it may be impolitic now for technoprogressives to focus on uploading”: we don’t focus on uploading here—it is just one of the many interesting things discussed on the IEET blog.





Thank you, Giulio.

A more positive image of an egalitarian, collectivistic technological society might be seen among the archeologists at the end of the Spielberg/Kubrick film “A.I.,” though they do decide to euthanize a robot child rather than babysit him or devote resources to “raising” him to their level of maturity.





Talking Points:

(Yes, it is April Fools’ Day, but I hope you take these points seriously.)
 
1.  Uploading is a significant topic for radical life extension because, even if we cured aging, people living as we do would still die from accidents at an average age of around 150 years. It is relevant to any discussion of 1,000-year life spans.
 
2.  Consistent correlation of brain activity and mental processes strongly indicates that our minds are matter in neurophysiological states. There is no strong evidence-based reason to doubt that.
 
3.  Convergent lines of research indicate that matter gains cognitive properties by virtue of its place within a cognitive system, and this system is emergent from and reducible to matter.
 
4.  The meaningful relationships matter bears to other matter in a cognitive system are based on orientations of matter to other matter that facilitate algorithms: finite sets of instructions for completing a task, including data interpretation or calculation.
 
5.  In the brain such orientations are facilitated by neural networks and functional modules that operate when transiently focused through spreading activation.
 
6.  The material “subject of experience” is constantly changing due to spreading activation.

7.  The subject of experience is constantly changing due to the flux of matter at the molecular, atomic and quantum levels of reality (as opposed to a scenario in which they all remained the same and were continuously activated).

8.  There are long discontinuities during sleep and unconsciousness when the subject of experience does not exist. Sleep can be thought of as “little slices of death” (reference to “Journey to the Center of the Earth”).

9.  A connectionist perspective on the subject of experience is that it can be defined as transiently stable activations that model external and internal processes through second-order resemblance relations, analogous to the way a flat topographical map can model a mountain range. Unfortunately, there currently seems to be no good test for that perspective.

10.  However, even if we think of the subject of experience as a whole unit, talking point numbers 6 and 7 still apply.

11.  The self is a “fuzzy” entity that emerges from the whole brain over longer time spans than a moment of perception. Dennett calls it the “narrative center of gravity.”

12.  The self as communicable thoughts, feelings and dispositions can be captured via programs available today.

13.  To the skeptic who insists the upload will be a different person, uploading can be thought of as keeping people somewhat like ourselves in the world, just as people do by having children to spread genes and writing books to spread memes.

14.  From the perspective of your uploaded self, even a discontinuous process will seem like waking up from the discontinuity of sleep or even sustaining (discontinuously facilitated) ongoing perception.

15.  However, the most robust and accurate reproductions - or continuations - of the self would result from copies of one’s neurophysiology via detailed brain emulation.

16.  Uploading is compatible with belief in a non-material soul, just as sleep is compatible with belief in a soul. Either the soul can disconnect and reconnect to the material mind, or it remains connected in a way that cannot be detected by science.
 
17.  Uploading could be beta-tested via subjective reports as one person or more undergo a process of brain interface integration, functional growth into a new substrate and functionality shifting into a new substrate.

18.  After it is shown that the uploaded people always report continuing to feel alive and can pass tests demonstrating perception, motor function, and higher cognitive functions, there is no rational basis for arguing that a less continuous process would not “work” to facilitate a living mind on a different substrate than a biological brain (technically, there would be no rational basis because reason is a communal endeavor and human minds need to learn how to reason in an abstract way). Still, some might choose to pursue the gradual route, which might be more expensive.
 
19.  A functional brain simulation running parallel to or dominating the biological brain would ensure that a person would live on if their biological body were to die in an accident. Additional security measures might include parallel processing in multiple secure locations and regular saving of “back-up copies.”

20.  Life as an upload will become radically different from life as a biological human, but there still could be great continuity of experience along the way and much to gain in terms of capabilities and experiences, including learning more comprehensively what others know and entering the perceptual experiences of other persons, non-human life forms, and machines.

21.  Given that uploads or artificial general intelligence systems could become super-powerful, uncontrollable by laws or threats of bodily harm, it is important to support the Singularity Institute for Artificial Intelligence (SIAI) in its quest to develop a human-friendly utility function (a basic motivational drive like pleasure and pain or emotions in humans, but based on something like “coherent extrapolated volition” or what we would want if we knew as much as is accessible about a situation).

22.  Take as a cautionary observation Robert A. Heinlein’s quote, “Men rarely (if ever) manage to dream up a god superior to themselves. Most gods have the manners and morals of a spoiled child.” That, and inhumane but deeply-entrenched contemporary human systems, is why we need a good and effective friendly utility function.





“However, as Hanson has pointed out on the “Overcoming Bias” blog, as people become more knowledgeable and intelligent we should expect to see them reach agreement on an increasing number of topics that have a more factually substantive basis than taste alone.”

This claim is based on a mathematical result known as Aumann’s agreement theorem. It makes many hidden invalid assumptions about people, including that they all have the same experiences and the same primitive values, that they all use the same mental representations, and that they are all perfect reasoners using the same logic. It has no correlation with reality.





Bravo, Ben.

In my book ‘The Humanist’ defrocked Jesuits maintain our genome, a sophisticated DNA sample including the epigenetic sheath, within a cloud repository, on one premise - that our identity rides with that pattern.

That has always been the key question for me, intellectually. Are identical twins, e.g., the same people (two phenotypes) printed off a common genotype? What is necessary to identity and what is not? (I disavow memories.)





Thanks, Dwight.

Phil, it may have been a misunderstanding on my part that overextended Aumann’s agreement theorem. I remember Robin was assuming that sufficiently intelligent people would be Bayesians, and I think he premised it by saying they agreed on their priors or pre-priors.

I really enjoyed reading your essay, “Exterminating intelligence is rational.” (http://lesswrong.com/lw/108/exterminating_life_is_rational/ ) Its content was very relevant to the ideas I mentioned in this article.





Correction to the talking points:

“10. However, even if we think of the subject of experience as a whole unit, talking point numbers 6 and 7 still apply.”

Number 10 should refer to “numbers 7 and 8” instead of 6 and 7.





And in number 10 the whole unit I was referring to was the whole brain…





Additional commentary relevant to this issue can be found in Ben Goertzel’s “A Cosmist Manifesto,” specifically in the following entry:

Uploading, a No-Brainer
http://cosmistmanifesto.blogspot.com/2009/01/what-about-uploading.html

Relevant to that blog entry is Goertzel’s entry on relations of the patterns that constitute reality, based on work by American philosopher Charles Peirce:
http://cosmistmanifesto.blogspot.com/2009/01/first-second-third.html

In a previous article on personhood theory, I referred to Neo-Kantian interpretations of mind (described as an “abstract-level functionalism”). On such interpretations, Goertzel’s “secondness” or “reaction” might be interpreted as mere unpatterned interaction that is non-reflective, while “thirdness” or “relationship” could include non-reflective patterned interaction as well as reflective patterns of interaction, including cognition.

It is important to consider that even deep meditative trances are only experienced as “firstness” or “pure being” via reflective patterned relationships in the realm of “thirdness” (cognitive and non-cognitive models of reality including the self and its environment, such as the act of breathing or the act of concentrating on nothing). Non-thirdness experiences of firstness would be the absence of patterned mental relationship “experienced” (or rather the lack of experience) in deep sleep or unconsciousness.

At an abstract level, reflective consciousness would require “acts of judgment” on “sensory appearances,” even though such functions may be carried out via the same representational AND interpretive neural network activity (e.g. in transiently stable oscillations).




