Institute for Ethics and Emerging Technologies

Is Humanity Accelerating Towards… Apocalypse? or Utopia?

By David Pearce
Ethical Technology

Posted: Jun 21, 2012

Will the future provide us with a genetically preprogrammed blissful paradise, or a global catastrophe? Will there be cessation of all suffering, or annihilation of all sentient life?

The history of futurology is not encouraging. Most “predictions” by futurists are more akin to prophecies, revealing more about the author’s personality, preoccupations and capacity for wish-fulfillment than about the future they purport to describe.

In any case, predicting the future behaviour of self-reflexive agents is not like predicting the behaviour of non-intelligent physical systems. Some predictions are self-fulfilling; other predictions are self-stultifying; and the public forecasts of politicians, social scientists, singularitarians and transhumanists should all be viewed in this light.

With this in mind, here goes….

Existential risk?

I think the greatest underlying source of existential and global catastrophic risk lies in male human primates doing what evolution “designed” male human primates to do, namely wage war. [1] Unfortunately, we now have thermonuclear weapons to do so.

Bad news?  I fear we’re sleepwalking towards the abyss. Some of the trillions of dollars of weaponry we’re stockpiling designed to kill and maim rival humans will be used in armed conflict between nation states. Tens of millions and possibly hundreds of millions of people may perish in thermonuclear war. Multiple possible flash-points exist. I don’t know if global catastrophe can be averted. For evolutionary reasons, male humans are biologically primed for competition and violence. Perhaps the least sociologically implausible prevention-measure would be a voluntary transfer of the monopoly of violence currently claimed by state actors to the United Nations. But I wouldn’t count on any such transfer of power this side of Armageddon.

Good news?

I probably sound like a naive optimist. I anticipate a future of paradise engineering. One species of recursively self-improving organic robot is poised to master its own genetic source code and bootstrap its way to full-spectrum superintelligence. The biology of suffering, aging and disease will shortly pass into history. A future discipline of compassionate biology will replace conservation biology. Our descendants will be animated by gradients of genetically preprogrammed bliss orders of magnitude richer than anything physiologically accessible today. A few centuries hence, no experience below “hedonic zero” will pollute our forward light-cone.

Freeman Dyson prophesies that soon we’ll “be writing genomes as fluently as Blake and Byron wrote verses”. Perhaps so, though I’m unsure of the timescales. However, “narrow” artificial intelligence and powerful gene-authoring software tools will shortly enable humans to edit our own genetic source code in accelerating cycles of recursive self-improvement. In consequence, human intelligence will be progressively amplified and enriched. Youth, vitality and lifespans will be extended indefinitely. Suffering, depression and experience below “hedonic zero” will be relegated to history. Human traits such as weakness of will, the struggle for meaning and significance, quasi-sociopathic empathy deficits, and a host of mediocre states of mind that currently pass for mental health will increasingly become optional as we bootstrap our way to posthumanity.

Not least, a growing mastery of our biological reward circuitry will allow the upper bounds of human “peak experiences” to be pushed unimaginably higher. Likewise, hedonic set-points can be genetically recalibrated. Everyday life later this century will potentially be animated by gradients of intelligent bliss.

Bioconservative critics will doubtless worry that “something valuable will be lost” when responsible prospective parents stop playing genetic roulette as the reproductive revolution of “designer babies” unfolds. Tomorrow’s parents-to-be will opt for preimplantation genetic diagnosis and “designer zygotes” to ensure invincible physical and mental health for their future children. Among young adults, novel states of consciousness as different as waking from dreaming are likely to migrate from psychedelic chemists working in the scientific counterculture to mainstream society. “Bad trips” will become physiologically impossible because their molecular signature is absent.

Unfortunately, words fail here. Post-Darwinian consciousness is likely to be incomprehensible to archaic Homo sapiens.

I think the greatest ethical change ahead this century may be the antispeciesist revolution. This global transition will probably follow rather than precede the commercialisation of gourmet in vitro meat and the end of factory farming and the death factories. It’s worth stressing that the antispeciesist doesn’t claim members of all species are of equal value. S/he argues simply that beings of equivalent sentience are of equal value. Hence they deserve to be treated accordingly - regardless of gender, race or species.

Pigs, sheep and cows are of equivalent sentience to human infants, prelinguistic toddlers, victims of Alzheimer’s disease and the severely intellectually handicapped. Only arbitrary anthropocentric bias leads us to kill, abuse and exploit the former and care for the latter. Despite superior intelligence, I suspect our grandchildren may struggle to comprehend what their grandparents did to other sentient beings.

AGI? Ben Goertzel has projected this timeline for AGI development: 

2023—human-level AGI
2026—imposition of global AGI Nanny to ward off existential risks
2030—Singularity, managed by the AGI Nanny

Well, I’d argue [that AGI] is a form of anthropomorphic projection on our part to ascribe intelligence or mind to digital computers. Believers in digital sentience, let alone digital (super)intelligence, need to explain Moravec’s paradox. [2]

For sure, digital computers can be used to model everything from the weather to the Big Bang to thermonuclear reactions. Yet why is, say, a bumble bee more successful in navigating its environment in open-field contexts than the most advanced artificial robot the Pentagon can build today? The evolutionary success of biological lifeforms since the Cambrian Explosion has turned on the computational capacity of organic robots to solve the binding problem [3] and generate cross-modally matched, real-time simulations of the mind-independent world.

On theoretical grounds, I predict classical digital computers will never be capable of generating unitary phenomenal minds, unitary selves or unitary virtual worlds. In short, digital computers are invincibly ignorant zombies. [4] By their very nature, they can never “wake up” and explore the manifold varieties of sentience.








People who believe they will witness the “end of times” in their own lifetime are delusional. In a way, to think we are the last generation of humans on Earth is to give ourselves too much importance. That’s why apocalyptic sects and religious groups attract people: you’re the chosen ones, the special ones, the last ones, the ultimate humans, those who will witness “great and terrible things”. Eschatological beliefs have always impressed the weakest minds—that’s why date-based predictions such as the “year 1000 scare”, the “Y2K robocalypse” or the “2012 Mayan calendar fantasy” are given undue media coverage.

We’re always secretly waiting for something big to change our lives, because we feel our life’s limitations are intolerable. We’re short-lived, mortal, and most of us are mediocre pawns living in a depressingly limited world. Give us God, Hal, Gilgamesh, Warp 1, or a Wookie, so we can jump into the dream, so to speak. Give us Spice, Soma, the Fountain of Youth, the Golden Apple, the Catalyst. But at the same time, change scares us. A lot. That’s why Utopia, Dystopia, and the Apocalypse have always been parts of the same narratives.

In a way, we really CAN wipe ourselves out in many ways. And it’s been the case for centuries. In the Middle Ages, commercial contacts helped spread the Black Death in multiple, deadly waves. The Age of Exploration brought old germs to the New World, which nearly wiped out the native populations. More recently, the fear of a nuclear apocalypse is something real (and it’s still real in the 21st century). But so are climate change, pollution, groundwater/soil exhaustion, overpopulation, dwindling fish stocks, chemical pollution, and so on. And over all this, any nearby star can harm us with a gamma flash, a major impact event can toast us, a Toba-style volcano can make us starve and freeze to death, and so on.

The coming of Major Change to our world (“utopia”, “revolution”, “destructive events”) would be something terrible, disruptive. Even Minor Change (a moderate sea-level rise, changes in climate patterns, etc.) has the ability to wreck our economy, disrupt our civilization and turn our comfort into misery. We’re both fragile and resilient. I don’t think we’ll nuke ourselves out, or drown in a pool of grey goo, or dig a hole into the fabric of spacetime at CERN, or end up with USB ports in our lower spines, trapped in a collective Plug & Play hell.

We’ll just build bigger toys, break these toys, blow up things from time to time, make lots of people miserable, and make our planet stinkier than ever. But at the same time, we’ll understand things better, build better things, cure our illnesses, and eventually, it’ll all get better.

I agree that people in various eras have imagined that they were living at the “end of times”, and that the flattering sense of feeling special can provide a psychological motivation for believing this.

But none of this is conclusive evidence that we are wrong to do so. The most paranoid doomsayer will be right some of the time, as will the most naïve optimist. Conversely, the ones saying “sorry, but the truth is more boring than you want it to be” are not always right.

In my view, whether the coming of major change to our world is terribly disruptive or, well, great, depends a lot on the extent to which such change is the result of deliberate consensual decision-making by present-day humans. I’m not saying that your “more boring” scenario is necessarily wrong, but I’m not convinced that the fact of it being relatively boring makes it more likely to be right. And perhaps more importantly, it lacks the inspirational/motivational value that both the utopian and the catastrophic scenarios have.

So in summary: you may turn out to be right, but that shouldn’t prevent us from considering the other scenarios. In my view they are the ones most likely to motivate us to manage our risks sensibly and propel ourselves towards the best futures.

Q: How can we best eliminate the potential for misunderstanding, escalation of global conflict, adverse political manipulation and stagnation; pursue socioeconomic revolution and the efficient use of planetary resources; provide free education, awareness and interactive democracy for all; solve difficult engineering problems; crowd-source ideas and solutions to global dilemmas; feed the world; free the world from disease; ease suffering; and encourage the ethical and spiritual evolution of every human mind on planet Earth?

“The global brain is a metaphor for the worldwide intelligent network formed by people together with the information and communication technologies that connect them into an organic whole. As the Internet becomes faster, more intelligent, more ubiquitous and more encompassing, it increasingly ties us together in a single information processing system, that functions like a “brain” for the planet Earth.”

And to aid with progress, we need to encourage these machines online with us humans.

“IBM’s Sequoia is the world’s fastest supercomputer: US system regains title for the US for the first time in almost three years, beating Japan’s Fujitsu K Computer.”

Ben’s projection and timescale for human-level AGI is rather optimistic, although he does promote greater effort towards the goal. Yet the combination and symbiosis of human minds and supercomputers interconnected via the internet, utilising real-time crowdsourcing and data crunching, effectively manifests as an evolving, self-reflexive and conscious global entity - and an environment where supercomputers can assimilate knowledge of human rationales and subconscious motivations, and acquire the knowledge to predict unrest and ward off global existential risks (in effect, learn what it is to be human and then even simulate this, whilst still remaining a tool to serve human wants and needs)?

This is the fast track towards a singularity nanny, CEV, global vanguard?

Human minds interconnected as individual nodes, (neurons), together with supercomputers are the manifestation of a global brain, (mind), and a natural and techno-logical evolution of the human species to greater levels of complexity of phenomenological consciousness, understanding and sharing?

This is the realisation of utilising and benefiting from the “Hive mind”, whilst also having the ability to disconnect and perpetuate individualism, freedom and protect “Self” identity?

What is there to worry and fret about - aside from those that would wish to place spanners in this work’s progress?

I think we need to be a bit careful with the hive mind / global brain concepts. I find them fascinating, but the analogy with ant, bee and termite colonies could be misleading, since cooperation in those cases is based on the genetic identity of members of the colony. The pathway that homo sapiens has found towards ultrasocial behaviour - namely our large brains - is totally different to that of ants, bees and termites, and involves a much greater degree of competition between individuals and groups in the so-called “hive”. This is not to say that such ideas are without merit, but a note of caution seems appropriate IMO.

To put it another way: it’s not a Manichean struggle between a basically good human nature and an evil few who want to “place spanners in the works”. The problem, and thus the challenge, is much more fundamental than that.

Please understand that this is not meant to be defeatist or intended as a justification for autocracy, but rather an honest assessment of how I perceive reality, rightly or wrongly.

The “global brain” metaphor can be suggestive, just as the “artificial brain” metaphor can be suggestive for a digital computer - whether conceived in terms of symbolic AI or a connectionist architecture. But the Internet, or an ant colony, or the population of China, or a digital computer (etc.) is not a unitary subject of experience - unlike (fleetingly) the 80-billion-odd neurons of an awake/dreaming organic mind/brain. I think the extraordinary computational power of this hugely fitness-enhancing adaptation is best illustrated by rare neurological syndromes in which object binding and/or unitary consciousness partially breaks down, e.g. simultanagnosia, cerebral akinetopsia, multiple personality disorder, etc. This question might seem a bit technical and obscure. But IMO it’s directly relevant to the issue of existential risk if one is troubled by the I.J. Good/SIAI conception of software-based, recursively self-improving intelligent minds in conjunction with Moore’s law leading to an imminent and probably Non-Friendly Intelligence Explosion. In short, will humanity’s successors be our biological descendants? Or instead, might an AGI convert our matter and energy into paperclips - or perhaps utilitronium?

The best way to learn about what the future holds is by reading the documents created by the think tanks with influence: the British Ministry of Defence, the CFR, the Bilderberg Group, the Club of Rome, etc.

These people who are steering the future are not accessible to Transhumanists. Those at the top aren’t interested in an organic future outcome.

Blaming everything on the inherent aggression of male primates rests on dubious foundations and downplays the specific historical circumstances of the nuclear arsenal. Various male primates ranging from bonobos to humans display minimal levels of competition and violence under the right conditions.

Genes and culture co-evolved. Our male human primate predisposition to aggressive behaviour is indeed only conditionally activated. But throughout history, aggressive war has been planned, initiated and waged primarily by men. Likewise, the conception, design and deployment of nuclear weapons systems has been overwhelmingly a male activity. This is not an argument for some kind of crass biological determinism. Rather it’s an acknowledgment of the significant risk-factor entailed in electing a high-testosterone male rather than female political elite. Needless to say, I hope my pessimism about our prospects of escaping nuclear war this century is ill-founded.

All good comments, esp. the last: David is correct, the inclusion of women is a key, perhaps The key; otherwise men will in fact destroy us. IMO such is a fact. IMO we ought to reject the chirpy visions of the original transhumanists; no one would fall for overly-optimistic scenarios in the 21st century. Yet neither should we accentuate the negative.
Pete can explain it better.
One might say that though the future will be dystopian, it won’t necessarily be worse than the past. Take the scenario of the 1971 film ‘A Clockwork Orange’; however unpleasant its hooligan portrayal, it sure beats the conditions of a quarter-century before the film was produced: Europe, 1945. If you will, let me repeat what I wrote in another thread:
“summerspeaker is mistaken to think violence has not diminished; the leveling out, let’s say, of violence is about the only really good news. True, summerspeaker, prisons, jails, and other negative facilities (throw in bad schools for good measure) aren’t any better than in the past, considering how much tech has advanced. This is evidence for some of what Giulio writes concerning the public sector: for example, in effect the funds to upgrade prisons go to the people at the top of the system, not to upgrading prisons. In fact the greater part of the welfare system—and unlike the private sector it is a system—is basically about poverty-pimping. Say 70-80 percent of the welfare system concerns the people working for it getting far better medical care (just for starters!) than the system’s clients.
However, that is off-topic, and IMO it won’t change for decades: it took decades to build the poverty-pimp system up, and it will take decades to ‘replace’ it, which is a fact libertopians cannot comprehend. Yet violence has diminished, and summerspeaker ought to admit such, since it is the one piece of good news, the one positive we have to offer. We can say: ‘violence has diminished since WWII; it has leveled out.’ ”

And if a future inclusiveness towards women is facilitated, violence can be diminished to a much greater degree. IMO it is v. important to ‘believe’ such, to not sourly tell the public,

‘women are dominated by men [which admittedly they are] men are not going to change and WMDs will remain in their storage areas waiting to be used. War will continue.’

The above is only partly correct, as some can be changed. War may continue albeit at a lower intensity; the WMDs do not have to be used; and genuine inclusion of women in decision-making can be arranged.
To think otherwise is not technoprogressive.

Personally, I’m pessimistic concerning interpersonal relations and crime (Andrei Sakharov termed the pair “alienation and criminality”), but not about the rest of what the future has to offer; I’m getting old, so I’ll take any future—even in a nursing home, if it is a decent one. Which brings to mind something simmering in my mind since ‘89, when I first heard about transhumanism: lifespans might be lengthened greatly, yet we don’t know what sort of lives we might be living. Merely to throw out a scenario: someone lives on their own until, say, age 130 (which is fairly optimistic), and then spends another 70 years, just for a number, in a nursing-home-type institution. This is something which has to be factored in.

Thanks for the piece, David. Very interesting as usual.

I have a couple of reflections to make.

“Perhaps the least sociologically implausible prevention-measure would be a voluntary transfer of the monopoly of violence currently claimed by state actors to the United Nations.”
Maybe we should remove the monopoly of violence altogether? I am inclined to think that the gargantuan weaponry apparatus of certain nations would not exist if centralized political authorities did not have a monopoly over the production of nukes, drones, and other lethal toys. First of all, because of their costs: only central governments can obtain such a generous line of credit from the banking system. A privatized, multi-centric defense system would certainly be less hypertrophic. Secondly, as the number of interacting agents increases, their behavioral patterns tend, over time, to maximize their individual utility functions. In other words, if the choice of attacking rests in the hands of a few subjects, there is a much higher risk that one of them will make catastrophic decisions, with no one to stop him.

“Bioconservative critics will doubtless worry that “something valuable will be lost” when responsible prospective parents stop playing genetic roulette as the reproductive revolution of “designer babies” unfolds.”

I noticed that you often use the “roulette” metaphor to describe human reproduction. While I agree with you that leaving everything to chance presents obvious risks, I cannot accept the eugenic implications of this argument.
First of all, human subjects already make (implicit) eugenic choices when they choose their sexual partners. So, to a certain extent, people already look for certain signs to improve the quality of their offspring.
What disturbs me is the preemptive attack against certain lifeforms and their self-determination. Because this is what eugenics is about: counter-selecting certain lifeforms. I already expressed similar points when I criticized Peg’s position on licensing parenthood. One thing is prosecuting bad parents after they have committed child abuse; another is denying reproductive rights to a number of individuals who display certain risk profiles.
Preemptive attacks are immoral in my opinion. The Bush doctrine of preemptive war in Iraq is immoral. The notion of forbidding potentially bad parents the right to reproduce is immoral. The idea of culling human zygotes on genetic grounds is immoral. All these preemptive attacks violate the prerogatives of other lifeforms on the basis of a knowledge that the attacker does not possess. We assume to know in advance all the possible outcomes, all the risks involved, and so we take action. The idea is to intervene violently and beforehand, to prevent a tragic outcome. The fact is that we do not know enough to make such choices. How can we know that a man with Down syndrome would not want to live his life? Can we make that kind of choice in his place? Sometimes our desire to reach an optimum design (and I am not merely talking about genetic engineering here) leads to very fragile products, or to complete failures. We cannot take into account all the factors; sometimes this is essentially impossible.

Do not get me wrong. I would certainly support genetic therapies that neutralize the expression of clearly malevolent genes. I would not object to any type of voluntary treatment, of any kind, for anyone. Also, parents, I believe, can have something to say about whether or not to abort fetuses destined for certainly short and painful existences. However, as other eugenicists already did, we risk suppressing so much life (and possibly so much joy), only because of the illusions of our theories.

“Preemptive attacks are immoral in my opinion.”

There are exceptions to this rule: one is that Cambodia ought to have been invaded in 1977 after word had leaked out as to what was going on—the entire country was a concentration camp, and would have had nothing to lose by being invaded by a responsible coalition.

@ Peter..

Well, I wasn’t necessarily implying an analogy with insect collectives (although these are still a part of the Earth’s ecology, and we humans could not live without them). No, human brains are much more complex, yet I feel you underestimate just how similar humans are, both in genetic makeup and in social wants and needs. Sure, politics and freedom of expression (leading to conflicts) are human traits, yet competition and also cooperation are aggregated.

I did not mention anything about polarising or good and evil, so not sure where you got that from? However, I envision the global mind, (brain), as a swathing sea of human interconnected consciousness, which ebbs this way and that, where there is no overall controlling factor, yet where supercomputers can pre-empt political discord, and avoid global war and conflicts, (and by introducing a level of bureaucratic control that no one nation or group of states can overrule - yes Skynet can be our friend, our nanny, our watchdog, our guardian? Don’t believe the hyped up gun toting movie dystopia).

Certainly we here cannot deny that the internet is here to stay, that more and more humans will be connecting online for all of their services and needs, especially for education and entertainment, and that supercomputers exist and can/will be utilised to “serve” the human online collective, as well as to protect against potential existential risks, from social unrest and terrorism, to socioeconomic chaos, to climate catastrophe.

What we need is even more humans connected, including Africa and other third-world nations, transforming global awareness and education in these countries, so they can become empowered and then make their own demands for equality, just like the Middle East?

And yet autocracy, you say? Perhaps I am not so averse to relinquishing levels of autonomy to rational, objective and impartial supercomputers for want of peace and security, and not just for myself, but for humanity as a whole.

@ David..

Not sure what you are implying with your references, and I would like to know more. If you mean that such reliance upon online activities would create detrimental neurological disorders, or even create pseudo-human zombies who relinquish freedom, autonomy, and even the will to action and original thought, through sensory overload and constant misdirection? Then I guess these are dangers; even today humans get hooked into VR and online gaming communities, isolating themselves from the “real” physical world, unable to break free of their addiction. And there are great dangers of technology being used to manipulate the masses for political ends, or to subdue anarchism and subvert original thought. Yet my vision of the ever-increasing online collective is that it will be impossible to control this ebb and flow of interconnected human consciousness, any more than we can attempt to control the physical quantum world?

@ Intomorrow..

Hmm.. this could be your best comment ever? I am not keen on this idea of spending so many longevity years sitting in an armchair in a nursing home (unless we can be absorbed into an online utopia, that is?) The next best thing to having women in charge of politics and WMDs is to hand over control to an objective supercomputer that will not permit any man to press that big red button (even if he is the president?)

Men are a social and political category, albeit one that references biology. Believing a female political elite less prone to war - especially in the context of still-patriarchal societies - strikes me as exactly crass biological determinism. The best historical evidence - admittedly tenuous - suggests human males went without war (at least in our sense of the word) for the majority of their existence as a species, while political elites and mass organized violence go hand in hand. The record of leaders like Margaret Thatcher, Condoleezza Rice, and Madeleine Albright shows that females can do imperialism as well as anybody. Beware essentialism.

Please forgive me CygnusX1, I was slipping into philosophical jargon.
By “zombie” I was referring to the philosophical notion of a system that behaves as if intelligent yet is not a unitary subject of experience.
Understanding why digital computers are zombies, and biological robots are sentient, is critical IMO to understanding the nature of mind and full-spectrum (super)intelligence. What are the computational advantages of being a unitary subject of experience? How do communities of biological neurons pull it off?

You raise a number of interesting points about virtual reality. In future, designer worlds can be coded that are more compelling, “real” and addictive than the traditional world-simulations of primordial reality which we each run now. Compare how behavioural psychologists today experimentally use “supernormal stimuli” to hijack the reward circuitry of nonhuman animals.
Yet presumably selection pressure will then come into play. Any predisposition to spend one’s life in an Experience Machine rather than to raise biological children in basement reality will be strongly selected against.

So what will be the ultimate fate of life in basement reality? I don’t know.

André, you present an interesting alternative. Which is the lesser of two evils: a multitude of armed actors, or a single, democratic superstate with a monopoly on violence? Both scenarios are ugly. I don't canvass the latter with any enthusiasm. On balance, IMO a world state may be our best hope of avoiding catastrophic nuclear war. Yet I agree the scenario you sketch needs careful consideration.

We agree on the ethical unacceptability of coercive eugenics. However, unless humans edit the ugly genetic source code bequeathed by evolution, then pain, suffering and malaise will last as long as life itself. Can the dilemma be resolved? Part of the solution, I think, lies in education and voluntary genetic counselling - and also in how the issue is framed. Thus if asked, most people would respond that they oppose eugenics. They would also not wish to pass the cystic fibrosis allele on to their children. Indeed many if not most people would also wish to avoid passing on a predisposition to depression or schizophrenia. None need be discouraged from having children - just alerted to the likely health consequences of choosing particular alleles and allelic combinations.

Potential pitfalls? Yes, many.

Summerspeaker, for the evolutionary origins of the human male propensity to violence, perhaps see, e.g., Sex and War.
We have no grounds for biological determinism about aggressive violence - any more than we have grounds for invariably linking, e.g. ethyl alcohol consumption and traffic accidents. Most men don’t wage war most of the time; and most intoxicated drivers don’t cause road fatalities. But high testosterone function and ethyl alcohol consumption respectively are risk factors in each case.

@ summerspeaker:
Doesn’t matter what sense of the word it is; it’s still war, whether it was stones thrown by slingshots in the 5th millennium BCE or ballistic missiles in 2012. War means taking out an enemy with all means available, removing the enemy’s ability to wage war. Margaret Thatcher, Condoleezza Rice, and Madeleine Albright were victims of a Prisoner’s Dilemma: their choice was to be mannish in their chosen professions or to be marginalized. For women involved in American foreign policy, it is considered better to be Condi Rice wielding power than Betsy Ross sewing a flag.

Some nice utopian imagery in there, I enjoyed it (besides the critical thinking).

I haven’t yet heard anything about North Korea being part of any ‘new world order’ matrix, so I think your thoughts about nuclear war are right:

the coalitions that exist are not sufficient to prevent a nation from developing a technology that would put them above the risk/reward cutoff related to the mutually assured destruction principle.

It’s possible that Putin’s country could develop weaponry so sophisticated that they could prevent a nuclear retaliation.

The age of technological uncertainty and unforeseeable growth could give birth to game changing weaponry.

Discussing existential risk can make one sound awfully callous. Thus nuclear war on the Korean peninsula or in the Middle East is likely “only” to kill tens of millions of people; in the case of war between India and Pakistan, “only” a few hundred million might die. On the other hand, the planetary impact of a full-scale strategic exchange between China and the USA later this century would be unimaginably more dire.

Not everyone agrees on the meaning of “existential risk” (cf. anti-natalists like David Benatar).
But here at least, let’s assume that the survival of intelligent life - not least to provide future stewardship of our Hubble volume - should be our overriding priority. If so, what should be done?

Of the two war-prevention methods I touch on above, an elected world government with a monopoly on violence seems more likely to follow rather than precede global thermonuclear war - assuming humanity survives at all. And although electing an all-female political elite would IMO (probably) be technically effective, I simply don’t think such a socio-political revolution is going to happen: too many people, including a lot of transhumanists, find the idea too absurd for it to be debated on its technical merits.

Alternatives? Well, the establishment of truly self-sustaining bases on the Moon and Mars would probably cost a trillion or more dollars - almost as much as the world spends annually on arms. Their creation wouldn’t eliminate existential risk; but it might reduce such risk by an order of magnitude or more.  _If_ we think existential risk-reduction should be high if not foremost on our list of political priorities, then campaigning for a manifesto commitment from all of the major political parties to establish such self-sustaining bases would seem worthwhile. Thoughts?

Sex and War looks like exactly the sort of biological reductionism I view as a troubling influence on the transhumanist movement. While I don’t completely discount evolutionary psychology in theory - as some of my colleagues in the humanities do - the fixation on biology strikes me as a dangerous distraction from history, politics, and culture as relevant variables.

“Alternatives? Well, the establishment of truly self-sustaining bases on the Moon and Mars would probably cost a trillion or more dollars - almost as much as the world spends annually on arms. Their creation wouldn’t eliminate existential risk; but it might reduce such risk by an order of magnitude or more.  _If_ we think existential risk-reduction should be high if not foremost on our list of political priorities, then campaigning for a manifesto commitment from all of the major political parties to establish such self-sustaining bases would seem worthwhile. Thoughts?”

Agreed. The increasing destructive power available to each individual suggests that doing what you propose is the right course.

The purely scientific benefits of self-sustaining Lunar and Martian bases seem unlikely to outweigh the costs. As far as I can tell, the one and only compelling rationale for establishing such bases in the near-term future is existential risk-reduction. The issue hasn’t yet gained traction with politicians, mainstream academia, or the public. On the other hand, who knows what unlikely allies one might accumulate…

Summerspeaker, I fear you’d probably class me as a hardcore biological reductionist. But like all transhumanists, I want to use technology to transcend our biological limitations, not surrender to them.

David Pearce writes: “On theoretical grounds, I predict classical digital computers will never be capable of generating unitary phenomenal minds, unitary selves or unitary virtual worlds. In short, digital computers are invincibly ignorant zombies.”

Sensory input and the way in which the brain internally processes the mind-independent world is no more a simulation than a film about the LHC is a particle accelerator. In both cases, information has been taken in, transformed, and then new information created that is modulated and/or encoded in such a way that it can be communicated and understood by an interpretant. No physical rules about the universe are being utilized in the generation of these mental objects by the senses, nor their correlated sensory cortices.

Further, let a distinction be made between grounding mental objects of experience and binding them into a unitary mind. What follows first is the examination of the unitary subject:

Our mind is dependent upon the physiology of the brain, which is closed under physics. Abstractly, processes within a system can take place at multiple levels of organization. These levels of organization may be transitive with respect to both scale and systematization.

There is strong evidence that binding is dynamical at the neuronal level of organization, as it relies upon proper connectivity.

Sensory cortices modulate information into structured information patterns that the brain can understand. Studies in pain asymbolia suggest a causal relationship between the sensory cortices and the limbic system. In the syndrome, patients are aware of pain but are lacking the qualia—the ‘raw feel’—that gives it the mark of unpleasantness in healthy individuals.

This results in patients becoming indifferent to destructive pain, despite being aware of it. It shows what happens when something spontaneous is demoted to requiring volition to act upon: unless we undergo the unpleasant ‘raw feel’ of pain, we do not withdraw from it.

Let pain be considered as a mental object. This object is composite, in much the same way that a word in text can be italic or bold and still retain its meaning. Like the decoration and markup of text, this composite mental object has a continuum of associations and n-ary relations that allow it to carry additional meaning. This is a metaphor for what is happening in patients with pain asymbolia: part of the n-ary relations that deepen the complexity of the signification of pain has been destroyed or severed, causing pain to lose its meaning and significance.
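The markup analogy can be made concrete with a toy sketch. This is purely illustrative: `MentalObject`, `significance`, and the relation names are invented for this example, not drawn from any neuroscience model.

```python
from dataclasses import dataclass, field

@dataclass
class MentalObject:
    """Toy model: a core signification plus associative relations
    that enrich it, like markup decorating a word in text."""
    core: str
    relations: set = field(default_factory=set)

    def significance(self):
        # The object is recognized via its core; its felt quality
        # depends on which relations remain bound to it.
        return (self.core, sorted(self.relations))

pain = MentalObject("pain", {"unpleasantness", "withdrawal-urge", "alarm"})

# Asymbolia modelled as severed relations: the core percept survives,
# but the affective associations that made it matter are gone.
pain.relations -= {"unpleasantness", "withdrawal-urge"}

print(pain.significance())  # ('pain', ['alarm'])
```

As with stripping bold or italics from a word, the object is still recognizable; what is lost is the decoration that carried its urgency.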

What this also signifies is the very real possibility raised by the argument for philosophical zombies: the idea that an emulation of consciousness can duplicate an intelligent agent, replete with affect and critical reasoning, yet never be a conscious entity capable of the ‘raw feels’ that we both cherish and condemn as part of the human condition.

If an interruption in part of the brain’s connectivity results in a loss of fidelity in the binding of mental objects of experience, then it can be argued inductively that minds are what brains do, and not an emergent quality of some pantheistic notion of consciousness arising out of the ‘fire’ in the equations of physics. If that were the case, the experience of pain would not diminish as a result of damage to a particular area, since we would be conscious “everywhere”. Instead, wherever the seat of consciousness is, it appears it can be internally disconnected, in much the same way that one can strip away the color information from a digital image and still retain the majority of its propositional content.
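The image analogy at the end of that argument can be checked directly. A minimal sketch, assuming nothing beyond the standard library: dropping the chromatic channels of a tiny RGB “image” leaves the luminance structure, and with it most of the picture’s propositional content (which regions are brighter than which), intact.

```python
# A tiny "image" as rows of (R, G, B) pixels.
image = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 255, 255)],
]

def to_grayscale(img):
    """Strip chromatic information, keeping only luminance
    (standard ITU-R BT.601 luma weights)."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in row]
            for row in img]

gray = to_grayscale(image)
# Colour is gone, but the spatial structure survives: every pixel
# still has a definite brightness, so shapes and edges remain.
print(gray)  # [[76, 150], [29, 255]]
```

The disconnection in asymbolia is analogous: one channel of the composite is removed while the rest of the representation continues to function.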

As for the phenomenal experience and the ‘raw feels’ of qualia, the seat of consciousness is either responding to a specific pattern or the brain has evolved to exploit a fundamental character that allows for universal states of pleasure or pain. That is to say, while binding is dynamical, it may be that phenomenal experience is not. That said, it will not be important for an artificial sentience to have the same ontology as we do in order to “understand” what we feel, and what ought to be done in response to that inductive knowledge.

Just as we can never truly feel the pain of others except through a kind of mutual induction, whereby our own theory of mind creates a model that invokes a response in ourselves, simulating what it might or must feel like in the other individual, a benevolent artificial intelligence could be more empathetic than even the most compassionate human being alive today, making us appear “mind-blind” by contrast.

As with a constructivist approach to mathematics, one must take a pragmatic stance when it comes to the behavior of artificial agents: what does it matter whether it “feels” exactly what we “feel”, so long as, in the end, it behaves in a way that is copacetic? The same could be said of interactions with people.

Dustin, could you clarify what you mean when you say that the mind/brain isn’t running a simulation of the mind-independent world? The physical/phenomenal states of the mind/brain are not intrinsically about anything external to themselves. But when one is awake, our mental (“perceptual”) states continually track and causally co-vary with gross patterns in the macroscopic environment on account of cross-modally matched input from the optic and auditory (etc.) nerves. Peripheral input selects, but doesn’t create, the state sequences of our egocentric world-simulations. When we are dreaming, our world-simulations run more-or-less autonomously and psychotically. When one “wakes up” one doesn’t cease to instantiate a world-simulation; but the contents of that simulation are more tightly constrained. (cf. Antti Revonsuo’s “Inner Presence” for a thorough treatment of the world-simulation metaphor.)
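The select-versus-create distinction can be caricatured in a few lines of code. This is a toy sketch only: the state names and the selection rule are invented for illustration and carry no empirical weight.

```python
import random

STATES = ["neutral", "threat", "reward", "novelty"]

def world_simulation(steps, sensory_stream=None):
    """Toy generative model: candidate states are produced internally;
    peripheral input, when present, selects among them rather than
    creating any state of its own."""
    rng = random.Random(0)  # fixed seed for reproducibility
    trace = []
    for t in range(steps):
        proposals = rng.sample(STATES, k=2)  # internally generated candidates
        if sensory_stream is None:
            # "Dreaming": the simulation runs unconstrained.
            trace.append(rng.choice(proposals))
        else:
            # "Waking": input constrains which internal candidate
            # is instantiated next.
            cue = sensory_stream[t % len(sensory_stream)]
            trace.append(cue if cue in proposals else proposals[0])
    return trace
```

Waking and dreaming runs draw on the same internal repertoire; only the constraint differs, which is the substance of the metaphor.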

Humans have shown time and again that genuine motivation to “do good” in the world collapses in the presence of wealth, influence, power and control. Personal status trumps all else. The underlying truth of Orwell’s “Animal Farm” is replayed on stages both local and global year after year with no end in sight. The preservation of status at all costs is deeply embedded within our genetic code. The perks that accompany status greatly increase the likelihood that our descendants will cross the next generation finish line. However, ultimately the joke may be on us. What we each perceive as our own self-importance and special place in the world may be no more than our DNA having organically evolved the best possible armor to ensure the passing down of our respective genetic materials. That we believe sentience and intelligence somehow speaks to the superiority of the human condition may, in reality, be nothing more than our genetic code having pulled off one of the greatest acts of deception ever perpetrated—and humanity doesn’t have a clue!

Search the definition of “neural reality” for a closer look at the world simulator that creates what we each perceive as the “real world.”

SDMHI, I share your dark view of human nature, and indeed life on Earth. But the fact remains: humans are the one species capable of rewriting our own genetic source code and engineering the well-being of all sentience in our forward light-cone. In that sense, we are “special”. So it’s vital we survive this critical century without wiping ourselves out.

Like you, I’m not a perceptual direct realist. Indeed the very term “perception” is something of a misnomer: “data-driven real-time world-simulation” might be more accurate. Yet despite such precarious inferential realism, modern humans are capable of mathematically modelling everything from the Big Bang to the molecular machinery of life. If we couldn’t capture the structural properties of the world, the success of science would be a miracle. And there is nothing unreal or unimportant about pain and pleasure, agony and bliss. For reasons we simply don’t understand, the pain-pleasure axis discloses the world’s inbuilt metric of (dis)value. In principle at any rate, humans will shortly be able to relegate disvaluable experience below “hedonic zero” to history. Alas, the death-agonies of the old era may be prolonged.

We will only survive as a species if we overcome the inborn or acquired negative aspects of our human nature.

Hartmut - I am posting a comment from David Pearce, directed at you (he had trouble logging in).

(FROM DAVID PEARCE): Hartmut, I agree. Hence the urgency of rewriting our genetic source code. But will we have time to bootstrap our way to full-spectrum superintelligence before destroying ourselves in thermonuclear war?

On this count, I’m an optimist. I suspect “only” tens or hundreds of millions of people are likely to die in armed conflict this century. Compare last century’s toll of around 100 million. However, darker scenarios can be sketched too. There are too many variables to quantify their likelihood.

