Institute for Ethics and Emerging Technologies


Top Three Strategies for Avoiding an Existential Risk


By Phil Torres
Ethical Technology

Posted: Feb 13, 2016

Since the first species of Homo emerged in the grassy savanna of East Africa some 2 million years ago, humanity has been haunted by a small constellation of improbable existential risks from nature. We can call this our cosmic risk background. It includes threats posed by asteroid/comet impacts, supervolcanic eruptions, global pandemics, solar flares, black hole explosions or mergers, supernovae, galactic center outbursts, and gamma-ray bursts. While modern technology could potentially protect us against some of these risks — such as asteroids that could induce an “impact winter” — the background of existential dangers remains more or less unchanged up to the present.

But our existential predicament in this morally indifferent universe underwent a significant transformation in 1945, when scientists detonated the first atomic bomb in the New Mexican desert of Jornada del Muerto (meaning “single day’s journey of the dead man”). This event inaugurated a qualitatively new epoch in which existential risks can derive from both nature and humanity. (In some cases, the same risk category, such as pandemics, can derive from both.) Today, existential riskologists have identified a swarm of risks looming ominously on the threat horizon of the twenty-first century — risks associated with climate change, biodiversity loss, biotechnology, synthetic biology, nanotechnology, physics experiments, and superintelligence. The fact is that there are more ways for our species to bite the dust this century than ever before in history — and extrapolating this trend into the future, we should expect even more existential risk scenarios to arise as novel dual-use technologies are developed. One might even wonder about the possibility of an “existential risk Singularity,” given the exponential development of dual-use technologies. 

But not only has the number of scenarios increased in the past 71 years; many riskologists believe that the probability of a global disaster has also significantly risen. Whereas the likelihood of annihilation for most of our species’ history was extremely low, Nick Bostrom argues that “setting this probability lower than 25% [this century] would be misguided, and the best estimate may be considerably higher.” Similarly, Sir Martin Rees claims that a civilization-destroying event before the year 02100 is as likely as getting a “heads” after flipping a coin. These are only two opinions, of course, but to paraphrase the Russell-Einstein Manifesto, my experience confirms that those who know the most tend to be the most gloomy.
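To make the arithmetic behind such estimates concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes, purely for illustration, a constant and independent per-century risk, a simplification that neither Bostrom nor Rees commits to:

def survival_probability(per_century_risk, centuries):
    # Probability that no civilization-destroying event occurs over `centuries`,
    # assuming a constant, independent risk in each century (illustrative only).
    return (1.0 - per_century_risk) ** centuries

for label, p in [("Bostrom's 25% floor", 0.25), ("Rees' coin flip (50%)", 0.50)]:
    print(label, "-> P(survive 10 centuries) =", round(survival_probability(p, 10), 4))

Under Rees’ figure, surviving ten such centuries would have a probability of roughly 0.1 percent, which is the sense in which per-century risks of this magnitude compound alarmingly over longer horizons.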

In my forthcoming book on existential risks and apocalyptic terrorism, I argue that Rees’ figure is plausible. To adapt a maxim from the philosopher David Hume, wise people always proportion their fears to the best available evidence, and when one honestly examines this evidence, one finds that there really is good reason for being alarmed. But I also offer a novel — to my knowledge — argument for why we may be systematically underestimating the overall likelihood of doom. In sum, just as a dog can’t possibly comprehend any of the natural and anthropogenic risks mentioned above, so too could there be risks that forever lie beyond our epistemic reach. All biological brains have intrinsic limitations that constrain the library of concepts to which one has access. And without concepts, one can’t mentally represent the external world. It follows that we could be “cognitively closed” to a potentially vast number of cosmic risks that threaten us with total annihilation. This being said, one might argue that such risks, if they exist at all, must be highly improbable, since Earth-originating life has existed for some 3.5 billion years without an existential catastrophe having happened. But this line of reasoning is deeply flawed: it fails to take into account that the only worlds in which observers like us could find ourselves are ones in which such a catastrophe has never occurred. It follows that a record of past survival on our planetary spaceship provides no useful information about the probability of certain existential disasters happening in the future. The facts of cognitive closure plus the observation selection effect suggest that our probability conjectures of total annihilation may be systematically underestimated, perhaps by a lot.
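The observation selection effect lends itself to a toy Bayesian illustration (a sketch with made-up numbers, not the formal argument developed in the book): compare a naive update on our 3.5-billion-year survival record with an update that conditions on the fact that observers can only find themselves in worlds where that record exists.

# Two toy hypotheses about the per-eon probability that Earth-originating life
# survives a sterilizing catastrophe (the numbers are illustrative only).
priors = {"low_risk": 0.5, "high_risk": 0.5}
p_survive_per_eon = {"low_risk": 0.99, "high_risk": 0.50}
eons = 3.5  # roughly 3.5 billion years of life on Earth

# Naive update: treat the survival record as ordinary evidence.
naive = {h: priors[h] * p_survive_per_eon[h] ** eons for h in priors}
total = sum(naive.values())
naive = {h: round(v / total, 3) for h, v in naive.items()}

# Anthropic correction: conditional on observers existing at all, survival is
# "observed" under both hypotheses, so the likelihood ratio is 1 and the
# posterior simply equals the prior.
anthropic = priors

print("Naive posterior:    ", naive)      # strongly favors "low_risk"
print("Anthropic posterior:", anthropic)  # the survival record tells us nothing

Whatever numbers one plugs in, the structure of the calculation is the same: once the selection effect is taken into account, the long record of past survival cannot by itself discriminate between the benign and the dangerous hypotheses.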

However theoretically pessimistic one might be given such considerations, the practical optimist wants to know what can be done to maximize the probability of an “okay outcome” for humanity. In other words: What can we do to avoid a worst-case scenario for the self-described “wise man” (Homo sapiens)? What strategies could help us navigate the expanding wilderness of risks before us? Obvious answers include making sure that NASA’s Near Earth Objects project (which aims to spy on possible assassins from the sky) is properly funded; implementing environmental regulations to reduce carbon emissions; improving our educational system and making Epistemology 101 a required course (a project that I very strongly advocate); supporting institutions like the IEET, FHI, and CSER; creating permanent bunkers underground; and staggering the development of advanced technologies like molecular manufacturing and superintelligence. (As I put it in the book, “in the end, there may be a way to slalom around the threats before us, since we’re whooshing downhill anyways.”)

But three strategies in particular stand out as especially promising, at least when considered in the context of what I call “Big Future” (the other half of “Big History”). I explore these in detail in The End, but for now will provide an abbreviated account of them, in no particular order:

(1) Superintelligence. To modify an often-cited phrase from IJ Good, the creation of a superintelligent mind could be the last problem we ever have to solve. This includes problems of the existential risk variety. The idea is that a friendly superintelligence who genuinely cares about our well-being and prosperity could help guide us through the bottleneck of heightened hazards that defines the current century. It could help us neutralize the explosion of anthropogenic — or more specifically, technogenic — risks that threaten civilization with a catastrophe of existential proportions. Even more, a “qualitative” superintelligence with different concept-generating mechanisms could potentially see cosmic risks to which humanity is conceptually blind. Just as a dog wandering the streets of Hiroshima on August 6, 1945 couldn’t have possibly understood that it was about to be vaporized — that is, given the conceptual limitations intrinsic to its evolved mental machinery — so too could we be surrounded by risks that are utterly inscrutable to us, as discussed above. A qualitative superintelligence could potentially identify such risks and warn us about them, even though such a warning would be unintelligible to even the most clever human beings.

Unfortunately, the creation of a superintelligence also poses perhaps the most formidable long-term risks to the future of our lineage. To paraphrase Stephen Hawking, if superintelligence isn’t the best thing to ever happen to us, it will probably be the worst. There are several issues worth mentioning here. First, the amity-enmity problem: the AI could dislike us for whatever reason, and therefore try to kill us. Second, the indifference problem: the AI could simply not care about our well-being, and thus destroy us because we happen to be in the way. And finally, the clumsy fingers problem: the AI could inadvertently nudge us over the cliff of extinction rather than intentionally pushing us. This possibility is based on what might be called the “orthogonality thesis of fallibility,” which states that higher levels of intelligence aren’t necessarily correlated with the avoidance of certain kinds of mistakes. (The avoidance of some mistakes, of course, would be a convergent instrumental value of intelligent agents.)

Consider the case of Homo sapiens. We have highly developed neocortices and much greater encephalization quotients than any other species, yet we’re also the culprits behind the slow-motion catastrophes of climate change and the sixth mass extinction event, both of which threaten our planet with environmental ruination. Even more, the fruits of our ingenuity — namely, dual-use technologies — have introduced brand new existential risk scenarios never before encountered by Earth-originating life. If intelligence is “a kind of lethal mutation,” as Ernst Mayr once intimated in a debate with Carl Sagan, then what might superintelligence be? Indeed, given the immense power that a superintelligence would wield in the world — perhaps being able to manipulate matter in ways that appear to us as pure magic — it could take only a single error for such a being to trip humanity into the eternal grave of extinction. What can we say? It’s only superhuman.

(2) Transhumanism. I would argue that we’re entering a genuinely unique period in human history in which the “instrumental rationality” of our means is advancing far more rapidly than the “moral rationality” of our ends. (In a forthcoming Humanist article, I suggest that this means-ends mismatch could offer a specific solution to the Fermi Paradox.) Our ability to manipulate and rearrange the physical world is growing exponentially — perhaps according to Ray Kurzweil’s “law of accelerating returns” — as a result of advanced technologies like biotechnology, synthetic biology, nanotechnology, and robotics. Some of these technologies are also becoming more accessible to smaller and smaller groups. At the extreme, the dual trends of power and accessibility could enable terrorist organizations or lone wolves to wreak genuinely unprecedented havoc on society. The future is, it seems, one in which the power of individuals could eventually equal the power of the state itself, at least in the absence of highly invasive surveillance systems.

This being said, if the capacity to destroy the world becomes widely distributed among populations, what kind of person would want to press the “obliterate everything” button? A few possibilities come to mind: first, a deranged nutcase with a grudge against the world. An example of this mindset comes from Marvin Heemeyer, who committed suicide after building a “futuristic” tank and demolishing an entire village with which he had a dispute. Another possibility is an ecoterrorist group that’s convinced that Gaia would be better off without Homo sapiens. These are both major agential risks moving forward, and thus existential riskologists ought to keep their eyes fixed on them as the twenty-first century unfolds. But there’s yet another possibility that could pose an even greater overall threat in the future, namely apocalyptic religious cults. Unfortunately, few existential scholars are aware of the extent to which history is overflowing with apocalyptic groups that not only believed in an imminent end to the world, but enthusiastically celebrated it.

At the extreme, some of these groups have adopted what the scholar Richard Landes calls an “active cataclysmic” approach to eschatology, according to which they see themselves as active participants in an apocalyptic narrative that’s unfolding in realtime. For groups of this sort, the value of post-conflict group preservation that guided the actions of Marxist, anarchist, and nationalist-separatist groups in the past simply doesn’t apply. Active cataclysmic movements don’t merely want a fight; they want a fight to the death. On their view — held with the unshakable firmness of faith — the world must be destroyed in order to be saved, and sacrificing the worldly for the otherworldly is the ultimate good in the eyes of God. This is why I focus specifically on religion in my book: history is full of apocalyptic movements, and in fact (as I elaborate in a forthcoming Skeptic article) there are compelling historical, technological, and demographic reasons for believing that a historically anomalous number of such movements will arise in the future. For reasons such as these, apocalyptic activists constitute arguably the number one agential threat moving forward. Existential riskologists must not overlook this fact — we must not shy away from criticizing religion — because tools without agents aren’t going to initiate a global catastrophe.

The point is that such considerations make it hard to believe that our species can be trusted with advanced technologies. We’re no longer children playing with matches; we’re children playing with flamethrowers that could easily burn down the whole global village. Either we need a “parental” figure of some sort to watch over us — option “(1)” above — or we need to grow up as a species. And it’s here that transhumanism enters the picture in a huge way. The transhumanist wants to use technology to modify the human form, including our brains, in various desirable ways. Scientists have already designed pills that can modify our moral character by making us more empathetic. Perhaps there are cognitive enhancement technologies that can augment our mental faculties and, in doing so, inoculate us against certain kinds of delusions — and therefore neutralize the agential risk posed by apocalyptic extremism.

To put this idea differently, transhumanists actively hope for human extinction. But not the sort of extinction that terminated the dinosaurs or the dodo. Rather, the aim is to catalyze a techno-evolutionary process of anagenetic cyborgization that results in our current population being replaced by a smarter, wiser, and more responsible form of posthuman: Posthumanus sapiens. It may be that, as I write in the book, what saves us from a “bad” extinction event is a “good” extinction event. Or, to put this idea in aphoristic form: to survive, we must go extinct.

(3) Space colonization. I would argue that this offers perhaps the most practicable strategy for avoiding an existential catastrophe, all things considered. It requires neither the invention of a superintelligence nor the sort of radical cognitive enhancements discussed above. The idea is simple: the wider we spread out in the world, the less chance there is that a single event will have worldwide consequences. A collapse of the global ecosystem on Earth wouldn’t affect colonies on Mars, nor would a grey goo disaster on (say) Gliese 667 Cc affect those living on spaceship Earth. Similarly, a disaster that wipes out the Milky Way in 1,000 years might be survivable if our progeny also resides in the Andromeda Galaxy.
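The arithmetic behind this dispersal argument is simple enough to sketch in a few lines of Python, with the crucial and debatable assumption that each colony’s fate is independent of the others:

def p_all_lost(p_single, n_colonies):
    # Probability that every colony is destroyed, assuming each faces the same
    # per-period risk p_single and that failures are independent (illustrative only).
    return p_single ** n_colonies

for n in (1, 2, 5, 10):
    print(n, "self-sustaining colonies, p = 0.5 each -> P(all lost) =", p_all_lost(0.5, n))

Independence is the load-bearing assumption here: as discussed below, risks that are correlated across colonies (a hostile civilization, a vacuum decay) are exactly the ones this strategy cannot neutralize.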

As it happens, NASA recently announced that there will be Earth-independent colonies on Mars by the 2030s, and Elon Musk has said that he’s hoping to launch the first flight to Mars “in around 2025.” As Musk described his motivation in 2014, “there is a strong humanitarian argument for making life multi-planetary . . . in order to safeguard the existence of humanity in the event that something catastrophic were to happen.” This sentiment was echoed by the former NASA administrator, Michael Griffin, who claimed that “human expansion into the solar system is, in the end, fundamentally about the survival of the species.” Similarly, Hawking has opined that he doesn’t “think the human race will survive the next thousand years, unless we spread into space.” So, there’s growing momentum to distribute the human population throughout this strange universe in which we find ourselves, and numerous intellectuals have explicitly recognized the existential significance of space colonization. Given the minimal risks involved, the relatively minimal cost of colonization programs (for example, it requires neither “(1)” nor “(2)” to be realized), and the potential gains of establishing self-sustaining colonies throughout the galaxy, this strategy ought to be among the top priorities for existential risk activists. To survive, we must colonize.

It’s worth noting here that while colonization would insulate us against a number of potential existential risks, there are some risks that it wouldn’t stop. A physics disaster on Earth, for example, could have consequences that are cosmic in scope. In particular, the universe might not be in its most stable state. Consequently, a high-powered particle accelerator could tip the balance, resulting in a “catastrophic vacuum decay, with a bubble of the true vacuum expanding at the speed of light.” (Again, perhaps a superintelligence could help us avoid a mistake of this sort.) Another possibility is that a bellicose extraterrestrial species with a powerful space military and a foreign policy that encourages preemptive war could conquer our astronautical descendants by destroying one inhabited planet at a time. At the extreme, no amount of spreading throughout the universe would protect us against an aggressive, rogue civilization like this. And finally, space colonization also won’t protect us against the ultimate Great Filter, namely the entropy death of the universe. This distant future event appears inevitable given the second law of thermodynamics, although some cosmologists have suggested that there may be a way to pull off the greatest prison escape of all by slipping into a neighboring universe.

Our situation in the universe has always been precarious, but it’s even more so this century. These are, in my view, the three most promising “big-picture” strategies for surviving the obstacle course of existential risks in front of us.


Phil Torres is an author and artist. His forthcoming book is called The End: What Science and Religion Tell Us About the Apocalypse (Pitchstone Publishing). You can contact him here: philosophytorres@gmail.com.


COMMENTS


@Torres

I will gladly download your new book tomorrow, its release date.

I totally agree with your trio of solutions and think it is wisdom itself. I have only two opinions to add, based on my experience of life, works of history, and the conclusions of anthropology. I am not an anthropologist, but I often find useful things to read about in that science, whether cultural, physical, or biological anthropology. Often power vacuums cause instability, as do over-reaching warmongers. Or, rather, to the human brain there develops a synergy between perceived weakness and perceived strength.

One possibility I will now raise is Hobbes’ Leviathan: basically, if there is not enough peace, there is not enough commerce, which then leads to more conflict. In fact, I once read in a Physorg extract that the peoples of the Middle East favored economic advances over peace agreements, which never last, while the emotional human reaction to economic goodies seems to have a solidity underpinning things, psychologically speaking. I refer to Hobbes’ Leviathan as a force for calm (not peace!). My ignorance was lifted a bit by Stanford professor Ian Morris’s book, “War! What Is It Good For?” A brilliant book, and yes, that’s the title.

http://www.amazon.co.uk/War-conflict-civilisation-primates-robots/dp/184668417X

This might mitigate human-initiated catastrophes, as opposed to asteroids, supervolcanoes, and gamma-ray tidal waves from beyond the stars.

A second path is my conception of how religion might be used to the advantage of calm, and thus human survival. My focus would be to develop religious answers to the existential thinking that drives all human beings. I am not a psychologist, but based on interactions with Muslims I have met, and the current low-grade war between us, it might call for a fix. This fix (no small order!) would be to come up with a plausible afterlife scenario that is consistent with astronomy and physics, and especially computation.

People like Moravec and Tipler have dabbled in this, and more recently fellow IEET contributors, the physicist Giulio Prisco and the AI leader Ben Goertzel, have focused on it with their upgrade of the old Russian philosophy of Cosmism. I once asked a Muslim poster on the Kurzweil forum what motivated the central focus of the Jihadists and Islamists, and their actions. The writer concluded, based on his upbringing in Pakistan, that it was primarily the fear of eternal death, and/or a permanent fiery hell, that drove them onward, rather than, as has been said, Paradise (Jannah) with its virgins, foods, and good weather. Sounds a bit like Vegas, minus the virgins of course.

The reason I wrote about all this is that I believe that if the greater part of our species took to this philosophy, it would promote calm and thus survival; plus, if it’s developed and researched, it might also have the curious benefit of being true. I like to think of it as developing a mental app that can benefit atheist and believer alike, and help us survive.

In any case I will download your book tomorrow morn. Much thanks.





Hi Phil,

I’d like to point out that Transhumanism is not about human extinction.  I know of no one who would say this or hope for it. I think you miss a very important point.  That point is that transhumanists desire diversity and for a transhumanist to survive means overcoming human limitations, not being human. If a person wants to remain human, that is his/her choice.





Natasha:

Thanks for the comment! A few (very) quick thoughts about it:

As far as I can tell, you make two points. One is that “Transhumanism is not about human extinction” and the other is that “If a person wants to remain human, that is his/her choice.”

With respect to the first, you claim that “transhumanists desire diversity.” I would disagree. I think transhumanists (explicitly, unambiguously) desire a better, superior, improved, more worthwhile and valuable mode of being: call it posthumanity. As Bostrom writes in a 2008 paper (echoing many other thinkers), there are posthuman states that may be immensely better than our current state. He further argues that transitioning to such states would be “good” (in the moral sense). In other words, in a perfect world, posthumanity would take the place of humanity, and everyone would be better off as a result.

The issue of diversity only enters the picture when it comes to the moral question of whether or not transitioning to a better state via enhancement technologies (broadly construed) should be compulsory — and on my reading of the literature, virtually no one argues that others should be forced to abandon their humanity. If someone wants to remain a lowly human, fine.

So, I would say the exact opposite: I don’t really know of any major transhumanist figure who doesn’t hope for the extinction of our species (although few if any have actually couched it this way, which is why I like it). To quote Bostrom again, this time from his “Transhumanist Values” paper, humanity is a “half-baked” “work-in-progress.” The paper continues: “Transhumanists hope[!] that by responsible use of science, technology, and other rational means we [humanity] shall eventually manage to become posthuman, beings with vastly greater capacities than present human beings have.”

In sum, I would strongly disagree with your first point and strongly agree with your second.

Finally, it’s perhaps worth pointing out that, as numerous transhumanists have observed, it seems quite likely that those who resist posthumanity will ultimately vanish from spaceship Earth, given their inferior status in such a future world. As Kurzweil responds to Ned Ludd in a fake conversation (from The Singularity is Near), after Ned expresses his obstinate predilection for the biological status quo: “Well, if you’re speaking for yourself, that’s fine with me. But if you stay biological and don’t reprogram your genes, you won’t be around for very long to influence the debate.” Kurzweil makes clear here that people should have the right to remain human, but that doing so is ultimately a losing choice. And I would agree.





Phil,  On my first point, you seem to rely on Bostrom, who is an agile thinker but does not represent the scope of transhumanism.

On the second point, Kurzweil agrees with the principle of Morphological Freedom, a unique and valued transhumanist policy. You could dig a little deeper here.

Relatedly, diversity is essential. Consider a fine thinker: Martine Rothblatt.

Regarding perfection: I do not believe in “perfect” worlds and find that to be in contrast to transhumanism. Perfect is a state of perfection and the transhumanist vision is to continually improve to a desired state of existence. This is a process, not an end-zone.

Read _Transhumanism and Its Critics_. Also read _The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future_. http://www.amazon.com/The-Transhumanist-Reader-Contemporary-Technology-ebook/dp/B00BQZK6MU/ref=dp_kinw_strp_1

http://www.amazon.com/Transhumanism-Critics-Gregory-R-Hansell-ebook/dp/B004NBZABQ/ref=dp_kinw_strp_1

These writings will give you a more complete resource.

Finally, since you didn’t realize it before, now you know another major transhumanist thinker.

Natasha





Natasha:

Thanks again for these thoughtful comments. I don’t think that Bostrom represents all of transhumanism, a diverse tradition of overlapping ideas, to be sure. (Indeed, there are Mormon transhumanists!) I do think I could have been more specific with a citation that qualifies the sense of “transhumanism” to which I’m referring (e.g., to the Transhumanist Values article in which Bostrom says almost exactly what I say, namely that transhumanism—on his view—hopes for the replacement of humanity with a better species of posthumans). But I’m not sure that a short article like this warrants such precision.

Also, I don’t suggest that Kurzweil opposes morphological freedom, so I’m unclear why that was mentioned.

Furthermore, the reference to a “perfect world” doesn’t entail an anti-transhumanist belief in perfect worlds. I use it in the colloquial sense, to refer to a regulative ideal: “In a perfect world, I would give my coffee money to children dying of starvation.” I’m not making any metaphysical claims about the existence of perfect worlds, simply stating that *if* things were perfect, such-and-such would be the case.

And finally, I’m not sure why you think I “didn’t realize it before,” referring to “another major transhumanist thinker.” I wouldn’t simply assume what others know or don’t know, as this goes against the principle of charity.

Nonetheless, I appreciate the book recommendations, and indeed I highly recommend them to others who might not have had a chance to peruse the many fascinating chapters that they contain.





Spud100:

Thanks so much for the comment! I think you make some really interesting points. I’ve read scholars of religious terrorism who’ve said something like: during periods of calm, religion often provides an effective peaceful outlet for the violent impulses of our primate species. We talk about Jesus’ crucifixion, eat his body and drink his blood, partake in rituals that commemorate past tragedies (such as the battle of Karbala, in the Shi’ite tradition), and discuss future catastrophic events that will usher in a new, gloriously remade world right here on Earth. Such beliefs can help keep the peace. (On the flip side, though, when societal structures begin to break down, these very same beliefs can motivate individuals to engage in acts of horrific aggression and barbarity!)

I’m not sure what the answer is. I’m inclined to say, in tentatively absolute terms (if you don’t mind the oxymoron), that we really can’t proceed forward if our beliefs aren’t properly hinged to reality via the best available evidence—and never faith. In other words: how can one hope to navigate a forest if one believes that the trees are located in different places than they actually are? Surely this is a recipe for a concussion! But perhaps you’re right and a local optimum for humanity is a set of “religious” beliefs that help to maintain some degree of order. I’ll have to give this more thought!

Thanks again.





Thanks, Phil, and I just downloaded your new book by yourself and Blackford, as I mentioned yesterday. I tend to agree with Madame Vita-More’s point that not all transhumanist thinkers are anti-biological, in the sense of wanting the species to end so as to become electron or photonic patterns dwelling inside computers. It may indeed happen this way, given long enough human survival to develop such uploading methods. Surprisingly, Frank Tipler agreed with the total species upload idea in his book, The Physics of Christianity. The thought itself gives me the willies, but I consider this a personal twitch, and to each their own.

Beating the human-caused existential threats tends to rely more on being able to join in Hobbes’ Leviathan, to ensure ‘calm through commerce,’ which is now eroding even as I type. This takes both good intentions (Peace, Man!) and a willingness to be “bloody minded,” as the British used to say (War, Man!), so as to better ensure human survival and technical progress. This was why I suggested that Stanford professor Ian Morris’s book on war be added, at some point, to your reading list. It fits in with your own books in an interesting way.

Lastly, yeah, the existential thing is what drives us on, at least according to the UK philosopher Stephen Cave, whose book, Death, is also informative. A psychological, and eventually a physical, fix for this can take many forms, not the least of which is some sort of resurrection. Dr. Prisco’s musing about all this he has jokingly termed “slow uploading.” He and Ben Goertzel do not claim that all this is doable in the next 100 years; it would be the labor of centuries, if not millennia. However, this was Moravec and Tipler’s contention as well. Let’s just say that it’s probably not a great idea to spend one’s life awaiting the arrival of the Great Pumpkin, Charlie Brown (which was Schulz’s point!). However, the Great File Restoring (as I term it) would help in keeping things calmer on planet Earth, IF more people found it scientifically plausible. Thus, less existential conflict.

Best Regards,

Mitch





Hi everybody!
I have created a roadmap of x-risk prevention, and I think that it is complete and logically ordered. I would like to get comments from the community on it. The pdf is here: http://immortality-roadmap.com/globriskeng.pdf

I also do not agree with the portrait of transhumanism suggested here. Transhumanism is, at its core, a striving for immortality.





An interesting breakdown of existential threats and their possible remediation, A. Turchin. Excellent.




