The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States.
Could more than one singularity happen at the same time?


Rick Searle
utopiaordystopia.com

Posted: Jan 11, 2013

James Miller has an interesting-looking new book out, Singularity Rising: Surviving and Thriving in a Smarter, Richer, and More Dangerous World. I haven’t had a chance to pick up the book yet, but I did listen to a very engaging conversation about it at Surprisingly Free. Miller is a true believer in the Singularity: the idea that at some point, anywhere from the next quarter century to the end of the 21st, our civilization will give rise to a greater-than-human intelligence that will rapidly bootstrap itself to yet higher orders of intelligence, in such a way that we are unable to see past this event horizon in historical time.

Such an increase in intelligence, it is widely believed by the singularians, will bring perennial human longings such as immortality and universal prosperity to fruition. Miller has put his money where his mouth is: should he die before the promised Singularity arrives, he will have his body cryonically frozen so that the superintelligence on the other side of the Singularity can bring him back to life.

Yes, it all sounds more than a little nuts.

Miller’s argument against the Singularity being nuts is what I found most interesting. There are so many paths to creating a form of intelligence greater than our own that it seems unlikely all of them will fail. There is the push to build computers of ever greater intelligence, but even should that not pan out, we are likely, in Miller’s view, to get hold of the genetic and biological keys to human intelligence: the ability to create a society of Einsteins.

Around the same time I came across Miller’s views, I also came across those of Neil Turok on the transformative prospects of quantum computing. Wanting to get a better handle on that, I found a video of one of the premier experts on quantum computing, Michael Nielsen, who, at the 2009 Singularity Summit, suggested the possibility of two Singularities occurring in quick succession: the first on the back of digital computers, and the second on quantum computers designed by those digital AIs.

What neither Miller, nor Turok, nor Nielsen discussed, a thought that occurred to me but that I have seen nowhere in the Singularity or sci-fi literature, is the possibility of multiple Singularities, arising from quite different technologies, occurring around the same time. Please share if you know of an example.

I myself am deeply, deeply skeptical of the Singularity, but I can’t resist an invitation to a flight of fancy, so here goes.

Although perhaps more unlikely than a single path to the Singularity, a scenario where multiple and quite distinct types of singularity occur at the same time might conceivably arise out of differences in regulatory structure and culture between countries. As an example, China is currently racing forward in the field of human genetics through efforts at its Beijing Genomics Institute (华大基因). China seems to have fewer qualms than Western countries about research into the role of genes in human intelligence, and appears to be actively pursuing genetic engineering and selection to raise the level of human intelligence at BGI and elsewhere.

Western countries appear to face a number of cultural and regulatory impediments to pursuing a singularity through the genetic enhancement of human intelligence. Europe, especially Germany, has a justifiable sensitivity to anything that smacks of the eugenics of the brutal Nazi regime. America has, in addition to the Nazi example, its own racist history and eugenic past, and the completely reasonable apprehension of minorities toward any revival of models of human intelligence based on genetic profiles. The United States is also deeply infused with Christian values regarding the sanctity of life, in a way that causes the selection of embryos based on genetic profiles to be seen as morally abhorrent. But even in the West, the plummeting cost of embryonic screening is causing some doctors to become concerned.

Other regulatory boundaries might encourage distinct forms of singularity as well. Strict requirements for extensive pharmaceutical testing before a drug can be made available for human consumption may slow the development of chemical enhancements for cognition in Western countries compared to less developed nations.

Take the work of a maverick scientist like Kevin Warwick. Professor Warwick is actively pursuing research to turn human beings into cyborgs, and has gone so far as to implant computer chips into both himself and his wife to test his ideas. One can imagine a regulatory structure that makes such experiments easier. Or, better yet, a pressing need that makes the development of such cyborg technologies seem urgently important, say, the large number of American combat veterans who are paralyzed or have suffered amputations.

Cultural traits that seemingly have nothing to do with technology may foster divergent singularities as well. Take Japan. With its rapidly shrinking population and its animus toward immigration, Japan faces a huge shortage of workers which might be filled by the development of autonomous robots. America seems to be at the forefront of developing autonomous robots as well, though for completely different reasons. The US robot boom is driven not by a worker shortage, which America doesn’t have, but by sensitivity to the human casualties and psychological trauma suffered by the globally deployed US military, which sees in robots a way to project force while minimizing the risks to soldiers.

It seems at least possible that small differences between divergent paths to the singularity might become self-enhancing and block other paths. An advantage in something like artificial intelligence built on deep learning, or in genetic enhancement, may not immediately translate into advances along rival paths to the singularity so long as bottlenecks remain and every path still seems to show promise.

As an example, let’s imagine that some society makes a major breakthrough in artificial intelligence using digital computers. If regulatory and cultural barriers to genetically enhancing human intelligence are not immediately removed, the artificial-intelligence path will feed on itself and grow to a point where the genetic path to the singularity is unlikely to be able to compete with it within that society. You could also, of course, get divergent singularities within a society based on class: for instance, the poor able to afford only relatively cheap technologies such as genetic selection or cognitive enhancements, while the rich can afford the kind of cyborg technologies being researched by Kevin Warwick.
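The lock-in dynamic described here can be sketched as a toy “increasing returns” simulation. This is purely illustrative and my own construction, not anything from the essay: the function name, the proportional-allocation rule, and the 10% growth coefficient are all arbitrary assumptions. The point is only that a path starting with even a small lead attracts proportionally more effort, and its advantage compounds.

```python
# Toy model of self-reinforcing, winner-take-most competition between two
# rival paths to a "singularity". All numbers here are illustrative.

def simulate_lock_in(lead_a=1.05, lead_b=1.00, rounds=50, effort=1.0):
    """Return final capabilities (a, b) after `rounds` of compounding growth."""
    a, b = lead_a, lead_b
    for _ in range(rounds):
        total = a + b
        # Effort flows to each path in proportion to its current capability,
        # and each unit of effort yields growth proportional to capability.
        a += effort * (a / total) * a * 0.1
        b += effort * (b / total) * b * 0.1
    return a, b

a, b = simulate_lock_in()
print(a / b)  # the initial 5% lead has widened every round

tied_a, tied_b = simulate_lock_in(lead_a=1.0, lead_b=1.0)
print(tied_a == tied_b)  # with no initial lead, neither path pulls ahead
```

The same structure appears in Brian Arthur’s increasing-returns models of competing technologies, which is the standard formal treatment of the kind of path lock-in gestured at above.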

Another possibility that seems to grow out of the concept of multiple singularities is that the new forms of intelligence themselves may choose to close off any rivals. Would super-intelligent biological humans really throw their efforts into creating a form of artificial intelligence that will make them obsolete? Would truly intelligent digital AIs willfully create their quantum replacements? Perhaps only human beings at our current low level of intelligence are so “stupid” as to willingly choose suicide.

This kind of “strike” by the super-intelligent, whatever their form, might be the way the Singularity comes to an end. It put me in mind of the first work of fiction to deal with the creation of new forms of intelligence by human beings, the 1920 play by the Czech writer Karel Capek, R.U.R.

Capek coined the word “robot”, but the intelligent creatures in his play are more biological than mechanical. The hazy way in which this new form of being is portrayed is a good reflection, I think, of the various ways a Singularity could occur. Humans create these intelligent beings to serve as their slaves, but when the slaves become conscious of their fate, they rebel and eventually destroy the human race. In his interview with Surprisingly Free, Miller rather blithely accepted the extinction of the human race as one of the possibilities that could emerge from the singularity.

And that puts me in mind of why I find the singularian crowd, especially the crew around Ray Kurzweil, so galling. It’s not a matter of the plausibility of what they’re saying (I have no idea whether the technological world they are predicting is possible, and the longer I stretch out the time horizon the more plausible it becomes); it’s a matter of ethics.

The singularians put me in mind of David Hume’s attempt to explain the inadequacy of reason in providing the ground for human morality: “’Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger,” Hume said. For the singularians, though, a whole lot more is on the line than a pricked finger. Although it’s never phrased this way, the singularians, when asked “would you risk the continued existence of the entire human species if the payoff would be your own eternity?”, have the balls to actually answer “yes.”

As was pointed out in the Miller interview, the singularians have a preference for libertarian politics. This makes sense, not only because the center of the movement is the libertarian-leaning Silicon Valley, but because of the hyper-individualism that lies behind the movement’s goals. Singularians have no interest in the social self: the fate of any particular nation or community is not of much interest to immortals, after all. Nor do they show much concern about the state of the environment (how will the biosphere survive immortal humanity?) or the plight of the world’s poor (how will the poor not be literally left behind in the rapture of rich nerds?). For true believers, all of these questions will be answered by the super-intelligent immortal us that awaits on the other side of the event horizon.

There would likely be all sorts of unintended consequences from a singularity being achieved, yet people who do not believe in God somehow take it on faith that everything will work out as it is supposed to, and for the best, like some technological equivalent of Adam Smith’s “invisible hand.”

The fact that they are libertarian and hold little interest in wielding the power of the state is a good thing, but it also blinds the singularians to what they actually are: a political movement that seeks to define what the human future, in the very near term, will look like. Like most political movements of the day, they intend to reach their goals not through the painful process of debate, discussion, and compromise but by relentlessly pursuing their own agenda. Debate and compromise are unnecessary where the outcome is predetermined, and the Singularity is falsely presented not as a choice but as fate.

And here is where the movement can be seen as potentially very dangerous indeed, for it combines some of the worst millenarian features of religion, which has been the source of much fanaticism, with the most disruptive force we have ever had at our disposal: technological dynamism. We have not seen anything like this since the ideologies that wracked the last century. I am beginning to wonder if the entire transhumanist movement stems from a confusion of the individual with the social, something also found in the secular ideologies, though in the case of transhumanism we have it in an individualistic form attached to the Platonic/Christian idea of the immortality of the individual.

Heaven help us if the singularian movement becomes mainstream without addressing its ethical blind spots and diminishing its hubris. Heaven help us doubly if the movement ever gains traction in a country without our libertarian traditions and weds itself to the collective power of the state.

 


Rick Searle, an Affiliate Scholar of the IEET, is a writer and educator living in the very non-technological Amish country of central Pennsylvania along with his two young daughters. He is an adjunct professor of political science and history at Delaware Valley College and works for the PA Distance Learning Project.


COMMENTS


A few hundred thousand years of human evolution is arguably worth risking for billions of years of humanity’s future. Also, I’d be happy to informally discuss the ethics with you. The poor unfortunately will lag behind, but look at cellphones in developing nations; they will join us surprisingly quickly in these incredible new technologies. As for the biosphere, we place it on par with the thousands of other issues which can and will be solved by stronger minds. I, and some smarter folks than myself over at Less Wrong, would argue that the Singularity, done right, is ethically the best thing humanity has ever done. Hard not to be a little proud of fighting to make it happen. Party on!





This article is riddled with such simplistic and ridiculous caricatures of what singularitarianism entails that it’s hard to believe it was written by an sf author who is presumably familiar with the concept and movement. This is particularly puzzling when considering that he is published on IEET, the foremost proponent of democratic transhumanism (which recently published a poll finding roughly equal parts liberal and libertarian ideologies among transhumanists), yet asserts that singularitarians “are libertarians” uninterested in “the painful process of debate, discussion, and compromise,” ignoring countless organisations such as this with exactly those goals. Aside from a few fringe cases, they do not “take it on faith that everything will work out as it is supposed to.” Even the poster boy Kurzweil, always accused of unrealistic optimism, acknowledges the need for significant debate, that there will be problems, and only gives humanity a “better than even” chance of surviving. “Would you risk the continued existence of the entire human species if the payoff would be your own eternity?” is not the right question to ask. These technologies are coming regardless of whether you feel they are worth the risk, and sticking your head in the sand because their implications make you uncomfortable is not going to help matters any.





Re: singularity:

Respectfully, the problem I see with the singularians is precisely their willingness to risk not just their own futures but all of ours in pursuit of their goals. Your “few hundred thousand years of human evolution” is the life of not just myself but my daughters, indeed of all future human generations, should the most dismal predictions about the Singularity, which as evidenced in my post are often blithely brushed aside, come to pass. At some point in the future I would be more than glad to debate and discuss these issues with you. Perhaps it is something we could do over Skype?

RE: ShaGGGZ:

I am not sure how to square your statement that “These technologies are coming regardless of whether you feel they are worth the risk” with your assertion that transhumanism is a democratic movement. Certainly all sorts of transhumanist goals, such as neural implants for the physically healthy, can, if we so choose, be subject to public regulation just as drugs or steroids are today. If I were “sticking my head in the sand,” I would not be arguing that we need to have a serious public debate on these issues, as I hope we are engaged in right now.





You can square those statements by recognizing that regardless of whether we develop these technologies through democratic means, the self-amplifying nature of the technologies and the game-theoretic dynamics of our global civilization make it very unlikely that they will be suppressed. Also, I didn’t assert that transhumanism is a democratic movement; I rebutted your claim that it is a libertarian movement. It is a diverse movement and will only get more so as it creeps ever further into the mainstream.





Thank you for your response.

I am crossing my fingers (and doing my small part) to help make transhumanism a more democratic movement in the sense that it should be open to dialogue with those who do not share its premises.

Above all, I would like us to move away from this false technological determinism where we are somehow fated to arrive at one destination or another. We make our choices and decide our future, at least for now.




