Institute for Ethics and Emerging Technologies


The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States.

Prof. Hawking, the AIs will BE US


By Giulio Prisco
Ethical Technology

Posted: May 8, 2014

Perhaps, as Prof. Stephen Hawking thinks, it may be difficult to “control” Artificial Intelligence (AI) in the long term. But perhaps we shouldn’t “control” the long-term development of AI, because that would be like preventing a child from becoming an adult, and that child is you.

“Success in creating [Artificial Intelligence] AI would be the biggest event in human history,” say Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek, in an article published in The Independent. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.”

“Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains,” continue the scientists. “[A]s Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a ‘singularity‘ and Johnny Depp’s movie character calls ‘transcendence‘.”

I totally agree. What Good said in 1965 is:

“It is more probable than not [that] an ultraintelligent machine will be built and that it will be the last invention that man need make, since it will lead to an ‘intelligence explosion.’ This will transform society in an unimaginable way.” (Irving Good, “Speculations Concerning the First Ultraintelligent Machine,” Advances in Computers, 1965)

The scientists emphasize the last part of the quote – AI will transform society in an unimaginable way: “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand,” and conclude that “all of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.”

I agree so far. But Hawking and the other scientists worry that “the long-term impact [of AI] depends on whether it can be controlled at all.” I wish to respectfully suggest that perhaps the long-term development of AI should not be controlled at all, because “controlling” it would be like preventing a child from becoming an adult, and that child is you.

A real AI, one who thinks and feels like a person, perhaps much smarter than you and me, is a person. Controlling persons is bad.

OK, sometimes controlling some persons is the only way to protect many other persons – we don’t let condemned serial killers walk the streets. But control – forcing people to comply and obey, forcing others to do what they don’t want to do, reducing the freedom of peaceful persons who don’t wish to harm others – is BAD, no caveats. It can only be accepted as a necessary evil when it’s the only way to protect others.

Is this the case with real, strong AIs (human or more-than-human level minds)? Is controlling them the only way to protect us? But wait a sec, who is “them,” and who is “us”?

In Human Purpose and Transhuman Potential: A Cosmic Vision for Our Future Evolution, Ted Chu argues that we need a “Cosmic View” – a new, heroic cosmic faith for the post-human era. Chu believes that we should create a new wave of “Cosmic Beings,” artificial intelligences and synthetic life forms, and pass the baton of cosmic evolution to them. The Cosmic Beings will move to the stars and ignite the universe with hyper-intelligent life. Creating our successors isn’t betraying humanity and nature but, on the contrary, a necessary continuation of our evolutionary journey and an act of deep respect, to the point of “extreme worship,” for humanity, evolution, and nature.

Post-biological Cosmic Beings, our awesome AI mind children, will represent the next self-directed phase of our cosmic evolution. Stephen Hawking himself understands that the human species has entered a new stage of evolution. He thinks that we will probably reach out to the stars and colonize other planets. But this will be done, he believes, with intelligent machines based on mechanical and electronic components, rather than macromolecules, which could eventually replace DNA-based life, just as DNA may have replaced an earlier form of life.

Paul Davies, a British-born theoretical physicist, cosmologist, astrobiologist and Director of the Beyond Center for Fundamental Concepts in Science and Co-Director of the Cosmology Initiative at Arizona State University, says in his book The Eerie Silence that any aliens exploring the universe will be AI-empowered machines. Not only are machines better able to endure extended exposure to the conditions of space, but they have the potential to develop intelligence far beyond the capacity of the human brain.

“I think it very likely – in fact inevitable – that biological intelligence is only a transitory phenomenon, a fleeting phase in the evolution of the universe. If we ever encounter extraterrestrial intelligence, I believe it is overwhelmingly likely to be post-biological in nature.” (Paul Davies)

If post-biological life is the next phase of our evolution, then there is no “us” humans vs. “them” AI – there is only us. We owe our AI mind children the same moral regard that we accord to organic humans, and therefore we will have to let them develop their full potential. Not that we will have a choice – as soon as the AIs become smarter than us, they will be able to avoid our “parental control,” just like our children do when they become smarter than us.

Some fear that AIs will take over and exterminate old-style humans 1.0. Hugo de Garis thinks that Artilects, super-human AIs, “once they become hugely superior to human beings, may begin to see us as grossly inferior pests and decide to wipe us out.” (see The first Terran shots against the Cosmists).

But I am persuaded that the AIs will feel no hostility toward old-style humans. The universe is a big place, and they will have other things to do. I am sure that the AIs will be perfectly happy to leave the solar system to old-style humans and move to the stars.

I imagine a co-evolution of humanity and technology, with humans enhanced by synthetic biology and artificial intelligence, and artificial life powered by mind grafts from human uploads, blending more and more until it will be impossible – and pointless – to tell which is which. Just as children retain their fundamental identity after growing up and becoming adults, we will grow into post-biological life. We don’t need to fear an AI takeover, because the AIs will be ourselves.

Images:
http://www.britannica.com/EBchecked/media/110521/Engraving-of-a-slave-auction-at-Richmond-Va
http://www.ilo.org/global/lang--en/index.htm


Giulio Prisco is a writer, technology expert, futurist and transhumanist. A former manager in European science and technology centers, he writes and speaks on a wide range of topics, including science, information technology, emerging technologies, virtual worlds, space exploration and future studies. He serves as President of the Italian Transhumanist Association.


COMMENTS


Great article, Giulio! And I agree with you. Wonderful vision of the future.





Thanks Hank! Growing up and becoming an adult is difficult, and often painful, but everyone must become an adult eventually.

We are beginning an important growing up phase, and merging with AI will be part of that.





Hi Giulio: interesting perspective, thank you. As a technologist and humanist, I have a considerably different take on this subject.

First off, I don’t think Prof. Hawking’s recent commentaries are actually about control, but about what a convergence really looks like, and how the moral implications play out. If the Vinge/Kurzweil-driven idea of Singularity is to ‘biotransform’, the root questions seem to be ‘from what?’ and ‘to what?’ Granted, this world (and the universe) is full of unknown unknowns, but nevertheless, the human side of the equation must have a better grasp on this reality from the perspective of consciousness.

Second, Singularity suggests an augmentation of consciousness along the lines of what you describe. The core questions I have are: Why augment? Is this to say that our problems can’t be solved and our collective conscience can’t evolve any other way, or that it is somehow stymied?

You stated: “We owe our AI mind children the same moral regard that we accord to organic humans.” Really? Why? How so? Do they have souls? Do they develop emotive cognition or sentience in the same way? Better?

The transhumanist approach has many intriguing positions on a convergence of biology and technology, but it does not meaningfully address the core of the human condition or its respective challenges. A ‘transcendence’ is not really about assisting humans through artificial means, but through empathy and a relatedness that happens at a very fundamental, spiritual level. I have yet to see any AI do that on its own, and to assume otherwise seems precarious…





@Giulio:

“I wish to respectfully suggest that perhaps the long-term development of AI should not be controlled at all, because “controlling” it would be like preventing a child from becoming an adult, and that child is you.”

“A real AI, one who thinks and feels like a person, perhaps much smarter than you and I, is a person. Controlling persons is bad.”

I think we have to be conscious of a couple of things. The first is where any truly sentient AI is likely to first emerge. My bet is that it would be created either as a tool of one of the world’s more advanced militaries or by a corporation. The more realistic and imminent the emergence of such an AI seems, and I am not convinced we are anywhere near that point, the more important the questions become of WHERE that intelligence emerges, what powers it has, to what purpose it has been designed, etc. Controlling persons is not always bad. It’s something we need to do when a person is dangerous; sometimes a person is deemed so dangerous that we go so far as to “switch” him “off.”

I am all for the eventual development of sentient AI. To create such beings would constitute a sort of miracle, and I agree with you Giulio that it would open up a brand new and much more expansive environment for “life”. Yet, we need to be patient and careful. We have millennia in front of us to make such a move. If you asked me about Fermi’s paradox, on many days my guess would be that biological life elsewhere moved too fast in such transitions, and the AI imploded as the biological and social forms that gave rise to it collapsed.

When I look for biological analogies to the future of machines, the one I find most informative is the “Oxygen Catastrophe” 2.3 billion years ago. Our machines might be like the cyanobacteria of that era, pumping oxygen – which was then poisonous to the rest of life – into the atmosphere until almost the whole biosphere collapsed.

Perhaps the end of biological life would signal the beginning of a new era of complexity in the evolution of life, as the Oxygen Catastrophe did, but with no other examples of intelligence in the universe so far we cannot be sure. If we are not careful, we risk the whole of our evolutionary history, including ourselves.

“But I am persuaded that the AIs will feel no hostility toward old-style humans. The universe is a big place, and they will have other things to do. I am sure that the AIs will be perfectly happy to leave the solar system to old-style humans and move to the stars.”

The problem is the stars are VERY far away even for super-intelligent AI. If they are at all like biological life, including ourselves, they would strip what is closest to them for all the energy and materials they could obtain before setting out for acquisitions much farther off.

All these risks make it clear to me that we will have to control our intelligent creations, at least for a time, as any good parent aims to guide and educate their children so they respect and cherish the larger world.





@Rick re “Why augment?”

Because that’s what we do. We want more, and that’s what got us here from the caves, and will get us to the stars.

re “we will have to control our intelligent creations, at least for a time, as any good parent aims to guide and educate their children so they respect and cherish the larger world.”

Most parents try to do that, indeed, but only those who respect the autonomy and self-ownership of their children succeed. Otherwise the children, once they reject the parents’ authority (as all smart people do at some point), will also reject all their principles and education.

All smart parents know that the time will come when their children are autonomous and independent, and plan for that.





@Giulio:

re “Because that’s what we do. We want more, and that’s what got us here from the caves, and will get us to the stars.”

I completely agree. Although all past augmentation has been a way to leverage our biological self, with the ultimate intention being to help and sustain that biological self. What we are entering is something new, whose outcome we can’t be certain of. When you walk on a lake right after it has frozen, you test the ice to make sure the conditions are right. The further you move from a point of known safety, the slower you try to move.

re “All smart parents know that the time will come when their children are autonomous and independent, and plan for that.”

I agree with this as well. But these AIs are not even toddlers yet. When children are toddlers, you lock up poisons and don’t keep knives lying around in the open. There are also some adults who never grow up and are always dangerous; society has means of keeping them from being dangerous. In both cases it is wise to have some instruments not so much of control as of protection.




