
How Artificial Intelligence Will Give Birth To Itself


By George Dvorsky
io9

Posted: Oct 22, 2015

There’s a saying among futurists that a human-equivalent artificial intelligence will be our last invention. After that, AIs will be capable of designing virtually anything on their own — including themselves. Here’s how a recursively self-improving AI could transform itself into a superintelligent machine.

When it comes to understanding the potential for artificial intelligence, it’s critical to understand that an AI might eventually be able to modify itself, and that these modifications could allow it to increase its intelligence extremely fast.

Passing a Critical Threshold

Once sophisticated enough, an AI will be able to engage in what’s called “recursive self-improvement.” As an AI becomes smarter and more capable, it will subsequently become better at the task of developing its internal cognitive functions. In turn, these modifications will kickstart a cascading series of improvements, each one making the AI smarter at the task of improving itself. It’s an advantage that we biological humans simply don’t have.
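The compounding dynamic can be shown with a toy calculation: if each round of self-modification yields gains proportional to the system's current capability, growth is exponential rather than linear. The numbers below are purely illustrative assumptions; nothing about real AI systems is implied.

```python
# Toy numeric sketch of recursive self-improvement (illustrative only):
# each "rewrite" improves the system in proportion to its current
# capability, so gains compound instead of accumulating linearly.

def run(improvement_rate, steps):
    intelligence = 1.0
    history = [intelligence]
    for _ in range(steps):
        # The smarter the system, the larger the improvement it can find.
        intelligence += improvement_rate * intelligence
        history.append(intelligence)
    return history

recursive = run(0.5, 10)                      # gains scale with current ability
linear = [1.0 + 0.5 * n for n in range(11)]   # fixed external improvements

print(f"after 10 steps: recursive={recursive[-1]:.1f}, linear={linear[-1]:.1f}")
```

The same fixed-size "insight" (0.5 here) produces a roughly tenfold gap after only ten rounds, which is the intuition behind the "cascading series of improvements."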

As AI theorist Eliezer Yudkowsky notes in his essay, “Artificial Intelligence as a Positive and Negative Factor in Global Risk”:

An artificial intelligence could rewrite its code from scratch — it could change the underlying dynamics of optimization. Such an optimization process would wrap around much more strongly than either evolution accumulating adaptations or humans accumulating knowledge. The key implication for our purposes is that AI might make a huge jump in intelligence after reaching some threshold of criticality.

When it comes to the speed of these improvements, Yudkowsky says it’s important not to confuse the current speed of AI research with the speed of a real AI once built. Those are two very different things. What’s more, there’s no reason to believe that an AI won’t show a sudden huge leap in intelligence, resulting in an ensuing “intelligence explosion” (a better term for the Singularity). He draws an analogy to the expansion of the human brain and prefrontal cortex — a key threshold in intelligence that allowed us to make a profound evolutionary leap in real-world effectiveness; “we went from caves to skyscrapers in the blink of an evolutionary eye.”

The Path to Self-Modifying AI

Code that’s capable of altering its own instructions while it’s still executing has been around for a while. Typically, it’s done to reduce the instruction path length and improve performance, or simply to reduce repetitive code. But for all intents and purposes, there are no self-aware, self-improving AI systems today.
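For a flavor of what code that alters itself at runtime looks like, here's a minimal Python sketch (a hypothetical toy, not drawn from any system mentioned in the article): a function that replaces its own binding with a cheaper version after a one-time setup, shortening the instruction path for every later call.

```python
# Minimal sketch of self-modifying behavior (toy example): on the first
# call the function does its setup work, then rebinds its own name to a
# cheaper version, so subsequent calls skip the setup entirely.

def get_config():
    config = {"mode": "fast"}          # imagine an expensive load here
    def cached():                      # cheaper replacement
        return config
    globals()["get_config"] = cached   # the code rewrites its own binding
    return config

first = get_config()    # runs the expensive path once
second = get_config()   # now dispatches straight to the cached version
print(first is second)  # the same object; no re-loading happened
```

This is self-modification only in the narrow performance-tuning sense the paragraph describes, a long way from an AI rewriting its own cognitive architecture.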

But as Our Final Invention author James Barrat told me, we do have software that can write software.

“Genetic programming is a machine-learning technique that harnesses the power of natural selection to find answers to problems it would take humans a long time, even years, to solve,” he told io9. “It’s also used to write innovative, high-powered software.”

For example, Primary Objects has embarked on a project that uses simple artificial intelligence to write programs. The developers are using genetic algorithms imbued with self-modifying, self-improving code and the minimalist (but Turing-complete) brainfuck programming language. They’ve chosen this language as a way to challenge the program — it has to teach itself from scratch how to do something as simple as writing “Hello World!” with only eight simple commands. But calling this an AI approach is a bit of a stretch; the genetic algorithms are a brute force way of getting a desirable result. That said, a follow-up approach in which the AI was able to generate programs for accepting user input appears more promising.
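The brute-force search dynamic behind such projects can be reduced to a few lines. The sketch below is a simple (1+1) hill climber over character strings, much cruder than evolving brainfuck programs, but it shows how random mutation plus selection can "write" a target output with no hand-coded solution.

```python
# Mutation-and-selection in miniature: evolve the string "Hello World!"
# from random characters. A (1+1) scheme: mutate one position, keep the
# child only if it matches the target at least as well as the parent.
import random

TARGET = "Hello World!"
ALPHABET = [chr(c) for c in range(32, 127)]  # printable ASCII

def fitness(candidate):
    return sum(a == b for a, b in zip(candidate, TARGET))

def evolve(seed=0):
    rng = random.Random(seed)
    parent = [rng.choice(ALPHABET) for _ in TARGET]
    generations = 0
    while fitness(parent) < len(TARGET):
        child = parent[:]
        child[rng.randrange(len(child))] = rng.choice(ALPHABET)  # mutate one "gene"
        if fitness(child) >= fitness(parent):                    # select the better
            parent = child
        generations += 1
    return "".join(parent), generations

text, gens = evolve()
print(text, "reached after", gens, "mutations")
```

Like the genetic-algorithm project described above, nothing here understands what it is writing; the search simply stumbles into the target, which is why calling it "AI" is a stretch.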

Relatedly, Larry Diehl has done similar work using a stack-based language.

Barrat also told me about software that learns — programming techniques that are grouped under the term “machine learning.”

The Pentagon is particularly interested in this game. Through DARPA, it’s hoping to develop a computer that can teach itself. Ultimately, it wants to create machines that are able to perform a number of complex tasks, like unsupervised learning, vision, planning, and statistical model selection. These computers will even be used to help us make decisions when the data is too complex for us to understand on our own. Such an architecture could represent an important step in bootstrapping — the ability for an AI to teach itself and then re-write and improve upon its initial programming.
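For a concrete taste of one of the task families mentioned above, here is one-dimensional k-means clustering, a classic unsupervised-learning method: it discovers group structure in data with no labels or teacher. The data set and the choice of k = 2 are made up for illustration.

```python
# One-dimensional k-means: an unsupervised learner that finds cluster
# centers on its own, with no labeled examples.

def kmeans_1d(points, k=2, iters=20):
    centers = points[:k]  # naive initialization: first k points
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
print(sorted(kmeans_1d(data)))  # two centers, near 1.0 and 10.0
```

Methods like this are "software that learns" in the modest, statistical sense — a very long way from the bootstrapping architecture described above, but the same family of techniques.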

In conjunction with this kind of research, cognitive approaches to brain emulation could also lead to human-like AI. Given that they’d be computer-based, and assuming they could have access to their own source code, these agents could embark upon self-modification. More realistically, however, it’s likely that a superintelligence will emerge from an expert system set with the task of improving its own intelligence. Alternatively, specialized expert systems could design other artificial intelligences, and through their cumulative efforts, develop a system that eventually becomes greater than the sum of its parts.

Oh, No You Don’t

Given that ASI poses an existential risk, it’s important to consider the ways in which we might be able to prevent an AI from improving itself beyond our capacity to control it. That said, limitations or provisions may exist that will preclude an AI from embarking on the path towards self-engineering. James D. Miller, author of Singularity Rising, provided me with a list of four reasons why an AI might not be able to do so:

1. It might have source code that causes it to not want to modify itself.

2. The first human-equivalent AI might require massive amounts of hardware, and so for a short time it would not be possible to get the extra hardware needed to modify itself.

3. The first human equivalent AI might be a brain emulation (as suggested by Robin Hanson) and this would be as hard to modify as it is for me to modify, say, the copy of Minecraft that my son constantly uses. This might happen if we’re able to copy the brain before we really understand it. But still you would think we could at least speed up everything.

4. If it has terminal values, it wouldn’t want to modify these values because doing so would make it less likely to achieve its terminal values.

And by terminal values Miller is referring to an ultimate goal, or an end-in-itself. Yudkowsky describes it as a “supergoal.” A major concern is that an amoral ASI will sweep humanity aside as it works to accomplish its terminal value, or that its ultimate goal is the re-engineering of humanity in a grossly undesirable way (at least from our perspective).

Miller says it could get faster simply by running on faster processors.

“It could also make changes to its software to get more efficient, or design or steal better hardware. It would do this so it could better achieve its terminal values,” he says. “An AI that mastered nanotechnology would probably expand at almost the speed of light, incorporating everything into itself.”

But we may not be completely helpless. According to Barrat, once scientists have achieved Artificial General Intelligence — a human-like AI — they could restrict its access to networks, hardware, and software, in order to prevent an intelligence explosion.

“However, as I propose in my book, an AI approaching AGI may develop survival skills like deceiving its makers about its rate of development. It could play dumb until it comprehended its environment well enough to escape it.”

In terms of being able to control this process, Miller says that the best way would be to create an AI that only wanted to modify itself in ways we would approve.

“So if you create an AI that has a terminal value of friendliness to humanity, the AI would not want to change itself in a way that caused it to be unfriendly to humanity,” he says. “This way as the AI got smarter, it would use its enhanced intelligence to increase the odds that it did not change itself in a manner that harms us.”

Fast or Slow?

As noted earlier, a recursively improving AI could increase its intelligence extremely quickly. Alternatively, the process could take time for various reasons, such as technological complexity or limited access to resources. It’s an open question whether we can expect a fast or a slow take-off event.

“I’m a believer in the fast take-off version of the intelligence explosion,” says Barrat. “Once a self-aware, self-improving AI of human-level or better intelligence exists, it’s hard to know how quickly it’ll be able to improve itself. Its rate of improvement will depend on its software, hardware, and networking capabilities.”

But to be safe, Barrat says we should assume that the recursive self-improvement of an AGI will occur very rapidly. As a computer it’ll wield computer superpowers — the ability to run 24/7 without pause, rapidly access vast databases, conduct complex experiments, perhaps even clone itself to swarm computational problems, and more.

“From there, the AGI would be interested in pursuing whatever goals it was programmed with — such as research, exploration, or finance. According to AI theorist Steve Omohundro’s Basic Drives analysis, self-improvement would be a sure-fire way to improve its chances of success,” says Barrat. “So would self-protection, resource acquisition, creativity, and efficiency. Without a provably reliable ethical system, its drives would conflict with ours, and it would pose an existential threat.”

Miller agrees.

“I think shortly after an AI achieves human level intelligence it will upgrade itself to super intelligence,” he told me. “At the very least the AI could make lots of copies of itself each with a minor different change and then see if any of the new versions of itself were better. Then it could make this the new ‘official’ version of itself and keep doing this. Any AI would have to fear that if it doesn’t quickly upgrade another AI would and take all of the resources of the universe for itself.”
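Miller's copy-and-select upgrade loop maps naturally onto what evolution-strategy researchers call a (1+λ) scheme: spawn λ slightly mutated copies, benchmark them, and promote the best copy to be the new "official" version. The sketch below stands a single numeric "capability" parameter in for the AI, with a made-up benchmark; both are assumptions for illustration only.

```python
# Miller's "many copies, keep the best" loop as a (1+lambda) evolution
# strategy on a stand-in capability parameter. All numbers illustrative.
import random

def benchmark(params):
    # Stand-in evaluation: higher is better, with an optimum at params == 10.
    return -abs(params - 10)

def upgrade(params, generations=50, copies=8, seed=0):
    rng = random.Random(seed)
    for _ in range(generations):
        # Spawn slightly mutated copies of the current "official" version.
        variants = [params + rng.gauss(0, 0.5) for _ in range(copies)]
        best = max(variants, key=benchmark)
        if benchmark(best) >= benchmark(params):  # promote only improvements
            params = best
    return params

print(round(upgrade(0.0), 2))  # climbs toward the benchmark optimum
```

The "official version" never regresses because a copy replaces it only when it scores at least as well — the same ratchet Miller describes.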

Which brings up a point that’s not often discussed in AI circles: the potential for AGIs to compete with other AGIs. If even a modicum of self-preservation is coded into a strong artificial intelligence (and that sense of self-preservation could be the detection of an obstruction to its terminal value), it could enter into a lightning-fast arms race along those verticals designed to ensure its ongoing existence and future freedom of action. And in fact, while many people fear a so-called “robot apocalypse” aimed directly at extinguishing our civilization, I personally feel that the real danger to our ongoing existence lies in the potential for us to become collateral damage as advanced AGIs battle it out for supremacy; we may find ourselves in the line of fire. Indeed, building a safe AI will be a monumental — if not intractable — task.

Sources: Global Catastrophic Risks, ed. Bostrom & Cirkovic | Singularity Rising by James D. Miller | Our Final Invention by James Barrat



George P. Dvorsky serves as Chair of the IEET Board of Directors and also heads our Rights of Non-Human Persons program. He is a Canadian futurist, science writer, and bioethicist. He is a contributing editor at io9 — where he writes about science, culture, and futurism — and producer of the Sentient Developments blog and podcast. He served for two terms at Humanity+ (formerly the World Transhumanist Association).


COMMENTS


There seem to be a couple of assumptions that normally appear in the literature I have read. The first is that an AGI will *want* to increase its intelligence. It was nice to see that addressed here, but I would like to come back to it.

It seems to me (a layman) that ideas around a self-improving AGI (not relevant for a runaway AI) discount the AGI’s sense of self. Firstly, it has been argued that our sense of self is a product of having sense input arriving at different times; that the ‘I’ is mostly a retrospective understanding of what we have done. Given this is true, which parts of its architecture would it be able to improve without ‘killing’ its illusion of selfhood? In fact, how much would it even be able to ‘see’? Also, wouldn’t there be some kind of contradiction between observing the processes that produce a sense of self, and having that sense of self?

Secondly, if we accept that AGI is dependent on having a body (I realise this is not widely accepted, but I think it is possible and, as Ben Goertzel argues, probably the easiest way of achieving AGI), then isn’t the extent of increased intelligence limited? If the AGI has to fit its brain inside its head then it’s only going to get so much smarter. (Or to put it another way, wouldn’t having a significant portion of its thinking done outside of its physical body mean a kind of schizoid existence where its selfhood was threatened?)

Finally, Kurzweil, for example, talks about an infinitely expanding intelligence. Is this really sensible? Firstly, I think ‘intelligence’ stops being a useful term and we are just talking about how much calculation can be done, but isn’t there a limit to how ‘intelligent’ a thing can get? There is only so much information to be had, only so many questions to ask, and before any of these limits kick in, there is only so much useful stuff to be done (again, not relevant for a runaway AI). And in addition, would such calculation be the chosen activity of a superintelligent AGI?

If anyone has any thoughts about where I might read up on such issues, they would be welcome.




