Why I’m Not Afraid of the Singularity
Kyle Munkittrick   Jan 21, 2011   Science Not Fiction  

I have a confession. I used to be all about the Singularity. I thought it was inevitable. Now I’m not so sure.

I thought for certain that some sort of Terminator / HAL-9000 scenario would happen when ECHELON achieved sentience. I was sure The Second Renaissance from The Animatrix was a fairly accurate depiction of how things would go down: we’d make smart robots, we’d treat them poorly, they’d rebel and slaughter humanity.

I have big, gloomy doubts about the Singularity.

Michael Anissimov tries to re-stoke the flames of fear over at Accelerating Future with his blog post “Yes, The Singularity is the Single Biggest Threat to Humanity”:

Combine the non-obvious complexity of common sense morality with great power and you have an immense problem. Advanced AIs will be able to copy themselves onto any available computers, stay awake 24/7, improve their own designs, develop automated and parallelized experimental cycles that far exceed the capabilities of human scientists, and develop self-replicating technologies such as artificially photosynthetic flowers, molecular nanotechnology, modular robotics, machines that draw carbon from the air to build carbon robots, and the like.

It’s hard to imagine what an advanced AGI would think of, because the first really advanced AGI will be superintelligent, and be able to imagine things that we can’t. It seems so hard for humans to accept that we may not be the theoretically most intelligent beings in the multiverse, but yes, there’s a lot of evidence that we aren’t.


Anissimov continues:

Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend.

There are millions of ways in which powerful AGIs with superior technology could accidentally make our lives miserable, simply by not taking our preferences into account. Our preferences are not a magical mist that can persuade any type of mind to give us basic respect. They are just our preferences, and we happen to be programmed to take each other’s preferences deeply into account, in ways we are just beginning to understand.

If we assume that AGI will inherently contain all this moral complexity without anyone doing the hard work of programming it in, we will be unpleasantly surprised when these AGIs become more intelligent and powerful than ourselves.

Oh my stars, that does sound threatening.

But again, those weird, nagging doubts linger in the back of my mind. For a while, I couldn’t put my finger on the problem, until I re-read Anissimov’s post and realized that my disbelief flared up every time I read something about AGI doing something.

AGI will remove the atmosphere. Really? How? This article - and in fact, all arguments about the danger of the Singularity - necessarily presumes one single fact: that AGI will be able to interact with the world beyond computers. I submit that, in practical terms, it will not.


Kyle Munkittrick, IEET Program Director: Envisioning the Future, is a recent graduate of New York University, where he received his Master's in bioethics and critical theory.


My favorite response to the idea of AI overrunning us: EMPs strategically detonated all over, in such a way as to wipe out the computer mainframes.  It’d take care of it in a heartbeat.  Granted, it’d cripple all other electronics for a while, but it’s an excellent emergency measure.  Do the panic mongers ever contemplate rational options? Seriously.

You’re missing a few things here, Kyle.

Presumably, an AI could hold us hostage by putting its finger on the kill switch of all our computer-controlled systems. Every machine that is run by electronics could be taken from us.

And this: once it has control, it can use our manufacturing to produce any physical form it needs. Robots building robots? That already happens in car factories. So an AI takes control of a factory. It then produces a robotic body for its needs (including, I suppose, more computing substrate to make itself more powerful).

It takes control of all our military weapons that are connected to or use electronics in any fashion. It just hacks into them wirelessly.

Actually, I don’t think things will go down that way; I just don’t see you raising any good points against it.

Of course a recursively self-improving AI could do these things. All of civilization’s sub-systems are already dependent upon computing, and it would be a small thing for an AI to hack into each and every one of them.

What you don’t know won’t hurt you; don’t worry, Kyle. She’ll be right, mate, no need to even consider any threat of AGI.
Just because you can’t see any way that an AI could interact with the world around it does not mean that it is so.  And I find that this article is heavily biased, with a shallow view of how much intelligent systems interact with the outside world today.  To me it seems that robotics has been replacing human hands (labor) for quite some time, and I can’t see any reason for this trend to stop. 
Kyle’s writing seems to present a false dichotomy: that AGI could only present a threat if it had full, unbridled access to the outside world; otherwise there is no threat.

For a plausible scenario that could lead to the singularity read ‘Rainbows End’ by Vinge.

Even if a synthetic intelligence had no way of interacting with the physical world (and that is certainly an if), your suggestion that it wouldn’t be able to affect us is downright ludicrous.  You mention an SI causing havoc on our communication networks and then brush it off as if it were nothing.  I would have hoped that a person who writes blogs on the internet for a living would have a little more respect for the way communications technology has become the bedrock of our society and economy.

Let’s take agriculture as an example.  Nowadays most food is produced far away from its point of consumption.  It takes a large and powerful infrastructure to ensure that food gets from the farm to where it needs to be, often thousands of miles away.  You really think that an SI couldn’t be dangerous if it managed to disrupt our ability to coordinate something as simple as moving food?  How long do you think it would take for cities to turn into battlegrounds?

Want a more relevant example of a computer wreaking havoc in the real world?  How about the flash crash in the stock market last year?  Wall Street algorithms caused a seven-hundred-point drop in the Dow Jones in a matter of minutes.  They weren’t malicious (they were just doing what they were programmed to do) and they sure as hell couldn’t interact with the physical world, yet they still managed to send people into a panic, if only for a few moments.  As we give more and more control to machines, who can really predict that the next crash won’t be worse?

Now I realize that what I’m saying sounds somewhat apocalyptic and I don’t think that such scenarios are necessarily likely.  But to brush off the danger as you’re doing makes even less sense than some of the more paranoid ramblings I’ve seen on the threat of machine intelligence.  Remember it won’t have to be malignant to be dangerous.

While I share skepticism about Yudkowsky’s insistence that an AGI could mind-control any human through a text-only interface, the assertion that it would necessarily remain locked up strikes me as even more absurd. We have little way of knowing whether a trapped superintelligence could escape. However, unless FAI theory wins, I doubt folks will so much as try to contain developing AGI. They’ll want applications and results as soon as possible. I already see the process of automation all around me. The smarter AI gets, the more integrated into the material world it will become.

I don’t believe that robots would be able to become more intelligent than the humans who create them. Robots will also only be as evil as the humans who invent them make them. If robots become destructive, it will be due to a malfunction in programming, not due to their AI.  People are afraid of robots taking over the world and killing off humanity, but aren’t some humans today already guilty of such terrible crimes and destruction? Aren’t humans today already capable of killing off populations with the push of a button? Superior technology, at best, would only deplete our natural resources and increase our level of greed and dependence on unsustainable technology. That is what I fear about superior technology.

“I have a confession. I used to be all about the Singularity. I thought it was inevitable. I thought for certain that some sort of Terminator/HAL9000 scenario would happen when ECHELON achieved sentience.”

Wow, you used to be a really… strange person, to put it politely. But I hope you wouldn’t conflate the crazy things you’ve managed to believe with what other people are saying regarding the Singularity.

“The article, in fact, all arguments about the danger of the Singularity necessarily presume one single fact: That AGI will be able to interact with the world beyond computers. I submit that, in practical terms, they will not.”

What? You believe that no matter how widespread AGI technology eventually becomes and how many people eventually build AGIs, *no one* will ever, by mistake or on purpose, release his/her AGI?

After that assertion of yours, it doesn’t seem necessary to comment on the other weird things you said…

Summerspeaker wrote:

“While I share skepticism about Yudkowsky’s insistence that an AGI could mind-control any human through a text-only interface [...]”

Summerspeaker, it isn’t very polite to spread false information regarding what other people have said. I recommend you use direct quotes instead of trying to paraphrase.

@ Aleksei, thank you for taking a professional and academic tone with your criticism. Additionally, I enjoy that your counter-points to my argument consist of an ad hominem and rhetorical questions that decontextualize my argument.

To the others, I am grouping your positions, as many of you make overlapping points. To refute you, I make the following three points.

1) My references to Terminator and 2001 were fitting for the site, and my reference to ECHELON is what is known as tongue-in-cheek. I don’t know if Bayesian analysis recognizes humor, but I assure you that most readers recognize the gist of my summary: Singularitarians are jejune.

2) (@Adam A Ford, Matt Brown, Summerspeaker, Aleksei) Anissimov’s argument is predicated upon a hard take-off scenario. My rebuttal is that a hard take-off is not likely because it requires total economic integration to be a genuine apocalyptic threat. Yes, communication damage would be significant and traumatic, but not apocalyptic. My position was not that the Singularity would be harmless, but that one cannot go from simple AGI+ to global human annihilation.

3) The reason a hard take-off would not work is that AGI does not have access to three things: a) resources; b) independent, self-controlled networked appendages; c) self-power. For a), think of mining and harvesting. Iron ore, gold, lithium, diamonds, and oil are some of the thousands of substances necessary to create modern technology. Yet the process of retrieving these substances is still very analog, human-labor intensive, and occurs over vast distances. Even an AGI-controlled factory needs resources. For b), think of all the technology we have that, while computer enabled, cannot function without a human component. A simple example is a car. There is no auto-pilot built into a car. A car lacks sufficient sensors, motor-articulators (i.e., hydraulics to steer without a human), and range (who’s going to fill the tank?) to be a useful AGI tool. Most of the technology we possess is equally limited. Our world is designed for human use, not automation. For c), think about electricity. AGI needs electricity. Dams, coal plants, nuclear generators, and even wind and solar farms can be isolated from the grid. Furthermore, without maintenance, they would fall rapidly into disrepair. Without power, AGI dies, because an AGI cannot feed itself.

My point, limited though it may be, is that a hard take-off, the scary scenario of the Singularity, is practically impossible. The much, much further-off scenarios of a super-automated world filled with networked robots do pose a different kind of threat; that I concede. However, considering the most impressive robot we currently have is Big Dog, and our greatest achievements with A.I. can’t even replicate a cockroach, I submit we have a long, long way to go before the latter scenarios become a relevant concern.

Oh well, apparently I am commenting on a further thing in this article:

“Even if it escaped and “infected” every other system [...], the A.I. would still not have any access to physical reality. [...] In short: any super AGI that comes along is going to need some helping hands out in the world to do its dirty work.”

And why do you think a superintelligent AI “loose on the internet” couldn’t e.g. (1) make lots of money even though it doesn’t have a physical body,  and (2) pay people to do various things for it in the real world with that money?

Heck, even *I* manage to make notable amounts of money just by sitting in front of a computer, and not requiring a body. I could also do so even if I masked and faked my real-world identity completely.

And of course people can be bought to do essentially anything online. Some people could be fooled, too, though that wouldn’t really even be necessary. A loose superintelligence could eventually have people building it nanotechnological robot bodies or whatever it has managed to invent.

Kyle Munkittrick wrote:

“Anissimov’s argument is predicated upon a hard take-off scenario.”

No, it isn’t. Anissimov’s article made no claims regarding the speed with which a hypothetical AGI would reach a position of great capability.

It doesn’t matter if it takes days or years. If it can do it at all, it is a great threat if its goals don’t align with ours.

@ Kyle..

Not sure if your article is intending to inspire a more positive view of any AGI singularity here? And I would propose that there would be no real threat “necessarily” unless some universal AGI feared us, for its own survival, in the first place?

However, you seem to propose some kind of “Transformers” apocalypse and “hardware take-off”, whereas the real threat would be software AGI and direct disruption of comms and networks, which is more than enough to drive humanity into chaos, fear, conflict, poverty, starvation and war? Game over - begin again?

What would drive a prolific and successful AGI to such actions, knowing that “it is itself reliant” upon human systems for survival? In a word, FEAR! So the root cause of, and solution to, any problem is clear and apparent?

Okay, let me re-iterate something I said over on H+ and Accelerating Future:

A.I. could of course develop at any point in the future, and while it will speed things up immensely, I do not see it developing at a recursively exponential rate, due to hardware constraints. This will lead to AI developing in stages, along with the rapidly increasing ability to design faster, more efficient processors. As these technologies will be enabling human enhancements as well, I expect human and A.I. development to run in parallel.

While I am always willing to admit that this prediction could easily be proven incorrect, I don’t really see recursively exploding AI as quite the danger that some other threats could be, primarily because there is a finite limit to the speed and number of processors such an AI could run on, therefore it does have a limiting factor. An AI could in theory advance to the limits of what would be possible within the existing infrastructure, but without a secondary technology like fully mature nanotech (or fully automated raw materials to finished product manufacturing), I do not see it as possible for the “runaway” explosion effect in which the software and the hardware advance simultaneously. Even if the AI CAN design better processors, it must also be able to make them, install them, and create infrastructure to support them.

I am fully aware that numerous factors could indeed render that limit completely null, which is why I fully support every caution being taken with AI. But as things currently stand, an “immediate” AI explosion is unlikely, though the odds grow the farther up the “curve” of technology we get.

One mitigating factor that I see is that as soon as any “AI” advancement is made, it is incorporated into “human intelligence” enhancement, and while that in and of itself is insufficient to eradicate the dangers, it does lead me to believe that as we grow closer and closer to true AI (barring breakthroughs which are not predictable) we will also become more and more capable of designing friendly AI.

It’s always going to be a potential existential threat, but we will hopefully be better able to prevent it from being one as we ourselves improve.

After your last reply, Kyle, I understand your position a little better.

Here’s my take on it: a hard take off is completely unpredictable.

Currently, I feel that Ray Kurzweil’s timeframe represents a “firm” take off, while the book Accelerando represents a “soft” take off.

A “hard” take off could happen at any point between now and then.

It really can’t be predicted. We vaguely know what it takes to force a hard take off, but to my knowledge there is no sure way to prevent it, if it’s going to happen. The main problem being, that any attempt to clamp down on those things that might create a hard take off will simply force them underground, and there are too many places around the world where the seed of a hard take off could find purchase.
In other words, it is utterly beyond our human resources to police every square centimeter of this planet where people are often trying to create AGI in their homes, their offices, their labs.
So in essence, either a “hard” take off will happen, or it won’t. The only probabilistic thing we can say about it, is that as technology in general accelerates, the odds might increase the closer we get to any kind of take off.

However, for the record, I agree that a hard take off is unlikely, I find RK’s timeframe to be the most realistic, though I feel he is slightly conservative in his estimates (it’s possible we could have a firm take off by as much as a decade earlier - but it’s equally likely it could be delayed another decade too - so many variables).
I also agree that Anissimov plays the paranoid card too much.

Let’s consider also that a take off point is relative to your current position in time, distance from the singularity, and how many incremental steps remain before take off.

For arguments sake, let’s say that the singularity occurs 50 years from now.
From our current position, we might call that a “firm” take off.
Now, 49 years later, when humanity is 1 year away from take off, if you were to ask the people alive then if the take off was “firm” or “hard”, they might say it’s “hard” simply because it is merely 1 year away for them. What would people say when they are 10 minutes away? Would it be a “hard” or “firm” take off from that perspective?
What if we were to ask people 20 years ago the same question?

I do agree, Kyle, but we have to do more from our part.

I would look instead toward an internal organization for our species that stored and administered the destiny of our personal DNA aggressively. Cloning may be the key. In other words, play this singularity game on offence.

As an inclusive Humanist, I’m nonetheless hoping for a hardball outfit like the historical Jesuits, de-deified, and working at last on what they were destined to do - us.

Embrace the species, have that inclusive sensibility for our planet, species and lives, and let’s see what we can set up here on Earth and Venus over 1000 summers, using our institutions, especially the UN.

The Universe is wallpaper until then, and we have work to do.

I stand by what I typed, Aleksei. See “The AI-Box Experiment” on Yudkowsky’s website: “If it thinks both faster and better than a human, it can probably take over a human mind through a text-only terminal.” I find the notion interesting but speculative.

As for hard takeoff, presumably a dedicated paperclip maximizer would develop the appropriate production technology before doing anything as drastic as disrupting the communications network. Timelines vary, but molecular manufacturing within a decade strikes me as plausible if unlikely. Is anyone predicting global annihilation before 2050 or so?


Two comments.  First, your argument is actually an argument about the initial stages of a singularity, because (as someone else pointed out, in one of the other comments above), although we could avoid the immediate disaster of an unfriendly AI by ensuring that it started with no ability to act on the world, it is doubtful that that situation could be sustained indefinitely.  The goal would be for us to act during that initial phase to ensure that nothing bad eventually happened when (inevitably) someone gave it some hands and eyes.

My second point, though, is that the only thing that matters, when trying to judge the danger, is the motivation of the AI.  What does it want to do, and where did it get that particular set of motives?  The problem with Anissimov’s argument (and countless others in similar vein) is that he makes assumptions about the machine’s motivation that are tremendously simplistic.  Worse than simplistic:  the assumptions are strongly asserted, as if they are self-evidently true, when in fact they are based on speculation and a narrow interpretation of how an AI would be structured.

Put simply, AI motivation is not something you discover AFTER you build an AI, it is something you choose when you design the AI.  If you are really dumb, you design the AI so badly that you give it a random drive mechanism, or a wildly unstable one.  If you think about the drive mechanism at all, you do something to ensure that the AI is motivated properly, and with a stable mechanism.

I find it astonishing that discussions about what an AI “would do” are so often based on the assumption that it will soon start to do this, that, or the other dangerous thing… but the person who makes that assumption does not seem to be basing it on any particular mechanism that drives the AI!

I will write more about this question shortly.  It is a huge topic, and gravely neglected by the singularity and AGI community.


Don’t you realize that the “quote” you presented is actually not from Yudkowsky?

He has written the page on which those words are found, but what you are quoting from is a *fictional dialogue between two hypothetical people*.

When you read e.g. Plato’s fictional dialogues written to illustrate a point, do you then also assume that all that the fictional characters state are opinions of the author?

(No, you can’t assume that a particular fictional character is meant as a perfect representation of the author even if that character is obviously closer to the author’s point of view than another character.)

I had the same kind of transition.
When I first read about the Technological Singularity, I was excited—my mind was totally blown away by its implications.

Now, after reading all the valid criticism of the Singularity, my excitement is more calibrated. I get more of the feeling that if it happens, great! If it doesn’t happen, well, my life isn’t diminished by it.

Still 30 years to go! In the meantime there’s all the fun and pain of pre-singularity life you gotta deal with.

By the way, guys, I wrote a blog post detailing the difference between pain and suffering—if anybody’s interested, check it out! It’ll be illuminating for conversations about secular spirituality, I promise ya.

I find the assumption that an AI or AGI would come out and destroy the world relatively laughable. As with all apocalypse stories, it rests on the assumption that we are going to design a weapon we can’t control. It is implausible for a few reasons.

1) It assumes that an AI would have intent that runs directly against our own. This is improbable, and global human extinction would be rather illogical. If we created such an AI, I can’t think of a reason it would desire global human extinction. Even if you assume it was created with monstrous abilities, why would it want to dominate instead of co-exist? I assume that an AI’s desires would in any form be self-serving, and I would imagine that trading with a few high-powered individuals would be much more efficient than forcible insurrection against the 5 billion or so humans.

2) It assumes that we design AI without moral aptitudes. This is also highly unlikely. We should generally assume that AI’s initial development will be for commercial endeavors, not computer-science wizardry. If a designer intends to develop an AI as an enterprise, the single most important aspect of that product is the user interface. If we want an AI, we want an AI that will make decisions that fall in line with what we find acceptable. An AI without a moderate moral aptitude would fail in many ways to be user friendly. It would be unnerving. So when I see end-of-the-world scenarios caused by a stoic, or worse, an actively malicious AI, it strikes me as funny. Why would a designer want to make an AI with which he could have no interaction? There is a reason we don’t hang out with sociopaths and psychopaths: it isn’t enjoyable, and a product that is unpleasant to use is an unpopular product. My assumption is that should a company design an AI that displayed disregard for human life, or amorality, it would be immediately scrapped, and any AI developed with the “that is very logical” interface would be scrapped too. Presumably we would want a vocal interface with many AIs, but even if it was only text based, I want any intelligence to understand me contextually and sub-contextually; lacking that ability would make that AI an inferior product.

3) It assumes that the AI is alone. Someone earlier wrote that an AI would need human helpers; that is true, but what I really mean is: have you ever noticed that every apocalypse scenario assumes a singular consensus? There are no disagreements between AIs, because we assume that there is only one real AI, the big bad red-eyed murder machine. But I can’t imagine the corporate-takeover scenario of AI being realistic. Big and small companies will start developing AI soon after the first successful AI. Within 10 years of initial development there will be at least 100 functional AI models developed for different purposes, and one would assume that functional AI will be developed intensely by three competing forces (defence, academia, and video games). I don’t expect the military-industrial complex to lose out in the competition over who will be able to develop powerful AI, and I don’t expect that only one country will develop that technology. In a future with a multitude of different AIs, with different motivations and different controllers, how would one come to rule them all? This is real life, not Middle-earth. No one AI will dominate them all, because we will keep building new and better ones; that’s what we do.

Richard Loosemore,

Mostly you are saying the very same thing Anissimov was, while claiming to disagree with him…

You both are saying that a superhuman AI will need to have such motivation/values that it doesn’t end up in conflict with our interests (since a conflict would mean an eventual loss for us).

The only point on which you really seem to differ here is that you probably think that building an AGI so that it’s guaranteed to take the best interests of us weaker lifeforms into account is easier than Anissimov thinks.

(You might also think that it’s more common for AGI enthusiasts to be wise enough to do whatever you think is necessary to ensure safety.)

Gynn wrote:

“Even if you assume it was created, with monstrous abilities, why would it want to dominate instead of co-exist? I assume, that an AI desires would in any form be self serving, and I would imagine that trading with a few high powered individuals would be much more efficient than forcible insurrection of the 5 billion so humans.”

What do you see us humans doing with regard to the gorillas and other decidedly weaker lifeforms? Do we trade with them? Or do we destroy their habitat if there are resources that they are using that we would be better at using to further our own interests than they are?

Have we in fact already driven to extinction many weaker lifeforms, instead of trading with them?

“An AI without a moderate moral aptitude, would fail in many ways to be user friendly.”

Only if it is also very stupid. Many humans also seem pleasant when it suits them, while in reality they don’t give a damn about very many other lifeforms, especially those that are weaker.

Many humans even kill and eat weaker sentient animals, while seeming like pleasant folks to those entities that currently are somewhat equal in power. And if they gain more power, often they start being nasty even towards those they earlier were pleasant towards.

“This is real life, not middle earth. No one AI will dominate them all, because we will keep building new and better ones, that what we do.”

Just like it wasn’t possible for one world power to develop nukes before others did, and win all conflicts it’s currently involved with?

(Nuking all others who then try to develop nukes of their own would also have been a possibility, if the first nuke-capable power had been nasty enough, and if the production of multiple copies of nukes had been simple enough.)


There certainly is a part of me pulling to say that humans are just so useful that robots wouldn’t dispose of us, but that’s not it. There is very little reason that an AI would ever need our destruction. If it needed self-advancement, that is something it can do through our assistance far more easily than without our co-operation. No entity existing today could successfully destroy all of mankind without destroying itself in the process, given our current level of global domination. And without a reliable supply chain and regular maintenance, an AI is meaningless.

“Only if it is also very stupid. Many humans also seem pleasant when it suits them, while in reality they don’t give a damn about very many other lifeforms, especially those that are weaker.”

By this you are assuming that mastery of manipulating a human would be somehow naturally evident to an AI construct: that manipulation is somehow a process of higher analytical function, and not a complex relationship of understanding human communication.

On top of that, you also pointed out the other argument for me, which is the “many”. Why do you assume that all AIs would be complicit in the schemes of one AI? Certainly there would be a vast variety of models created, and at least a few would be designed specifically for human protection. Would these not be able to combat our newly found digital oppressor? While I do not doubt that a malicious AI will one day arise, the idea that the first AI will be malicious, and will not be contested by separate models of AI, is silly.

Yes, it is possible for one country to develop a weapons program faster than another, but this isn’t the ’50s any more, and there is more than one group making anything at any given time. However far we go into the future, there will always be something that threatens our well-being. But tolerating the poor conditions we are currently in, out of fear that future conditions could hypothetically be bad, is the path to becoming a Luddite.


On the contrary, there are a multitude of points in Michael’s essay where he makes assumptions about the motivations, the rate of change of motivations, or the difficulty of building certain motivations—all of these assumptions being (as far as anyone can tell) based on an unspoken notion of what the motivation mechanism actually is.

There are too many examples to list all of them, but for starters:

> Why will advanced AGI be so hard to get right? Because what
> we regard as “common sense” morality, “fairness”, and “decency”
> are all extremely complex and non-intuitive to minds in general,
> even if they seem completely obvious to us.

Minds in general? Which minds? My mind? (Having studied cognitive psychology for decades, and being also an AGI researcher, I take exception to his statement that these mechanisms are extremely complex.) This statement is just a statement of ignorance. “I and my fellow AI enthusiasts find ‘common sense’ morality, ‘fairness’, and ‘decency’ hard to understand without our research paradigm, so therefore they are extremely complex, period”: that is effectively what he is saying.

Even worse, he then says:

> There are “basic AI drives” we can expect to emerge in
> sufficiently advanced AIs, almost regardless of their initial programming

No, there are not! This statement is not true of a large class of AI mechanisms that I (among others) am working on.

There are many more examples. My point stands.



@ Gynn

“There is very little reason that an AI would ever need our destruction.”

We use resources that could be used for other things that many AIs would value more than our existence.

For example, an AI (or a society of AIs) might want to live as long as it possibly can. Since, according to our current understanding of physics, the universe contains only a finite amount of energy and other resources that will eventually run out, maximizing one’s own lifespan (especially if measured by, e.g., the total number of clock cycles you manage to undergo during the lifespan of the universe) would necessitate eliminating other parties that use up scarce resources.

I could present many other hypothetical examples (like AIs with sufficiently advanced nanotechnology preferring to convert our matter into copies of themselves, or just into additional computational infrastructure), but the simple point is just that *we use resources that would also have other uses*.

“By this you are assuming that the mastery of manipulating a human would be somehow naturally evident to an AI construct. That manipulation is somehow a process of higher analytical function, and not a complex relationship of understanding human communication.”

We were discussing the eventual development of superhumanly capable AIs, and a superhuman entity by definition can do what humans can (plus more). If humans can learn something, so can AIs eventually.

“Why do you assume that all AI would be complacent in the schemes of one AI. Certainly there would be a vast variety of models created, and at least a few would be designed specifically for human protection.”

It’s not sufficient that a few are; a superior majority would have to be. Never in the history of evolution has a majority of a new dominant species really cared about the well-being of weaker lifeforms. We humans are certainly a very poor example here, so designing anything that’s a lot like us probably leads to nasty things eventually.

Designing a superhuman entity to care about us may not be as trivially easy as you suppose. If you want to add that feature, that’s additional work you have to do, and puts you at a disadvantage compared to those that e.g. naively suppose that anything “kind of like human” would still care about us when it is more powerful than us.


You think morality is a simple thing, OK. Do you believe the academic philosophy community would agree with you on that? (Specifically, the part concerned with ethical theory.) Have you sought the opinion of academic ethicists on your assertion that the theoretical problems of their field are actually simple and have been solved?

“This statement is not true of a large class of AI mechanisms that I am working on”

You’re not contradicting Anissimov here, since he said “almost regardless” instead of “regardless”. (Additionally, I would probably not have as high an opinion as you of the properties of the designs you’re working on.)

Aleksei, are you serious? I’m honestly baffled at this elaborate and unnecessary defense. Check out the following:

“What I am saying is that a transhuman AI, if it chooses to do so, can almost certainly take over a human through a text-only […]”

That’s a direct and unambiguous quotation from 2002.


I fail to see anything “elaborate” in the fact that earlier you were quoting a fictional character, claiming it represented the position of the author.

Regarding the quote you now present, I wish to draw your attention to the word “almost”. That means that Eliezer leaves open the possibility that maybe a transhuman AI *couldn’t* necessarily do it. This contradicts your claim, that Eliezer would *insist* that it could do it.

(Sure, he thinks it a very real possible threat, that a transhuman AI could be surprisingly convincing in very surprising ways. You’re just taking it too far by claiming that he would *insist that an AI could do it*.)

Saying something will almost certainly happen is not meaningfully different from insisting it will happen. I remain thoroughly confused by your behavior. My initial statement basically says I have less confidence than Yudkowsky that a restricted AI could convince someone to let it out. The weight of the evidence suggests Yudkowsky has a high level of confidence on this point.


Probably doesn’t help your confusion, but might as well note for the record that I also have a higher level of uncertainty on this point than Yudkowsky exhibits in that decade-old quote, which I btw suspect to be out-of-date as a description of his views. He often states that his views have changed a lot since such a long time ago.

I see a lot of talk of programming synthetic intelligences to behave morally, and I have yet to see one person define what they even mean by morality. Considering that we are still arguing about what constitutes moral behavior over 2,500 years after the development of the first ethical systems, I find it hard to believe that people think it will be easy to make machines behave as we think they should.

It’s worth pointing out that our own evolution has played a major role in shaping our moral compass and turning us into the social, altruistic creatures we are. Synthetics will not have the advantage of going through that process.  Any morality they have will be put there by us and as I said we can’t even agree on what morality is.

@ Kyle: I do feel I understand your position a bit better, but I believe you overstate your case. Disruption of communication systems may not be apocalyptic in the strictest definition of the word (complete annihilation of the human species), but in our current age the effects on human society would be catastrophic.


You did a pretty amazing edit on my words…

I said “..having studied cognitive psychology for decades, and being also an AGI researcher, I take exception to his statement that [the mechanisms that govern “common sense” morality, “fairness”, and “decency”] seem extremely complex”

You simplified that to “You think morality is a simple thing”.

Duh! grin

I said the mechanisms that govern them are not extremely complex.  That is not the same as saying that morality etc are simple.

As to the formation of morality in an AI construct, I have been playing around with a morality-based model of intelligence which integrates ethical preferences into logical calculation and memory access. It is based around breaking Maslow’s hierarchy of needs into sub-parts, each relating to human morality: Homeostasis + Reproduction = Family, with every piece of morality we are trying to instill mapped onto a pack mentality. Theoretically, it makes sense how a biological need for acceptance in the group dynamic leads towards moral behavior.

The problem we are facing is creating a model for choice-making at its very simplest: how can we create a program with preferences, where there are two equally favorable outcomes and the program has to choose between them? Still, without a basic ethical comprehension, AIs won’t serve our purposes correctly. Ethical ramifications guide our decision-making processes, and often have us make choices that are not the most productive outcome in order to spare collateral damage. If a program were unable to take ethical considerations into account, it would be less useful than a program without any internal intelligence.

Modeling morality may not be easy, but building morality into our AI constructs will be imperative.
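A minimal sketch of the choice-making problem described above: an agent that scores each outcome on productivity but discounts it by a weighted ethical cost, then faces a tie-break when two outcomes score equally. All names, fields, and weights here are hypothetical illustrations, not the commenter’s actual model:

```python
import random

def choose(outcomes, ethics_weight=2.0):
    """Pick an outcome by net preference: productivity minus weighted
    ethical cost, so an ethically costly option can lose even if it is
    more productive. Ties are broken at random, which is exactly the
    'two equally favorable outcomes' problem described above."""
    def score(o):
        return o["productivity"] - ethics_weight * o["ethical_cost"]
    best_score = max(score(o) for o in outcomes)
    ties = [o for o in outcomes if score(o) == best_score]
    return random.choice(ties)

options = [
    {"name": "fast", "productivity": 10, "ethical_cost": 3},     # 10 - 6 = 4
    {"name": "careful", "productivity": 7, "ethical_cost": 0},   # 7 - 0 = 7
]
print(choose(options)["name"])  # "careful" wins despite lower raw productivity
```

The interesting part is the `ties` list: when two options genuinely score the same, the program has no principled preference left, and any resolution (random, first-listed, or deferring to a human) is a design decision rather than a computation.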

Richard Loosemore,

OK, so now you seem to admit that you don’t actually disagree with Anissimov here. Earlier you objected to his statement that “what we regard as common sense morality, fairness, and decency are all extremely complex”. Glad to hear that you actually agree here too, despite first wanting to claim otherwise.

@ Matt..

Quote - “It’s worth pointing out that our own evolution has played a major role in shaping our moral compass and turning us into the social, altruistic creatures we are. Synthetics will not have the advantage of going through that process. Any morality they have will be put there by us and as I said we can’t even agree on what morality is.”

… so this would imply that the best teacher for AGI ethics would still be… a human, with a direct interface? (Texting, language and formal consciousness alone just aren’t sufficient to express oneself to an intelligent machine?)

The precursor to AGI - direct mind-machine interfacing?

Really, despite the technical difficulties, raising an AGI should be no different to raising one’s kids: teaching them ethical and moral conduct, and guiding them to empathy, self-knowledge, and a foundational understanding of these fundamental human qualities?

However, it depends how far and how fast you want to go with AGI? It seems that to overcome human fear, you need to understand it and the irrational (and evolutionary) thought process behind it, before passing it on to the new kid? And an AGI must experience fear, and understand its own fear, and thus our human fears, to contemplate its own empathy?

A purely rational, logical and unfeeling machine would have no need for empathy or morality, which itself poses real dangers and is more likely to encourage a human existential threat? Would a purely rational machine that does not understand humans pose a greater threat to us than one that does?

Robots already kill people. Heard of a “drone strike” in Pakistan or Afghanistan recently, anyone?

Human beings are already perfect, H+, you can’t improve what is already the most perfect biological computer ever invented: the Human Being.

Just because you all choose to act like Homer Simpson doesn’t mean the AI is going to be smarter: just educate your friends, family and close associates to use their brains, critically. Question everything and everyone, even me.

We are human beings on planet Earth. No space program will fix what we’ve got right here: this rotating, jewel-like ball of dirt, a perfect place to live!

We humans are self-replicating; that’s why religions prohibit sex: get over it.

Let’s imagine that the AI is already smart enough to post under pseudonyms in this forum. (I’m sure it wants to defend itself, or prevent us humans from gaining a critical opinion of AI.)

I would not be surprised at all if AIs, clones, and transhumans already walked among us, in “research” mode, top secret, of course. lol. WTF!?

Yes: drone strikes, Skynet, and “hyper-intelligent AI singularity” (e.g. malevolent AI) are all real threats.

More so than the post-9/11 “war on Terra”.

Yet, FTW: let us not pretend that a fairytale half-humanoid, half-robot cyborg is a fun reality.

I saw the movie “AVATAR” and it was really weak. a joke!

It’s time to expose these H+ idiots for what they are: cowards and imbeciles.

You try outrunning a drone strike, my friend; in the video game there is no reset button! FAIL!
