Institute for Ethics and Emerging Technologies

The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States. Please give as you are able, and help support our work for a brighter future.

AIs and the Decisive Advantage Thesis

By John Danaher
Ethical Technology

Posted: Feb 16, 2013

One often hears it claimed that future artificial intelligences could have significant, possibly decisive, advantages over us humans. This claim plays an important role in the debate surrounding the technological singularity, so considering the evidence in its favour is a worthy enterprise. This post attempts to do just that by examining a recent article by Kaj Sotala entitled “Advantages of Artificial Intelligences, Uploads and Digital Minds”.

The post is in two parts. The first part explains exactly how the claim of decisive advantage — or as I shall call it “The Decisive Advantage Thesis” (DAT) — affects debates surrounding the technological singularity. The second part is then a more straightforward summary and consideration of the evidence Sotala presents in favour of the DAT (though he doesn’t refer to it as such). The post is somewhat light on critical commentary. It is primarily intended to clarify the arguments supporting the DAT and its connection to other important theses.

1. The Relevance of the DAT
Before looking at the evidence in its favour, it’s worth knowing exactly why the DAT is important. This means tracing out the logical connections between the DAT and other aspects of the debate surrounding the technological singularity. As I have pointed out before, much of that debate is about two core theses. The first of these — the Explosion Thesis — states that it is likely that at some point in the future there will arise a greater-than-human artificial intelligence (AI+) which will create even more intelligent machines, which will in turn create more intelligent machines, in a positive feedback cycle. This is referred to as the intelligence explosion. To put it slightly more formally:

The Explosion Thesis: It is probable that there will be an intelligence explosion. That is: a state of affairs in which every AI_n that is created will create a more intelligent AI_n+1 (where the intelligence gap between AI_n+1 and AI_n increases), up to some limit of intelligence or resources.
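The recursive structure of this definition can be sketched in a few lines of code. This is a toy model, not a prediction: the starting values, the 20 per cent gap-growth rate and the resource ceiling are all invented for illustration.

```python
# Toy model of the Explosion Thesis: each AI_n designs a smarter AI_n+1,
# with the intelligence gap widening each generation until a resource
# ceiling is reached. All numeric values are arbitrary illustrations.

def intelligence_explosion(start=1.0, initial_gap=0.1,
                           gap_growth=1.2, ceiling=100.0):
    levels = [start]
    gap = initial_gap
    while levels[-1] + gap <= ceiling:
        levels.append(levels[-1] + gap)  # AI_n builds a smarter AI_n+1
        gap *= gap_growth                # the gap itself widens each step
    return levels

levels = intelligence_explosion()
gaps = [b - a for a, b in zip(levels, levels[1:])]
assert all(g2 > g1 for g1, g2 in zip(gaps, gaps[1:]))  # gaps strictly increase
assert levels[-1] <= 100.0  # growth stops at the resource limit
```

The point the model makes is only structural: given a widening gap and a finite ceiling, the process is explosive but bounded, exactly as the thesis states.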

There are several arguments in favour of the Explosion Thesis, some of which are surveyed in Chalmers (2011), but the classic statement of the argument can be found in the writing of I.J. Good. As he put it:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

As you can see, embedded within this argument is the DAT. The key motivating premise of Good’s argument is that an ultraintelligent machine will be better than human beings at designing intelligent machines. But that premise itself depends on the assumption that those machines will be better than human beings across all the key cognitive dimensions needed to create intelligent machines. That is simply to say that those machines will have a decisive advantage over humans across all those dimensions. If no decisive advantage were required (that is, if human-level design abilities sufficed), human designers would already have produced an intelligence explosion. Hence, the DAT provides significant support for the Explosion Thesis.

The second core thesis in the singularity debate — the Unfriendliness Thesis — holds that the prospect of AI+ should be somewhat disquieting. This is because it is likely that any AI+ that is created would have goals and values that are inimical to our own.

The Unfriendliness Thesis (SV): It is highly likely (assuming possibility) that any AI+ or AI++ that is created will have values and goals that are antithetical to our own, or will act in a manner that is antithetical to those values and goals.

The connection between the DAT and the Unfriendliness Thesis is not direct, as it was in the case of the Explosion Thesis, but it reveals itself if we look at the following “doomsday” argument:

  • (1) If there is an entity that is vastly more powerful than us, and if that entity has goals or values that contradict or undermine our own, then doom (for us) is likely to follow.
  • (2) Any AI+ that we create is likely to be vastly more powerful than us.
  • (3) Any AI+ that we create is likely to have goals and values that contradict or undermine our own.
  • (4) Therefore, if there is AI+, doom (for us) is likely to follow.
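To see how much of the argument's force rests on premise (2), the chain can be given a rough probabilistic reading. Every number below is an invented placeholder, and the independence assumption is itself contestable; the sketch only shows that the estimated risk scales directly with the probability assigned to a decisive power advantage.

```python
# Rough probabilistic reading of the doomsday argument. All numbers are
# invented placeholders; the point is that P(doom) scales directly with
# premise (2), the premise that depends on the DAT.

def p_doom(p_powerful, p_unfriendly, p_doom_given_both=0.9):
    # Treats the premises as independent, which is itself contestable.
    return p_powerful * p_unfriendly * p_doom_given_both

high_dat = p_doom(p_powerful=0.9, p_unfriendly=0.5)  # confident in the DAT
low_dat = p_doom(p_powerful=0.1, p_unfriendly=0.5)   # sceptical of the DAT
assert abs(high_dat / low_dat - 9.0) < 1e-9  # risk scales with premise (2)
```

On this reading, anyone who rejects the DAT can deflate the doomsday conclusion without touching the unfriendliness premise at all, which is why premise (2) carries so much of the load.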

The second premise, which is crucial to the overall argument, relies on the claim being made by the DAT. The “unfriendliness” of any potential AI+ is only really disquieting if it has a decisive power advantage over us. If an AI+ had unfriendly goals and values, but (a) didn’t have the means or the ability to achieve those goals; and (b) the means and abilities it did have were no greater than those of human beings, then there would be nothing too great to worry about (at least, nothing greater than what we already worry about with unfriendly human beings). So, once again, the DAT provides support for a key argument in the singularity debate.

2. Support for the DAT
Granting that the DAT is important to this debate, attention turns to the little matter of its truth. Is there any good reason to think that AI+s will have decisive advantages over human beings? Superficially, yes. We all know that computers are faster and more efficient than us in certain respects, so if they can acquire all the competencies needed in order to secure general intelligence (or, more precisely, “cross-domain optimisation power”) there is something to be said in favour of the DAT. In his article, Sotala identifies three general categories of advantages that AI+s may have over human beings. They are: (a) hardware advantages; (b) self-improvement advantages; and (c) cooperative advantages. Let’s look at each category in a little more detail.

A human brain is certainly a marvelous thing, consisting of approximately a hundred billion neurons connected together in an inconceivably vast array of networks, communicating through a combination of chemical and electrical signalling. But for all that, it faces certain limitations that a digital or artificial intelligence would not. Sotala mentions three possible advantages that an AI+ might have when it comes to hardware:

Superior processing power: Estimates of the amount of processing power required to run an AI range up to 10^14 floating point operations per second (FLOPs), compared with up to 10^25 FLOPs for a whole brain emulation. In other words, an AI could conceivably do the same work as a human mind with less processing power, or alternatively it could do much more work for the same amount of processing power or less. (At least, I think that’s the point Sotala makes in his article; I have to confess I am not entirely sure what he is trying to say on this particular issue).
Superior Serial Power: “Humans perceive the world on a characteristic timescale” (p. 4), but an artificial intelligence could run on a much faster timescale, performing more operations per second than a human could. Even a small per-second advantage would accumulate over time. Indeed, the potential speed advantages of artificial intelligences have led some researchers to postulate the occurrence of a speed explosion (as distinct from an actual intelligence explosion). (Query: Sotala talks about perception here, but I can’t decide whether that’s significant. Presumably the point is simply that certain cognitive operations are performed faster than humans, irrespective of whether the operations are conscious or sub-conscious. Human brains are known to perform subconscious functions much faster than conscious ones.).
Superior parallel power and increased memory: Many recent advances in computing power have been due to increasing parallelisation of operations, not improvements in serialisation. This would give AIs an advantage on any tasks that can be performed in parallel. But it also creates some problems for the DAT, as it may be that not all intelligence-related operations are capable of being parallelised. In addition to this, AIs would have memory advantages over human beings, both in terms of capacity and fidelity of recall; this could prove advantageous when it comes to some operations.
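The hardware figures above can be put side by side with quick arithmetic. The FLOPs numbers are the upper-bound estimates cited from Sotala; the 10 per cent per-step advantage, and the assumption that each step's gain is reinvested in self-improvement, are my own illustrative inventions.

```python
# Arithmetic on the processing-power estimates cited above: up to 10^14
# FLOPs to run an AI versus up to 10^25 FLOPs for whole brain emulation.
ai_flops = 1e14
wbe_flops = 1e25
efficiency_ratio = wbe_flops / ai_flops
assert abs(efficiency_ratio / 1e11 - 1) < 1e-9  # AI estimate ~10^11 times cheaper

# How a small serial-speed advantage accumulates, IF each step's gain is
# reinvested in self-improvement (an assumption beyond the text). The
# 10% per-step figure is invented purely for illustration.
steps = 1000
per_step_advantage = 1.10
cumulative_advantage = per_step_advantage ** steps
assert cumulative_advantage > 1e40  # compounds to roughly 2.5e41
```

Without the reinvestment assumption, a constant speed advantage only ever yields a constant ratio of work done; the explosive reading requires the feedback loop.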

Humans have a certain capacity for self-improvement. Over the course of our lives we learn new things, acquire new skills, and become better at skills we previously acquired. But our capacity for self-improvement faces certain limitations that artificial intelligences could overcome. This is what Sotala’s second category of advantages is all about. Again, he notes three such advantages:

Improving algorithms: As is well known, computers use algorithms to perform tasks. The algorithms used can be improved over time: they become faster, consume less memory, and rely on fewer assumptions. Indeed, there is evidence (cited by Sotala in the article) to suggest that gains in algorithm efficiency have been responsible for significant performance improvements on a benchmark production planning model in the recent past.
Creating new mental modules: It is well known that specialisation can be advantageous. Improvements in productivity and increases in economic growth are, famously, attributed to it. Adam Smith’s classic paean to the powers of the market economy, The Wealth of Nations, extolled the benefits of specialisation using the story of the pin factory. The human brain is also believed to achieve some of its impressive results through the use of specialised mental modules. Well, argues Sotala, an AI could create even more specialised modules, and thus could secure certain key advantages over human beings. A classic example would be the creation of new sensory modalities that can access and process information that is currently off-limits to humans.
New motivational systems: Human self-improvement projects often founder on the rocks of our fickle motivational systems. I know this to my cost. I frequently fail to acquire new skills because of distractions, limited attention span, and weakness of the will. AIs need not face such motivational obstacles.
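Sotala's cited evidence concerns a production planning benchmark; as a stand-in, here is a standard textbook illustration of the same point (naive versus memoised Fibonacci), with the improvement measured in recursive calls rather than wall-clock time so the result is deterministic.

```python
# A standard illustration of algorithmic improvement (not Sotala's
# benchmark): naive vs memoised Fibonacci. Same outputs, drastically
# fewer operations -- a software gain requiring no new hardware.

def fib_naive(n, counter):
    counter[0] += 1
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

def fib_memo(n, counter, cache=None):
    if cache is None:
        cache = {}
    counter[0] += 1
    if n in cache:
        return cache[n]
    result = n if n < 2 else (fib_memo(n - 1, counter, cache) +
                              fib_memo(n - 2, counter, cache))
    cache[n] = result
    return result

naive_calls, memo_calls = [0], [0]
assert fib_naive(20, naive_calls) == fib_memo(20, memo_calls) == 6765
assert naive_calls[0] > 100 * memo_calls[0]  # orders of magnitude fewer calls
```

The analogy to the DAT is that an AI able to rewrite its own algorithms could capture gains of this shape across all of its cognitive tasks, not just on isolated benchmarks.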

Humans often do amazing things by working together. The CERN project in Geneva, which recently confirmed the existence of the Higgs boson, is just one example of this. It was through the combined efforts of politicians, institutions and thousands of scientists and engineers that this endeavour was made possible. But humans are also limited in their capacity for cooperative projects of this sort. Petty squabbling, self-interest and lack of tact frequently get in the way (though these things have their advantages). AIs could avoid these problems with the help of three things:

Perfect Cooperation: AIs could operate like hiveminds or superorganisms, overcoming the cooperation problems caused by pure self-interest. (Query: I wonder if this really is as straightforward as it sounds. It seems to me like it would depend on the algorithms the AIs adopt, and whether some capacity for self-interest might be a likely “good trick” of any superintelligent machine. Still, the analogy with currently existing superorganisms like ant colonies is an intriguing one, though even there the cooperation is not perfect).
Superior Communication: Human language is vague and ambiguous. This is often a motivator of conflict (just ask any lawyer!). Machines could communicate using much more precise languages, and at greater speed.
Copyability: One thing that leads to greater innovation, productivity and efficiency among humans is the simple fact that there are more and more of them. But creating more intelligent human beings is a time- and resource-consuming operation. Indeed, the time from birth to productivity is increasing in the Western world, with more time spent training and upskilling. Such timespans would be minimised for AIs, who could simply create fully functional copies of their algorithms.
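The copyability point can be given a minimal sketch, assuming a toy agent whose competence is just data: duplicating the trained agent is a single cheap copy, whereas producing another trained human means repeating the whole training process. The class and its names are invented for illustration.

```python
import copy

# Toy illustration of copyability: a trained agent's competence lives in
# a data structure, so a second competent agent costs one deep copy, not
# a rerun of the whole training process. All names here are invented.

class Agent:
    def __init__(self):
        self.skills = {}
        self.training_steps = 0

    def train(self, skill, steps):
        self.training_steps += steps   # the slow, expensive part
        self.skills[skill] = True

original = Agent()
original.train("machine design", steps=1_000_000)

clone = copy.deepcopy(original)        # near-free duplication of competence
assert clone.skills == original.skills
assert clone is not original           # a genuinely separate agent
```

This is what collapses the "time from birth to productivity" for AIs: the million training steps are paid once, and every subsequent copy inherits them for the price of the copy operation.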

That completes the survey of Sotala’s three categories of advantage. In addition to these, there are a variety of advantages related to the avoidance of human biases, but I won’t get into those here.

Taken together, these advantages form a reasonably powerful case for the DAT. That said, they are not quite enough in themselves. If we go back to our earlier concern with the singularity debate, we see that the links between these kinds of advantages and the Explosion Thesis or the Doomsday Argument are a little sketchy. One can certainly see, in general terms, how the kinds of cooperative advantages identified by Sotala might provide succour for the proponent of the Doomsday Argument, or how self-improvement advantages of the sort listed could support the Explosion Thesis, but it looks like more work would need to be done to make the case for either fully persuasive (though the former looks in better shape). After all, it must not only be shown that AI+s will have decisive advantages over human beings, but also that those advantages are of the right kind, i.e. the kind that could lead to a recursive intelligence explosion or a doomsday scenario. Still, I think there is plenty to work with in this article for those who want to support either position.

John Danaher holds a PhD from University College Cork (Ireland) and is currently a lecturer in law at NUI Galway (Ireland). His research interests are eclectic, ranging broadly from philosophy of religion to legal theory, with particular interests in human enhancement and neuroethics. John blogs at You can follow him on twitter @JohnDanaher.