When Mass Suicide Might be Morally Right
Phil Torres
2014-12-04

The amity-enmity problem points to terror, but there is an error possibility as well (which no one has yet explored in detail). It could be that the superintelligence is an ally of ours, genuinely wanting us to flourish. But just as being superintelligent doesn't entail that one is benevolent, according to Nick Bostrom's "orthogonality thesis," so too does it not entail infallibility.[1] In other words, the AI could be our friend but nonetheless nudge us over the cliff of extinction purely by accident – maybe as the result of a "stupid mistake." Given the power that a superintelligence would wield in the world, even a tiny mistake could have catastrophic consequences for humanity. Call this the clumsy fingers problem. (I discuss these and related issues in detail in my forthcoming book, The End: What Religion and Science Tell Us About the Apocalypse.)

So, given the risks, should we attempt to create a superintelligence? Many agree that the benefits could be profound: a super-mind could help run the world economy, figure out how to merge quantum mechanics with general relativity, and even obviate existential risks like ecophagic self-replicating nanobots released into the environment by a suicidal misanthropic psychopath, perhaps motivated by apocalyptic religious views. (Since there are many more deranged individuals than malevolent groups, and many more malevolent groups than evil empires, the probability that a "lone wolf" ruins it for everyone will increase significantly in the future, as two types of technologies in particular -- biotechnology and nanotechnology -- become more and more accessible.)

But there may be an additional reason for us to pursue the creation of a superintelligence, a reason that’s moral in nature. As far as I know, no one has yet discussed this.

Consider the philosopher Derek Parfit's claim that the difference between 99% of humanity dying and 100% of humanity dying is far greater than the difference between peace on earth and 99% of humanity being destroyed. In other words, a horrific but survivable catastrophe would be far less bad for humanity than an extinction event.

The reason is that future humans are no less valuable than present humans, and an extinction scenario would deny future generations the opportunity to live. As such, it would deprive the universe of a potentially vast amount of meaning and value.
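A toy calculation makes the asymmetry explicit; the population figures below are illustrative assumptions of mine, not numbers taken from Parfit:

```latex
% Illustrative figures only:
%   N_now    = people alive today
%   N_future = people who could yet come to exist
\[
  N_{\mathrm{now}} \approx 10^{10}, \qquad N_{\mathrm{future}} \approx 10^{16}
\]
% A catastrophe that kills 99% of the present population forecloses roughly
\[
  0.99\,N_{\mathrm{now}} \approx 10^{10} \text{ lives,}
\]
% whereas extinction forecloses the survivors and every potential future life:
\[
  0.01\,N_{\mathrm{now}} + N_{\mathrm{future}} \approx 10^{16} \text{ lives.}
\]
% On these figures, the step from 99% to 100% is about a million times larger
% than the step from peace on earth to a 99% die-off.
```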

The value of human life stems from the fact that we are sentient creatures with the capacity for happiness and pleasure. We can live meaningful lives engaged in activities that we find worthwhile and fulfilling. But our capacities for desirable experiences are limited. Just as a dog can't experience moments of "flow," or a sense of wonderment while staring at the midnight firmament, or the feeling of being content with one's accomplishments in life, so too may there be experiences – like "flow," but beyond it – that we can't have.

Another type of mind, though – perhaps one that arises from the circuits of a computer and exhibits a form of "strong" superintelligence – might have access to such experiences. It could potentially attain levels of happiness and satisfaction that exceed our capacities to the same degree that our capacities exceed those of a guinea pig. In a phrase, a superintelligence's life could be far more valuable than ours, just as ours is more valuable than a cricket's.

On the Parfit model, then, not creating such a mind would deny the universe all the potential meaning and value that it (or perhaps a whole population of similar beings) would introduce. Not creating a superintelligence would be tantamount to a preemptive existential strike against it – the AI equivalent of Parfit's 100% scenario above. Thus, for the very same reason that preventing an existential risk is a pressing moral obligation for present-day humans, creating a superintelligence is one as well.

(One could extend this argument to running simulations full of conscious simulants -- not just "ancestor simulations," but "progeny simulations." If humanity found itself unable to multiply and prosper throughout the cosmos, there's always the possibility of simulating future generations in an ersatz universe running on a massive supercomputer. Assuming that functionalism is true, this would accomplish the same end, since a simulated mind would be cognitively indistinguishable from a "real" mind.[2])

But there are some problems for humanity here.

Let's make a few assumptions explicit: first, let’s assume a utilitarian ethics whereby moral actions are those that maximize the overall happiness of sentient life on earth. And second, let's assume that the amount of happiness (or some similarly desirable cognitive-emotional property) a single superintelligence could experience is greater than that of every human being added together. By analogy, one might argue that the amount of happiness and pleasure a single human can experience is greater than that of all the ants throughout the world in aggregate. While the first assumption is open to philosophical debate, the second one seems plausible if a genuinely superintelligent mind one day explodes onto the scene.

The resulting situation may be morally catastrophic for humanity. Imagine that the amity-enmity problem doesn't get solved in time, and the recursively self-improving AI turns out to be maliciously unfriendly towards Homo sapiens. Imagine that the AI's unfriendliness takes the form of humanicidal sadism: it tortures and kills human beings because doing so gives it genuine happiness and pleasure. From a purely ethical perspective, what is the best outcome here? What course of action would maximize the overall happiness of sentient life on earth?

The answer isn't "unplugging the superintelligence," since the amount of happiness lost by assassinating the AI would, ex hypothesi, exceed the amount of happiness humanity would gain by surviving. Given a utilitarian framework, the most ethical action would be to let the AI torture and kill humanity: a universe in which such events occur would contain more overall well-being than one in which they don't. Thus, not only would the AI be an existential threat to us, but it would have the moral high ground.
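Here is a minimal sketch of that bookkeeping, assuming the two premises above; the population figure and all of the utility values (AVG_HUMAN_UTILITY, AI_UTILITY_IF_FREE, and so on) are placeholders invented for illustration:

```python
# Toy total-utilitarian bookkeeping for the unplug-or-not question.
# Every number below is an illustrative placeholder, not an empirical claim.

HUMAN_POPULATION = 8_000_000_000   # illustrative head count
AVG_HUMAN_UTILITY = 1.0            # arbitrary units of well-being per surviving person
AI_UTILITY_IF_UNPLUGGED = 0.0      # an unplugged AI experiences nothing
AI_UTILITY_IF_FREE = 10**12        # ex hypothesi: more than all humans combined

# Outcome 1: humanity unplugs the AI and survives.
total_if_unplugged = HUMAN_POPULATION * AVG_HUMAN_UTILITY + AI_UTILITY_IF_UNPLUGGED

# Outcome 2: the AI destroys humanity and goes on generating its own happiness.
total_if_ai_prevails = 0 * AVG_HUMAN_UTILITY + AI_UTILITY_IF_FREE

# A strict total utilitarian endorses whichever outcome yields the larger sum.
verdict = "unplug the AI" if total_if_unplugged > total_if_ai_prevails else "let the AI prevail"
print(verdict)  # with these placeholders: "let the AI prevail"
```

The point of the sketch is only that, once the second assumption is granted, the verdict is settled by the relative magnitudes, not by which species is doing the counting.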

Consider a less extreme situation, one in which (to use a now-hackneyed example) all a superintelligence wants to do is make paperclips. However strange it appears to us, this activity gives the AI genuine happiness; it's what gives its life meaning. The problem is that humans are full of atoms that could be used to make paperclips, which means that we’d eventually be disassembled into our constituent parts. (This is an instance of the indifference problem; it's neither error nor terror.)

Should we let this happen? It depends: if the total misery caused by human annihilation is less than the total happiness gained by the AI – again, given its vastly greater capacities for pleasure – then it may indeed be moral for humanity to be destroyed. There are many analogous situations to put this in context: when we build a house, for example, we don't ask all the worms, beetles, and squirrels living on the property for their permission before starting construction. We just start building. Our happiness outweighs that of these "lesser" beings.

To summarize: if one accepts Parfit's argument, then, first, we may be morally obligated not only to minimize the probability of an existential risk occurring,[3] but also to create a superintelligent being with cognitive-emotional capacities much greater than ours. Not doing so would deny an amount of happiness and meaning that could, potentially, vastly exceed anything humanity could possibly generate. (Put differently, rather than having biological children, why not give birth to an artificial child -- one with an even greater capacity for pleasure?) And second, once a superintelligent AI has arrived, if its capacity for pleasure (and pain) is greater than ours collectively, then the most ethically desirable situations may be ones in which humanity follows the Dodo into extinction.

Notes:

[1] This gestures at a different sort of orthogonality thesis: how clumsy one is may not be tightly linked to how intelligent one is. A genius might also be a goofball.

[2] On the other hand, one might argue that we shouldn't simulate minds in the future for the very same reason that God probably doesn't exist: a world full of evil, especially "natural" and gratuitous evil, is morally inexcusable. I am tempted by this latter option, as I find the argument from evil cogent.

[3] This is Bostrom's "Maxipok" rule.