A conception of evil that carries over from the Abrahamic religions into secular modernity is that of the ‘disorganization of the soul’. The idea here is that evil isn’t something separate from good but something that arises from the malformation or malfunctioning of good parts. Thus, Satan in Milton’s Paradise Lost is God’s best angel gone rogue, the template for the villains faced by comic book superheroes. Many if not most mental illnesses, from neurosis to autism, are defined as some sort of ‘disorder’. In a similar but grander vein, cybernetics founder Norbert Wiener regarded entropy – the ultimate expression of disorganization in physics – as the material equivalent of evil, the source of all suffering, decay and death.
As we shall see below, this conception of evil is about to take a new and more disorienting turn.
A common feature of the above versions of the ‘disorganized soul’ is the opening up of so many possible states of being in the world that none takes any sustained precedence in one’s thought and action. Thus, Milton’s Satan and Batman’s Joker thrive in a strife-riven world in which otherwise good souls are turned into implacable foes because they cannot see beyond their differences to a common path. Similarly, the ‘disordered’ character of mental illness comes from the subject’s failure to see that thoughts and actions that work well in one context don’t generalize over all contexts. Finally, the ‘evil’ of Wiener’s entropy appears in information theory, when communication is perverted by a degraded channel in which the signal can’t be reliably distinguished from the noise.
Francisco Goya’s famous etching, ‘The Sleep of Reason Produces Monsters’, which depicts the awakening of demons in the dream state, epitomizes the disorganized soul. Here dreaming produces a sense of reality comparable to the wakened state, as novel combinations of elements from the dreamer’s waking life generate new experiences in the unconscious. In this context, the potential for evil arises in one of two ways: either the vividness of the dream state becomes confused with that of the wakened state or, perhaps more perniciously (at least according to that great enemy of utopian politics, Sigmund Freud), novel combinations that appear attractive in the dream state become the basis for action in the wakened state.
Here it is worth recalling a feature of Aristotle’s cosmology that made it such a stabilizing force in Christendom. Aristotle held that everything possible was already realized at some point in history, which he understood as a cyclical process. In other words, humanity has already seen what is and isn’t possible and on that basis can judge the viability of any prospect put on the table. The historical record thus provides a baseline for what is and isn’t ‘natural’. To be sure, this line of thought arguably held back the Scientific Revolution by three centuries, yet it continues to inform Roman Catholic policies on adventurous biomedical interventions that would alter the genome or redeploy organic matter to radically new ends, as in the case of stem cell research.
In The Proactionary Imperative: A Foundation for Transhumanism, I argued that the intellectual move that broke the spell of Aristotle’s world-view was the identification of ‘what is possible’ with what is logically or conceptually coherent, rather than what is empirically probable. This began to happen in the late medieval period – I attribute it to John Duns Scotus – and effectively shifted how humans thought about possibility: from the standpoint of the observer of possibilities (Aristotle’s view) to that of the generator of possibilities (the modern view, though originally an observation about our similarity to God). The knock-on effects of this shift were felt slowly but systematically. For example, it moved atomism from being a metaphysical world-view to a scientific research programme.
A deeper consequence of the shift was that the virtual was set alongside the potential. Whereas Aristotle could imagine the ‘constitution of society’ only in terms of what is potentially available to a given society according to its unique natural and civil history, Thomas More could imagine the ‘constitution of society’ in terms of what is virtually available by bringing together the best bits of all previous societies. This is what he called ‘Utopia’. Of course, this sense of virtuality opens the door to our creating not only the best but also the worst of all possible worlds – as well as everything in between. And what More could only do in his imagination, which later philosophers developed into the ‘social contract’, is now routinely done in science fiction novels, computer simulations – and in the future perhaps in such artificial environments as star arks and extraterrestrial settlements.
Indeed, the ease with which we can combine possibilities into coherent worlds removes much of the traditional phenomenology of evil, which involves a sense of recoiling in the face of abnormality or liminality: a standing back from the precipice. This is what George W. Bush’s bioethics czar Leon Kass called the ‘wisdom of repugnance’ in an influential statement of ‘bioconservatism’. But Kass is hardly an outlier. Even within analytic philosophy, which is notorious for its content-stripped reliance on logic, ethics still trades in moral intuitions, on the assumption that we ordinarily possess a fairly robust and nuanced sense of the difference between ‘right’ and ‘wrong’. And even the most futuristic versions of naturalistic ethics – which envisage the propriety of using ‘moral enhancement’ drugs to boost the level of goodness in society – rely on the very same intuitions.
But this sense of convergent intuitions on what is right and wrong may simply reflect a lack of imagination about the sort of world in which we might live. Indeed, the ease with which we can nowadays conjure coherent alternative realities, including ones that overcome or sublimate death, is gradually undermining just such intuitions. Death, after all, has long been the polestar by which we set our moral compass. Many if not most of our moral intuitions are predicated on the mortality of humans. This is what makes murder heinous, self-sacrifice noble and suicide problematic. It equally informs the idea that life at all stages is ‘precious’. But what happens if death is not only preventable but also reversible and even commutable (i.e. one may die biologically yet live on digitally)?
The last prospect is especially interesting. It amounts to ‘functionalizing’ life. In other words, life is not tied to a particular material realization but may be multiply embodied according to certain procedures or rules as long as what is produced does ‘the work of living’. In that case, one may ask how much is ‘lost in translation’ between one embodiment and another – and whether that requires some sort of compensation. In this respect, my killing you may be regarded as an economic transaction in which I pay the cost of, say, reviving your corpse (if possible) or constructing and maintaining your digital existence, including the memories which may have originally motivated me to kill you. In either case, the transaction would be primarily about reinstating you in the world, for which I would be legally responsible. Murder would thus morph into a form of indefinite adoption, an upgraded version of ‘You break it, you own it’.
Whatever one makes of this particular proposal, the prospect of many such possible ‘alt-life’ scenarios calls into question the soundness of our ordinary sense of right and wrong. Moreover, these doubts extend further once the value of leading a flourishing life in general is placed above the value of leading a specifically human life. Forty years ago Peter Singer placed this issue squarely on the mainstream philosophical ethics agenda, only to face enormous controversy from many quarters, which rumbles on to this day. Yet at the same time, the idea that the value of human life may cut against the value of life as such has been somewhat normalized through the proliferation of ‘posthumanist’ discourses. In many circles, such discourses are seen as the epitome of forward-looking moral propriety, especially given the ecologically motivated concern for there being ‘too many’ humans to allow for the full range of beings to flourish on Earth.
A vivid way to appreciate just how much our moral intuitions have been upended by the plethora of possibilities at our disposal is to consider how one would rank-order the following six states of the world in terms of the moral harm that would result from them:
- Total ‘dehumanization’ of humanity (e.g. the Marx-style radical alienation that reduces the value of labour to mechanical work in the production process, or perhaps the totalitarian consumerism of Brave New World).
- Partial ‘dehumanization’ (arguably the world we live in today, in which some people retain control of their labour and lifestyle – and some don’t).
- The extinction of humanity (i.e. in the Darwinian sense of the elimination of the entire species).
- A mass depopulation of humans, falling short of outright extinction (e.g. as a result of a major global climate catastrophe or a nuclear exchange).
- An erasure of the human–nonhuman boundary (i.e. involving significant xenotransplantation, in which the transplanted substances may be organic tissue and/or silicon interfaces, resulting in hybrid creatures such as cyborgs).
- A re-specification of the ‘human’ to be substrate-neutral (i.e. a ‘human’ need not be the descendant of another member of Homo sapiens but rather could be a status conferred on any suitably qualified entity, as might be administered by a citizenship test or even a Turing Test).
That judgements may vary radically in an easily reconfigured moral universe sums up tomorrow’s problem of good and evil, aspects of which have already begun to bother the consciences of those living now.
Steve Fuller is Auguste Comte Professor of Social Epistemology at the University of Warwick. His next book is Post-Truth: A Social Epistemology for Our Times (Anthem).