Mutants, Cyborgs, AIs and Androids

What does it mean to be human? We are, of course, biological creatures, and the question allows a literal answer when approached at that level. Modern humans are classified biologically as Homo sapiens sapiens. We are definable by our genetic code, and are closely related to chimpanzees and bonobos—somewhat less so to gorillas, orang-utans and gibbons. It is to our own species that Jared Diamond is alluding in the title—and text—of his book The Rise and Fall of the Third Chimpanzee (1991).

We are also physical entities, located in space and time. We occupy a relatively confined place on or near the surface of the planet Earth—which is one of the lesser planets of our local solar system and is separated from the other planets by much greater distances than any we can encounter on the Earth’s surface. Our solar system, in turn, occupies a tiny place in a universe that contains uncountable galaxies, each with many millions of stars. Yet, as physical entities on our planet, we are special in one way: we are extraordinarily complex beings. Our brains and nervous systems contain billions of neurons, with a seemingly unmappable intricacy of interconnections. They are also intimately connected with the other parts of our bodies, and all aspects of their functioning. Nothing else that we have encountered to date matches our extreme organisational and functional complexity, not even the most closely related apes, whose big brains cannot compete with the highly developed human cerebral cortex.

Most important, we are social and fully moral beings. Our current knowledge of human evolution suggests that our immediate precursors were already social animals. As Peter Singer puts it, ‘We were social before we were human.’ (1) From this, Singer argues that our evolutionary ancestors must have restrained their behaviour towards each other to the extent required for their societies to function; they showed the beginnings of morality. But it is not that we merely happen to exercise some restraint towards each other as we pursue our goals; we also believe that this is the right way to act and to live. We perceive each other as having moral worth, as being worthy of moral respect.

Taken at its broadest, to respect X is to perceive X as providing a constraint on our individual self-interest or spontaneity. We must take X into account, rather than acting unthinkingly or simply as we think best for ourselves. And there are some things that we feel it is just not morally right to do to certain kinds of entities, or beings.

It is not only our fellow humans who command from us a degree of moral respect. We share with our simian relatives, and with many other animals, a vulnerability to physical suffering—and this gives us all a certain moral status. Even if it is sometimes justifiable to kill non-human animals, it does not follow that it is justifiable to treat them cruelly. Of course, it is frequently claimed that human beings have a special moral worth beyond that of all other animals. But what makes us so special? In answering this, many thinkers—most recently Francis Fukuyama in his book Our Posthuman Future: Consequences of the Biotechnology Revolution (2002)—have attempted to identify important characteristics we possess but that are not possessed by other animals (or are possessed by them in some lesser degree).

While other animals possess a capacity for conscious experience, we believe that few of them (perhaps just a few mammalian species) are conscious of themselves as individuals in anything like the way we are. None possesses our deep, sometimes troubled, sometimes joyous, inner lives. In addition, we have capacities for reason, choice, life planning, caring, emotions, language, and the many practices of cooperation, technology, tradition and art that comprise our cultures—all to an extent that makes the equivalent capacities of other animals, even chimpanzees and bonobos, seem rudimentary.

By our DNA, then, we are identifiable as biologically human. But it is the vulnerabilities and capacities that I have tried to sketch out—I’ll call them our morally significant characteristics—that make us moral beings. Non-human animals possess some morally significant characteristics, but not the full range possessed by humans, which entitles us to a distinctive level of moral respect, kindness and consideration.

In addition, some of our morally significant characteristics, such as our superior capacities for thought and feeling, enable us to understand that moral constraints apply to us. Our nature as moral beings consists in both our special entitlement to moral respect and our capacity to understand and assume moral burdens.

Problems arise, however, when we notice that there is an incomplete congruity between our species membership and our moral status. To begin with, our species membership alone is not sufficient for us to have all of the morally significant characteristics that I have identified. Not all biologically or genetically human entities possess the capacities for reason, choice and life planning, or an inner life of any depth. All these things develop after conception. Indeed, a human zygote or an early embryo does not even have a nervous system; it cannot suffer physically or emotionally, as can a chimpanzee or a kitten. Perhaps we owe a zygote or an embryo some moral respect, but this is not obvious. Indeed, it is notoriously controversial. Different intuitions about such questions lead people to different beliefs about abortion, the withdrawal of medical life support from those in persistent vegetative states, and the fate of severely disabled newborn babies—among other heated issues of social policy.

Furthermore, biological humanity is not always necessary for moral worth, and it need not be a prerequisite for moral responsibility. In theory, at least, there could be fully moral beings that are not human. I have already noted that many non-human animals are capable of physical suffering, and this alone entitles them to some moral respect. Indeed, as every pet owner knows, at least some non-human animals seem capable of emotions and emotional suffering. What’s more, their suffering cannot simply be ignored; it imposes moral constraints on how we can treat them. Admittedly, no non-human animals that we have encountered on Earth are fully moral beings, but some great apes may come fairly close. Even though they do not possess the full range of relevant characteristics and cannot be taught moral concepts, we must treat them with considerable moral respect. Peter Singer has argued that we should extend the most basic ethical and legal principles, such as a right to life, a right not to be tortured, and a right not to be imprisoned without due process, to all of the great apes. Ultimately, we should try to close the perceived moral gap between ourselves and all non-human animals. (2)

But what about intelligent entities that we may create in future—or that we or our descendants may even become: individuals with altered biology (‘mutants’), or with bodies that are partly organic, partly an assemblage of mechanical and cybernetic devices (‘cyborgs’), or with fully non-biological ‘brains’ (so-called ‘AIs’—‘artificial intelligences’—or, if they more or less resemble us in gross morphology, ‘androids’)? I contend that all of these might be fully moral beings, irrespective of whether they are, in other ways, human.

To some extent, we are already mutants and cyborgs. Through vaccination, we have used technology to enhance our natural immunity to certain diseases. The contraceptive pill alters the functioning of the female reproductive system by artificial means. Moreover, we have blurred the boundaries between our bodies and the inorganic world with contact lenses, tooth fillings, implanted teeth, pacemakers, prostheses and numerous surgical devices. I expect we will go much further in transforming our biology—probably by tweaking our genetic code—and in merging our bodies with non-biological artefacts.

As we continue to do this, are we losing anything valuable? That question will confront us increasingly as technologies that can transform or invade our bodies become more powerful and more pervasive. Similar issues have long been dramatised and debated in science fiction novels, short stories and movies—with their vivid representations of mutants, cyborgs, AIs and androids, not to mention creatures from other planets—but the debate is now spilling over into the mainstream intellectual culture. It is finding its way into books and articles by philosophers, bioethicists, lawyers, cultural critics, and thinkers in many other fields.

Some, though I think a minority, are strongly inclined to embrace radical changes that will accelerate the processes of biological mutation and ‘cyborgisation’. There is now a social and philosophical movement, transhumanism, which specifically advocates the enhancement of human beings through technological means, with goals that include all of the following: an increased maximum lifespan; enhanced levels of health; and improved physical and cognitive abilities—all beyond the upper level of what, historically, has been the spectrum of normal human functioning. Other thinkers are hostile to such developments or more sceptical about them.

In August 2003 I was privileged to attend a forum entitled ‘Debating the Future’, organised in Toronto by a local transhumanist group with the perhaps slightly frightening name ‘Betterhumans’. I don’t find the name all that frightening, but it does have unwanted connotations of the twentieth-century eugenics movement. Indeed, it is difficult to discuss transhumanism at all without evoking the spectre of discredited eugenic theories, though the two have almost nothing in common. In particular, most transhumanists are strongly opposed to state intervention in reproductive decisions and family formation, and their technological ambitions have nothing to do with racial theorising about ‘higher’ and ‘lower’ categories of humanity.

The event in Toronto was a debate between an eminent Australian-Canadian bioethicist, Margaret Somerville, and an equally eminent American bioethicist, James Hughes, secretary of the World Transhumanist Association. They argued about the ethical implications of human cloning, radical life extension, nanotechnological engineering (the fabrication of products by manipulation of individual molecules or atoms) and the genetic engineering of human beings. Events such as ‘Debating the Future’ will doubtless become increasingly common. It’s time to begin them in Australia.

This forum was accompanied by two lengthy op-ed pieces, written by Somerville and Hughes and published on the same day in the Canadian newspaper the Globe and Mail. (3) Somerville’s piece displays a conservative approach to technological innovation. Its main contention is that ‘we must have a profound respect for the natural—especially for human nature itself’. Somerville fears that we are somehow risking our humanity by going down a path of mutating ourselves biologically, cybernetically and otherwise. In her presentation in Toronto, she appeared to condemn cloning as a method of reproduction partly on the basis that we must show moral respect not only to ‘the natural’ but also, more specifically, to the natural method of reproduction. If that is the argument, it seems to presuppose that this method could have some interest or value of its own that could be harmed if we chose to reproduce in other ways, but that assumption cannot be taken seriously.

Our status as fully moral beings is not in danger. In principle, we might alter our biology so much that we are no longer genetically Homo sapiens. Alternatively, or perhaps in addition, we might enhance every part of our bodies with non-biological devices, from contact lenses that give us better-than-normal eyesight to tiny robots in our bloodstreams to assist in cell repair and in combating infection. Yet, however much we change, nobody is talking about transforming us into mutants or cyborgs that are less capable of reason, choice, plans, emotions and culture. That would be ‘dehumanising’ in an easily understandable sense, but no sane person would propose such a thing. If there is any danger of that kind of dehumanisation happening unintentionally, which I rather doubt, it is more likely to come from the advertising industry than from the transhumanist movement.

No matter how much we might be enhanced—and might be able to avoid some sources of pain and disease—we would still be capable of suffering. We might be able to live longer, but we would eventually die, and death would still be a misfortune as long as we remained attached to life by our relationships, personal projects and individual fascinations. However much we become mutants or cyborgs in future decades, we will remain fully moral beings and should treat each other just as we should treat fellow human beings right now: with kindness and consideration—with humanity.

That does not mean that there are no dangers at all in continuing to mutate and ‘cyborgise’ ourselves. Somerville and others have raised one legitimate concern that must be confronted in any discussion of technologies that have the potential to enhance human abilities: the issue of social justice. The fear is that the benefits of enhancement would be distributed very unevenly, increasing the abilities, and furthering the interests, only of those who are already wealthy, powerful and well educated. This could increase divisions within societies and between rich and poor societies.

The issue of social justice must be thought through before we go too far in changing ourselves, or in establishing the rules by which it happens. Nonetheless, I am reasonably comfortable with the prospect of human enhancement, and with incremental changes to our bodies and minds. We can deal with the problems, and nothing of deep significance will be lost.

AIs and androids might be another matter. Is it possible, even in principle, to create them in the sense that I have defined: as entities that would be conscious and self-aware, but with inorganic ‘brains’? Could AIs and androids ever have the status of fully moral beings? Philosophers of mind continue to argue about whether our consciousness and related mental characteristics could be computational phenomena, or whether consciousness depends upon something else about our chemistry and the stuff of which we are composed. (4) Though the issue is very far from settled, it is arguable that our mental characteristics do not arise from our chemical composition, but from our extremely complex organisation and functioning. In principle, these could be replicated by the functioning of sufficiently powerful computer hardware.

Simulation, of course, is not the same as replication. After all, a computer model of a tornado, however accurate and detailed, does not possess the twister’s ability to uproot trees or toss around motor cars. However, David J. Chalmers, a leading philosopher of mind, argues persuasively that simulation is replication in the case of consciousness. (5) He proposes that some properties, including mental characteristics, are ‘organisationally invariant’ across underlying systems, whether physical or computational. I am attracted to this argument, since there seems to be no reason for a mind to ‘need’ one underlying system rather than another. It is not like the tornado, which needs air to cause damage in the real world.

If this is all so, it has the dramatic implication that we might be able to devise artificial, non-biological intelligences that possess consciousness—and other characteristics that, taken together, imply moral worth and a capacity for moral understanding. In short, AIs and androids would be non-biological entities—certainly not biologically human—but they might be fully moral beings, like ourselves. If they existed, we would owe them moral respect, and demand that it be reciprocated.

No deep moral law is broken if we create such beings. Indeed, they might enrich our lives and our culture. Yet a cautionary note seems in order. There is more than a grain of wisdom in the admittedly lurid Hollywood movies that deal with futuristic, non-human intelligences. Blade Runner (1982), for example, depicts conflict between humans and the android beings known as ‘replicants’. Interestingly, the line between humans and replicants often seems to be blurred. The anti-hero, Deckard, seems replicant-like in some of his attitudes and actions (many fans of the movie will argue at length that he must be a replicant); conversely, the main replicant characters, Batty, Rachael and Pris, seem almost human. In Blade Runner, what matters most is not biological humanity but the complex of characteristics that give us our moral status, including the ability to suffer and to yearn, as Batty and the other replicants certainly do.

Nonetheless, humans and replicants stand opposed in the near-future world of Blade Runner. In the real world, creating conscious, purposive beings that are radically different from and discontinuous with ourselves would seem a risky step. These beings might have ambitions diametrically opposed to our own, leading to a situation in which they became our competitors and enemies. Hollywood moviemakers are not merely being melodramatic when they show this in the Terminator series (1984 and after) or The Matrix (1999) and its inevitable sequels (and the related Animatrix set of short movies). These depict desolate future worlds in which posthuman machine intelligences have rebelled against their human creators.

If machine intelligences—such as AIs and androids—did not become our enemies, they might, instead, end up as our suffering slaves, as in the poignant (if tonally uneven) Kubrick-Spielberg movie, A.I. Artificial Intelligence (2001). Neither option—the creation of enemies or of slaves—is acceptable. If those are the likely outcomes, we would be unwise to create AIs and androids. Better to stick with enhancing our own abilities, and becoming more completely mutants and cyborgs. Still, whether it is wise or unwise to bring them into existence, all of these kinds of transhuman or posthuman entities could, in principle, be fully moral beings. They could have characteristics that demand our moral respect; they could, themselves, be capable of acting morally.

Whether or not any of them were still biologically Homo sapiens, or whether they were biological entities at all, our best way of treating mutants and cyborgs, AIs and androids would be to integrate them, as far as possible, into our own societies. They might become citizens alongside us, albeit unusual ones with impressive abilities. For practical purposes, we would do best to treat them all as human.

NOTES

(1.) Singer, The Expanding Circle: Ethics and Sociobiology (New York, 1991), pp. 3-4.

(2.) Singer, Writings on an Ethical Life (London, 2001), pp. 81-5.

(3.) Somerville, ‘How perfect do we want to be?’ and Hughes, ‘The human condition hurts: We’d be fools not to better it’, Globe and Mail, 29 August 2003.

(4.) John Searle is the best-known opponent of computationalist theories of consciousness. See, for example, his ‘Minds, Brains, and Programs’, Behavioral and Brain Sciences 3 (1980), pp. 417-24.

(5.) Chalmers, The Conscious Mind: In Search of a Fundamental Theory (Oxford, 1996), pp. 327-8.


Russell Blackford Ph.D. is a fellow of the IEET, an attorney, science fiction author and critic, philosopher, and public intellectual. Dr. Blackford serves as editor-in-chief of the IEET's Journal of Evolution and Technology. He lives in Newcastle, Australia, where he is a Conjoint Lecturer in the School of Humanities and Social Science at the University of Newcastle.


