Physical human enhancement is usually perceived as a morally insignificant topic, especially in the rare instance when it is considered outside the realm of competitive sport. Nick Bostrom explains the physical enhancement literature’s narrow focus by noting that “the value of such enhancement outside the sporting and cosmetic arenas is questionable” (2008, 131). In the present paper, I argue that this perception is a result of limitations inherent to the ethical paradigms under which bioethical analysis is commonly done. It is unsurprisingly difficult to find moral value in brute physical capacity when we tend to attach the tags “moral” and “ethical” only to interpersonal, especially altruistic, relations. I proceed to describe Aristotle’s ethical paradigm as having a wider scope, and present his apparently self-contradictory views on the moral value of physical excellence. I then sketch a modified Aristotelian theory, which consistently affirms the value of human physical and mental activity alike, and show how an Aristotelian emphasis on human function can reveal physical human enhancement to be a tap into intrinsic moral value.
This paper examines the responses to advanced and transformative technologies in military literature, attenuates the conclusions of earlier work suggesting that there is an “ignorance of transhumanism” in the military, and updates the current layout of transhuman concerns in military thought. The military is not ignorant of transhuman issues and implications, though there was evidence for this in the past; militaries and non-state actors (including terrorists) increasingly use disruptive technologies with what we may call transhuman provenance.
Beginning as pockets of anaerobic bacteria subsisting on geothermal energy on the ocean floor, life expanded first throughout the ocean, then over the land, and eventually came to cover the entire Earth. In this paper, I argue that human activity in outer space should be understood in the context of this progression: life as an exponentially expanding force of negentropy currently contained within the atmosphere of the Earth, and human technology as a radical transformation whereby life becomes capable of expanding beyond this limit. With reference to the philosophy of Krafft Ehricke, I argue that this position represents a synthesis between deep ecology and technological civilization: as with deep ecology, human beings are seen as having duties toward life; however, these duties consist not only in protecting the biosphere, but also in developing techno-biological living systems capable of reproducing in the ambient matter of the solar system.
As you may be able to tell from the title of this afterword, I am a Star Trek fan (aka a “Trekkie”); I was always fascinated by the concept of the “Vulcan mind meld.” And now, technologies that may enable us to open “a window into the movies in our minds” are becoming a reality.
Children surviving neural injuries face challenges not seen by their adult counterparts, namely that they experience neural injury before reaching neurodevelopmental maturity. Neural prostheses offer one possible path to recovery, along with the potential for functional outcomes that could exceed expectations. Although the first cochlear implant was placed more than fifty years ago, the field of neuroprosthetics is still relatively young. Several types of neural prostheses are in development stages ranging from animal models to (adult) human trials. In this paper, I discuss how neural prostheses may assist recovery for children surviving neural injury. I argue that approaching the use of neural prosthetics in children with considerations derived from transhumanism alongside traditional bioethics can provide an opportunity to reframe adult-focused ethics toward a child/family focus and to strip away the prejudicial metaphor of cyborgization.
While it seems unlikely that any method of guaranteeing human-friendliness (“Friendliness”) on the part of advanced Artificial General Intelligence (AGI) systems will be possible, this doesn’t mean the only alternatives are throttling AGI development to safeguard humanity, or plunging recklessly into the complete unknown. Without denying the presence of a certain irreducible uncertainty in such matters, it is still sensible to explore ways of biasing the odds in a favorable way, such that newly created AI systems are significantly more likely than not to be Friendly. Several potential methods of effecting such biasing are explored here, with a particular but non-exclusive focus on those that are relevant to open-source AGI projects, and with illustrative examples drawn from the OpenCog open-source AGI project. Issues regarding the relative safety of open versus closed approaches to AGI are discussed and then nine techniques for biasing AGIs in favor of Friendliness are presented.
Rapid neuroscientific advancement over the past 20 years has raised ethical, legal, and social issues that are not confined to the academic world, but are also part of public discourse. Questions about the use of neuroscientific techniques and novel neurotechnologies arise as we learn more about the brain and its relations to consciousness, emotion, behavior, the nature of the self, and our relations to others. Should neuroscience and neurotechnology be used to advance humanity, or will they be engaged as demiurge and ultimately push humanity toward some new, and perhaps unanticipated, reality? Irrespective of valence, the trajectory of neuroscience and neurotechnology will lead to a more neurocentrically dominated future. How will we address and navigate the possibilities and problems that this neurocentrism fosters? The emerging field of neuroethics may enable a more pragmatic understanding of these issues and perhaps lead to a more prudent resolution of the questions and problems that arise at the intersection of neuroscience, neurotechnology, and society. The two traditions of neuroethics – the study of the neural mechanisms of moral cognition and action (neuromorality), and the examination of the ethical and legal issues instantiated by applications of neuroscience and technology in the social sphere – may afford a meta-ethics of benefit at both individual and societal levels. Yet we posit that in order to meet these challenges, neuroethics must be international, multi-cultural, and multi-disciplinary, and not simply bound to philosophical dogma or defined by western ethical discourse. Moreover, neuroethics must not be an “after-the-fact” reflection or analysis, but should be engaged while neuroscientific and neurotechnological advances are still relatively nascent, so as to be ready for the reciprocal effects of neuroscience and neurotechnology as they are enacted, and as they are influenced by socio-cultural forces, on the world stage.
The human brain is in great part what it is because of the functional and structural properties of the 100 billion interconnected neurons that form it. These make it the body’s most complex organ, and the one we most associate with concepts of selfhood and identity. The assumption held by many supporters of human enhancement, transhumanism, and technological posthumanity seems to be that the human brain can be continuously improved, as if it were another one of our machines. In this paper, I focus on some of the ethical issues that we should keep in mind when thinking about memory enhancement interventions. I start with an overview of one of the most precious capacities of the brain, namely memory. Then I analyze the different kinds of memory interventions that exist or are under research. Finally, I point out the issues that we should not forget when we consider enhancing our memories. In this regard, my argument is not against memory enhancement interventions; rather, it concentrates on the need to “keep in mind” what kind of enhancements we want. We should consider whether we want the kind of “enhancements” that will end up making us lose synaptic connections, or the kind that promote greater use of them.
This paper affirms human enhancement in principle, but questions the inordinate attention paid to two particular forms of enhancement: life extension and raising IQ. The argument is not about whether these enhancements are possible or not; instead, I question the aspirations behind the denial of death and the stress on one particular type of intelligence: the logico-analytic. Death is a form of finitude, and finitude is a crucially defining part of human life. As for intelligence, Howard Gardner and Daniel Goleman show us the importance of multiple intelligences. After clarifying the notion of different psychological types, the paper takes five specimens of a distinct type and then studies the traits of that type through their examples. Seeking a pattern connecting those traits, the paper finds them bound together by the embrace of the computational metaphor for human cognition and then argues that the computational metaphor does not do a good job of describing human intelligence. Enlisting the works of Jaron Lanier and Ellen Ullman, the paper ends with a caution against pushing human intelligence toward machine intelligence, and points toward the human potential movement as a possible ally and wise guide for the transhumanist movement.
In this paper, I investigate suspension under two guises: digital and pharmaceutical. These two versions of suspension interrogate the limits of the body to different extents. The former highlights our increasing desire and need to externalize and supplement what our physical bodies are incapable of doing – perfect, un-influenced storage capacity. The latter example illustrates the continued need for the physical body, but shows that the demands on the body are changed with age or desire to activate or suppress biological processes.
The transhumanism project will gain momentum with advances in technology, in basic science and in philosophy, as well as in bioethics. However, there are minefields that jeopardize this progress – one such minefield is a fundamental problem in pure philosophy: fictional entities and how we refer to the nonexistent. In the absence of solutions to the problems that arise in this area of philosophy, progress in the technology necessary for augmented reality will be considerably impeded. I will argue there are forms of augmented reality that are metaphysically impossible and that believing that such forms are possible (both metaphysically and physically) creates a form of skepticism.
Enhancement technologies may someday grant us capacities far beyond what we now consider humanly possible. Nick Bostrom and Anders Sandberg suggest that we might survive the deaths of our physical bodies by living as computer emulations. In 2008, they issued a report, or “roadmap,” from a conference where experts in all relevant fields collaborated to determine the path to “whole brain emulation.” Advancing this technology could also aid philosophical research. Their “roadmap” defends certain philosophical assumptions required for this technology’s success, so by determining the reasons why it succeeds or fails, we can obtain empirical data for philosophical debates regarding our mind and selfhood. The scope ranges widely, so I merely survey some possibilities. In particular, I argue that this technology could help us determine (1) whether the mind is an emergent phenomenon, (2) whether analog technology is necessary for brain emulation, and (3) whether neural randomness is so wild that a complete emulation is impossible.
Objections to uploading may be parsed into substrate issues, which deal with the computer platform of the upload, and personal identity issues. This paper argues that the personal identity issues of uploading are no more or less challenging than those of bodily transfer often discussed in the philosophical literature. It is argued that what is important in personal identity involves both token and type identity. While uploading does not preserve token identity, it does save type identity; and even qua token, one may have good reason to think that the preservation of the type is worth the cost.
There is a debate about the possibility of mind-uploading – a process that purportedly transfers human minds and therefore human identities into computers. This paper bypasses the debate about the metaphysics of mind-uploading to address the rationality of submitting yourself to it. I argue that an ineliminable risk that mind-uploading will fail makes it prudentially irrational for humans to undergo it.
This special issue of JET deals with questions relating to our radically enhanced future selves or our possible “mind children” – conscious beings that we might bring about through the development of advanced computers and robots. Our mind children might exceed human levels of cognition, and avoid many human limitations and vulnerabilities.
Transhumanist visions appear to aim at invulnerability. We are invited to fight the dragon of death and disease, to shed our old, human bodies, and to live on as invulnerable minds or cyborgs. This paper argues that even if we managed to enhance humans in one of these ways, we would remain highly vulnerable entities given the fundamentally relational and dependent nature of posthuman existence. After discussing the need for minds to be embodied, the issue of disease and death in the infosphere, and problems of psychological, social and axiological vulnerability, I conclude that transhumanist human enhancement would not erase our current vulnerabilities, but instead transform them. Although the struggle against vulnerability is typically human and would probably continue to mark posthumans, we had better recognize that we can never win that fight and that the many dragons that threaten us are part of us. As vulnerable humans and posthumans, we are at once the hero and the dragon.