I argue that Heidegger’s account of technology as “enframing” is a helpful lens through which to understand the possible effects and dangers of transhumanism. Without resorting to nebulous concepts such as “dignity,” Heidegger’s analysis can help us understand how new technologies employed to modify the body, brain, and consciousness will enframe our own bodies and identities as something akin to “standing reserve.” Under transhumanism, the body is enframed as an external, technologically modifiable product. I indicate some of the problems that might arise when our own bodies no longer appear as central to our identity as embodied beings. Further, I argue that, by treating aspects of our own consciousness as technologically modifiable, we will be driven into a commodified and inauthentic relation to our identities. By examining the work of prominent transhumanists – including Brad Allenby, Daniel Sarewitz, and Andy Clark – I show how the threat that technology poses can be hidden when the essence of technology is not uncovered in a primordial way. I argue that by threatening to obscure death as a foundational possibility for Dasein, transhumanism poses the danger of hiding the need to develop a free and authentic relation to technology, Truth, and ultimately to Dasein itself.
The possibility of morally enhancing the behavior of individuals by means of drugs and genetic engineering has been the object of intense philosophical discussion over the last few years. However, although moral enhancement may turn out to be useful to promote cooperation in some areas of human interaction, it will not promote cooperation in the domain of international relations in those areas that are critical to state security. Unlike some moral enhancement theorists, I argue that, because of the structure of the system of states, moral enhancement cannot be used to avert such major threats to humankind as terrorism and nuclear conflict. My analysis of the political implications of moral enhancement is pursued through a critical discussion of two different versions of political realism, namely human nature realism and structural realism. I conclude that, as far as major threats to the survival of humankind are concerned, moral enhancement can at most be used as a means to change the present structure of the system of states.
Intelligent systems and devices are at the forefront of technological innovation and hold particular appeal for the creative imagination. Their appearance in the arts, fiction, and film allows one to glean insights into apprehensions regarding the contemporary human condition and concerns for its future. This study examines the loss of life and the absencing of the other as embodied in intelligent devices, as they are presented in three current, popular films. In these films, human fallibility and mortality provide the raison d’être for the creation and development of intelligent devices and systems. Moreover, the power that these technologies hold is in their evocation of lost life and human transience, and in the manner by which they both conceal and reveal absence and loss.
The Transhumanist Wager may inspire some useful debates among transhumanists and others concerned with the future of humanity, but I can’t wish it any influence on their thinking. I certainly hope it won’t be taken by outsiders as an accurate picture of transhumanism as a philosophy or a social movement.
Is it possible to create an artificial mind? Can a human or other biological mind be uploaded into computer hardware? Should these sorts of artificial intelligences be created, and under what circumstances? Would the AIs make the world better off? These and other deep but timely questions are raised by the recent film Transcendence (dir. Wally Pfister, 2014).
A stream in transhumanism argues that the aims of Buddhism and transhumanism are akin. It is the case that transhumanism contains religious tropes, and its parallels to Christianity are readily apparent. It does not share much, however, with Buddhism’s Zen tradition. Zen tends to focus its practitioners on becoming fully present and human, not on becoming transcendent, super-powered, or posthuman. This paper explores some of the tensions between transhumanism and Buddhism through the lens of Zen, and suggests that transhumanist Buddhists should be careful not to conflate moments of spiritual enlightenment with permanent techno-social transcendence.
The pleasure principle (PP) may be a verifiable fundamental law of living matter in the universe, and this law might then be used for forecasting human self-evolution. I do not pretend to “prove” PP, but argue that it must be regarded as a scientific hypothesis. Accordingly, I formulate verifiable and falsifiable postulates of PP. Their confirmation would allow the construction of a new scientific discipline, hedodynamics, that would be able to forecast the future development of human civilization and even the probable structure and psychology of other rational beings within the universe. I suggest basic hedodynamical scenarios for human (posthuman) civilization and argue that the discovery of the neural correlate of pleasure would provide more detailed forecasts. In particular, I demonstrate how the study of pleasure mechanisms might predict the degree of aggression in future societies. I conclude that PP may become a scientific basis for fundamental, rather than phenomenological (extrapolation-based), future forecasting on large timescales.
There exists a real dearth of literature available to Anglophones dealing with philosophical connections between transhumanism and Marxism. This is surprising, given the existence of works on just this relation in the other major European languages and the fact that 47 per cent of people surveyed in the 2007 Interests and Beliefs Survey of the Members of the World Transhumanist Association identified as “left,” though not strictly Marxist (Hughes 2008). Rather than seeking to explain this dearth here, I aim to contribute to filling it by identifying three fundamental areas of similarity between transhumanism and Marxism. These are: the importance of material conditions, particularly technological advancement, for revolution; conceptions of human nature; and conceptions of nature in general. While it is true that both Marxism and (especially) transhumanism are broad fields that encompass diverse positions, even working with somewhat generalized characterizations of the two reveals interesting parallels and dissimilarities fruitful for future work.
This comparison also shows that transhumanism and Marxism can learn important lessons from one another that are complementary to their respective projects. I suggest that Marxists can learn from transhumanists two lessons: that some “natural” forces may become reified forces and the extent to which the productive apparatus is now relevant to revolution. Transhumanists, on the other hand, can learn from Marxist theory the essentially social nature of the human being and the ramifications this has for the transformation of the human condition and for the forms of social organization compatible with transhumanist aims. Transhumanists can also benefit from considering the relevance of Marx’s theory of alienation to their goals of technological advancement.
Is sex work (specifically, prostitution) vulnerable to technological unemployment? Several authors have argued that it is. They claim that the advent of sophisticated sexual robots will lead to the displacement of human prostitutes, just as, say, the advent of sophisticated manufacturing robots has displaced many traditional forms of factory labour. But are they right? In this article, I critically assess the argument that has been made in favour of this displacement hypothesis. Although I grant the argument a degree of credibility, I argue that the opposing hypothesis -- that prostitution will be resilient to technological unemployment -- is also worth considering. Indeed, I argue that increasing levels of technological unemployment in other fields may well drive more people into the sex work industry. Furthermore, I argue that no matter which hypothesis you prefer -- displacement or resilience -- you can make a good argument for the necessity of a basic income guarantee, either as an obvious way to correct for the precarity of sex work, or as a way to disincentivise those who may be drawn to prostitution.
This article explores the impact of both technological unemployment and a basic income on the provision of services of general interest. A basic income may promote the restructuring of production into postcapitalist forms and projects involving peer production. This change, as well as technological unemployment, will result in lower state and market capacities to provide services. Instead, people will create various forms of self-organization to meet their needs. The paper presents examples of such models and sketches some of the new forms of inequality that may arise in this system, in order to inspire further study of this scenario.
The aim of this article is to explore the possible futures generated by the development of artificial intelligence. Our focus will be on the social consequences of automation and robotisation, with special attention being paid to the problem of unemployment. Although this investigation is mainly speculative in character, we will try to develop our analysis in a methodologically sound way. First, we will make clear that the relation between technology and structural unemployment is still controversial; the hypothetical character of this relation must therefore be fully recognized. Secondly, as proper scenario analysis requires, we will not limit ourselves to predicting a single future, but will extrapolate from present data at least four different possible developments: 1) unplanned end of work scenario; 2) planned end of robots scenario; 3) unplanned end of robots scenario; and 4) planned end of work scenario. Finally, we will relate the possible developments not just to observed trends but also to social and industrial policies presently at work in our society which may change the course of these trends.
The aim of this investigation is to determine whether there is a relation between automation and unemployment within the Italian socio-economic system. Italy ranks second in Europe and fourth in the world in robot density, yet among the G7 it is the nation with the highest rate of youth unemployment. Establishing the ultimate causes of unemployment is a very difficult task, and the notion itself of ‘technological unemployment’ is controversial. Mainstream economics tends to relate the high rate of unemployment that characterises Italian society to the low flexibility of the labour market and the high cost of manpower. Little attention is paid to the impact of artificial intelligence on the level of employment. With reference to statistical data, we will try to show that automation can be seen at least as a contributory cause of unemployment. In addition, we will argue that Luddism and anti-Luddism are two faces of the same coin. In both cases attention is focused on technology itself (the means of production) instead of on the system (the mode of production). Banning robots or denying the problems of robotisation are not effective solutions. A better approach would combine growing automation with a more rational redistribution of income.
The notion that humans have a right to basic capital or to a basic income guarantee by virtue of their existence can be traced to the Enlightenment. Many of the suggestions inherent in modern proposals for basic income or basic capital originated with four forerunners in the Anglo-American tradition: Gerrard Winstanley, Thomas Paine, Thomas Skidmore, and Edward Bellamy. All four embraced the notion that the equal moral considerability of all humans implied an equal right to the resources needed to survive, and were subjected to withering criticism of their ideals on the grounds that the provision of basic resources conflicted with, rather than enhanced, freedom.
Robotics and artificial intelligence are beginning to fundamentally change the relative profitability and productivity of investments in capital versus human labor, creating technological unemployment at all levels of the workforce, from the North to the developing world. As robotics and expert systems become cheaper and more capable, the percentage of the population that can find employment will also fall, stressing economies already trying to curtail "entitlements" and adopt austerity. Two additional technology-driven trends will exacerbate the structural unemployment crisis in the coming decades: desktop manufacturing and anti-aging medicine. Desktop manufacturing threatens to disintermediate the half of all workers involved in translating ideas into products in the hands of consumers, while anti-aging therapies will increase the old age dependency ratio of retirees to tax-paying workers. Policies that are being proposed to protect or create employment will have only a temporary moderating effect on job loss. Over time these policies, which will raise costs, lower the quality of goods and services, and lower competitiveness, will become fiscally impossible and lose political support. In order to enjoy the benefits of technological innovation and longer, healthier lives we will need to combine policies that control the pace of replacing paid human labor with a universal basic income guarantee (BIG) provided through taxation and the public ownership of wealth. The intensifying debate over the reform of "entitlements" will be the strategic opening for a campaign for BIG to replace disability and unemployment insurance, Social Security, and other elements of the welfare state.
Gary E. Marchant, Yvonne A. Stevens and James M. Hennessy
There is growing concern that emerging technologies such as computers, robotics and artificial intelligence are displacing human jobs, creating an epidemic of “technological unemployment.” While this projection has yet to be confirmed, if true it will have major economic and social repercussions for our future. It is therefore appropriate to begin identifying policy options to address this potential problem. This article offers an economic and social framework for addressing this problem, and then provides an inventory of possible policy options organized into the following six categories: (a) slowing innovation and change; (b) sharing work; (c) making new work; (d) redistribution; (e) education; and (f) fostering a new social contract.
The paper rehearses arguments for and against the prediction of massive technological unemployment. The main argument in favor is that robots are entering a large number of industries, making costlier human labor redundant. The main argument against the prediction is that for two hundred years we have seen a massive increase in productivity with no long-term structural unemployment caused by automation. The paper attempts to move past this argumentative impasse by asking what humans contribute to the supply side of the economy. Historically, humans have contributed muscle and brains to production, but we are now being outcompeted by machinery, in both areas, in many jobs. It is argued that this supports the conjecture that massive unemployment is a likely result. It is also argued that a basic income guarantee is a minimal remedial measure to mitigate the worst effects of technological unemployment.
The question is a simple one: if in the future robots take most people’s jobs, how will human beings eat? The answer that has been more or less obvious to most of those who have taken the prospect seriously has been that society’s wealth would need to be re-distributed to support everyone as a citizen’s right. That is the proposition we used to frame this special issue of the journal, and the contributors have explored new and important dimensions of the equation.
Physical human enhancement is usually perceived as a morally insignificant topic, especially in the rare instance when it is considered outside the realm of competitive sport. Nick Bostrom explains the physical enhancement literature’s narrow focus by noting that “the value of such enhancement outside the sporting and cosmetic arenas is questionable” (2008, 131). In the present paper, I argue that this perception is a result of limitations inherent to the ethical paradigms under which bioethical analysis is commonly done. It is unsurprisingly difficult to find moral value in brute physical capacity when we tend to attach the tags “moral” and “ethical” only to interpersonal, especially altruistic, relations. I proceed to describe Aristotle’s ethical paradigm as having a wider scope, and present his apparently self-contradictory views on the moral value of physical excellence. I then sketch a modified Aristotelian theory, which consistently affirms the value of human physical and mental activity alike, and show how an Aristotelian emphasis on human function can reveal physical human enhancement to be a tap into intrinsic moral value.
This paper examines the responses to advanced and transformative technologies in military literature, attenuates the conclusions of earlier work suggesting that there is an “ignorance of transhumanism” in the military, and updates the current layout of transhuman concerns in military thought. The military is not ignorant of transhuman issues and implications, though there was evidence for this in the past; militaries and non-state actors (including terrorists) increasingly use disruptive technologies with what we may call transhuman provenance.
Beginning as pockets of anaerobic bacteria subsisting on geothermal energy on the ocean floor, life expanded first throughout the ocean, then over the land, and eventually came to cover the entire Earth. In this paper, I argue that human activity in outer space should be understood in the context of this progression: life as an exponentially expanding force of negentropy currently contained within the atmosphere of the Earth, and human technology as a radical transformation whereby life becomes capable of expanding over this limit. With reference to the philosophy of Krafft Ehricke, I argue that this position represents a synthesis between deep ecology and technological civilization: as with deep ecology, human beings are seen as having duties toward life; however, these duties consist not only in protecting the biosphere, but also in developing techno-biological living systems capable of reproducing in the ambient matter of the solar system.
As you may be able to tell from the title of this afterword, I am a Star Trek fan (aka a “Trekkie”); I was always fascinated by the concept of the “Vulcan mind meld.” And now, technologies that may enable us to open “a window into the movies in our minds” are becoming a reality.
Children surviving neural injuries face challenges not seen by their adult counterparts, namely that they experience neural injury before reaching neurodevelopmental maturity. Neural prostheses offer one possible path to recovery, along with the potential for functional outcomes that could exceed expectations. Although the first cochlear implant was placed more than fifty years ago, the field of neuroprosthetics is still relatively young. Several types of neural prostheses are in development stages ranging from animal models to (adult) human trials. In this paper, I discuss how neural prostheses may assist recovery for children surviving neural injury. I argue that approaching the use of neural prosthetics in children with considerations derived from transhumanism alongside traditional bioethics can provide an opportunity to reframe adult-focused ethics toward a child/family focus and to strip away the prejudicial metaphor of cyborgization.
While it seems unlikely that any method of guaranteeing human-friendliness (“Friendliness”) on the part of advanced Artificial General Intelligence (AGI) systems will be possible, this doesn’t mean the only alternatives are throttling AGI development to safeguard humanity, or plunging recklessly into the complete unknown. Without denying the presence of a certain irreducible uncertainty in such matters, it is still sensible to explore ways of biasing the odds in a favorable way, such that newly created AI systems are significantly more likely than not to be Friendly. Several potential methods of effecting such biasing are explored here, with a particular but non-exclusive focus on those that are relevant to open-source AGI projects, and with illustrative examples drawn from the OpenCog open-source AGI project. Issues regarding the relative safety of open versus closed approaches to AGI are discussed and then nine techniques for biasing AGIs in favor of Friendliness are presented.
Rapid neuroscientific advancement over the past 20 years has led to increased ethical, legal and social issues that are not confined to the academic world, but also are part of public discourse. Questions arise about the use of neuroscientific techniques and novel neurotechnologies as we learn more about the brain and its relations to consciousness, emotion, behavior and the nature of self and relation to others. Should neuroscience and neurotechnology be used to advance humanity, or will it be engaged as demiurge and ultimately push humanity towards some new, and perhaps unanticipated reality? Irrespective of valence, the trajectory of neuroscience and neurotechnology will lead to a more neurocentrically-dominated future. How will we address and navigate the possibilities and problems that this neurocentricism fosters? The emerging field of neuroethics may enable a more pragmatic understanding of these issues and perhaps lead to a more prudent resolution of the questions and problems that arise at the intersection of neuroscience, neurotechnology and society. The two traditions of neuroethics – the study of the neural mechanisms of moral cognition and actions (neuromorality), and the treatment of ethical and legal issues instantiated by applications of neuroscience and technology in the social sphere – may afford a meta-ethics that will be of benefit at both individual and societal levels. Yet, we posit that in order to meet these challenges, neuroethics must be international, multi-cultural and multi-disciplinary, and not simply bound to philosophical dogma or defined by western ethical discourse. Moreover, neuroethics must not be an “after-the-fact” reflection or analysis, but should be engaged while neuroscientific and neurotechnological advances are still relatively nascent, in order to be ready for the reciprocal effects of neuroscience and neurotechnologies as they are enacted and as they are influenced by socio-cultural forces on the world stage.
The human brain is in great part what it is because of the functional and structural properties of the 100 billion interconnected neurons that form it. These make it the body’s most complex organ, and the one we most associate with concepts of selfhood and identity. The assumption held by many supporters of human enhancement, transhumanism, and technological posthumanity seems to be that the human brain can be continuously improved, as if it were another one of our machines. In this paper, I focus on some of the ethical issues that we should keep in mind when thinking about memory enhancement interventions. I start with an overview of one of the most precious capacities of the brain, namely memory. Then I analyze the different kinds of memory interventions that exist or are under research. Finally, I point out the issues that we should not forget when we consider enhancing our memories. In this regard, my argument is not against memory enhancement interventions; rather, it concentrates on the need to “keep in mind” what kind of enhancements we want. We should consider whether we want the kind of “enhancements” that will end up making us lose synaptic connections, or the kind that promote more use of them.
This paper affirms human enhancement in principle, but questions the inordinate attention paid to two particular forms of enhancement: life extension and raising IQ. The argument is not about whether these enhancements are possible or not; instead, I question the aspirations behind the denial of death and the stress on one particular type of intelligence: the logico-analytic. Death is a form of finitude, and finitude is a crucially defining part of human life. As for intelligence, Howard Gardner and Daniel Goleman show us the importance of multiple intelligences. After clarifying the notion of different psychological types, the paper takes five specimens of a distinct type and then studies the traits of that type through their examples. Seeking a pattern connecting those traits, the paper finds them bound together by the embrace of the computational metaphor for human cognition and then argues that the computational metaphor does not do a good job of describing human intelligence. Enlisting the works of Jaron Lanier and Ellen Ullman, the paper ends with a caution against pushing human intelligence toward machine intelligence, and points toward the human potential movement as a possible ally and wise guide for the transhumanist movement.
In this paper, I investigate suspension under two guises: digital and pharmaceutical. These two versions of suspension interrogate the limits of the body to different extents. The former highlights our increasing desire and need to externalize and supplement what our physical bodies are incapable of doing – perfect, un-influenced storage capacity. The latter example illustrates the continued need for the physical body, but shows that the demands on the body are changed with age or desire to activate or suppress biological processes.
The transhumanism project will gain momentum with advances in technology, in basic science and in philosophy, as well as in bioethics. However, there are minefields that jeopardize this progress – one such minefield is a fundamental problem in pure philosophy: fictional entities and how we refer to the nonexistent. In the absence of solutions to the problems that arise in this area of philosophy, progress in the technology necessary for augmented reality will be considerably impeded. I will argue there are forms of augmented reality that are metaphysically impossible and that believing that such forms are possible (both metaphysically and physically) creates a form of skepticism.
Enhancement technologies may someday grant us capacities far beyond what we now consider humanly possible. Nick Bostrom and Anders Sandberg suggest that we might survive the deaths of our physical bodies by living as computer emulations. In 2008, they issued a report, or “roadmap,” from a conference where experts in all relevant fields collaborated to determine the path to “whole brain emulation.” Advancing this technology could also aid philosophical research. Their “roadmap” defends certain philosophical assumptions required for this technology’s success, so by determining the reasons why it succeeds or fails, we can obtain empirical data for philosophical debates regarding our mind and selfhood. The scope ranges widely, so I merely survey some possibilities, namely, I argue that this technology could help us determine (1) if the mind is an emergent phenomenon, (2) if analog technology is necessary for brain emulation, and (3) if neural randomness is so wild that a complete emulation is impossible.