Debate about the merits of enhancement tends to be pretty binary. There are some — generally called bioconservatives — who are opposed to it; and others — transhumanists, libertarians and the like — who embrace it wholeheartedly. Is there any hope for an intermediate approach? One that doesn’t fall into the extremes of reactionary rejection or uncritical endorsement?
Should animals be permitted to hunt and kill other animals? Some futurists believe that humans should intervene, and solve the “problem” of predator vs. prey once and for all. We talked to the man who wants to use radical ecoengineering to put an end to the carnage. A world without predators certainly sounds extreme, and it is. But British philosopher David Pearce can’t imagine a future in which animals continue to be trapped in the never-ending cycle of blind Darwinian processes.
In this entry, I take a look at Bostrom’s orthogonality thesis. As we shall see, this thesis is central to his claim that superintelligent AIs could pose profound existential risks to human beings. But what does the thesis mean and how plausible is it?
There may be as many as 80,000 American prisoners currently locked up in a SHU, or segregated housing unit. Solitary confinement in a SHU can cause irreversible psychological effects in as little as 15 days. Here’s what social isolation does to your brain, and why it should be considered torture.
Overview of Advances Articulated in Nanomedical Device and Systems Design: Challenges, Possibilities, Visions (2013)  This article provides an overview of the research findings related to cognitive enhancement that are presented in Nanomedical Device and Systems Design: Challenges, Possibilities, Visions (2013), an encyclopedic textbook chronicling a plethora of recent advances in myriad areas of nanotechnology and nanomedicine. The final chapter discusses progress in nanomedical cognitive enhancement, a modern era in which many technologies appear to be on the cusp of helping to resolve pathologies, while also holding much future potential for the augmentation of human capabilities.
Dr. Kate Darling is a Research Specialist at the Massachusetts Institute of Technology (MIT) Media Lab and a Fellow at the Harvard Berkman Center for Internet & Society and the Yale Information Society Project.
Some futurists and science fiction writers predict that we’re on the cusp of a world-changing “Technological Singularity.” Skeptics say there will be no such thing. Today, I’ll be debating author Ramez Naam about which side is right.
The DARPA-funded program launches this month at two prestigious locations, UC San Francisco (UCSF) and Massachusetts General Hospital (MGH). This $26 million, multi-institutional research program was announced last October by the President as our best chance at reducing the damage caused by a wide range of brain disorders, including Parkinson's disease, Alzheimer's, and other dementia-related illnesses.
Mikey is a roboticist who is promoting the idea of Consciousness Hacking which, in the spirit of the Maker Movement, encourages people to build new tools for exploring and altering the way we think, feel and live.
More than 200 participants from North America, Europe and Asia met in post-Olympic Sochi for five days this April, as world-famous anti-aging researchers exchanged ideas at the third International Conference on Genetics of Aging and Longevity. They discussed progress and remaining obstacles, in their efforts to deepen our understanding of this complex phenomenon and develop strategies for interventions.
The Cryonics Society of Canada was created by Douglas Quinn in 1987. Two years prior, he became the first contracted Canadian cryonicist, and went on to be the president of the CSC (Cryonics Society of Canada) and editor of the Canadian Cryonics News. One of the early ideas in cryonics circles which he advocated for was the concept of permafrost burial as a low-cost alternative to standard cryopreservation, using areas of northern Canada where the ground never thaws below a certain depth.
In 1774, Goethe published the novel The Sorrows of Young Werther. The novel consists of a series of letters from a young, sensitive artist by the name of Werther. Over the course of these letters, we learn that Werther has become involved in a tragic love triangle. He believes that in order to resolve the love triangle, some member of it will have to die. Not being inclined to commit murder, Werther resolves to kill himself. This he duly does by shooting himself in the head.
One of the projects I worked on for the Institute for the Future's 2014 Ten-Year Forecast was Magna Cortica, a proposal to create an overarching set of ethical guidelines and design principles to shape the ways in which we develop and deploy the technologies of brain enhancement over the coming years. The forecast seemed to strike a nerve for many people—a combination of the topic and the surprisingly evocative name, I suspect. Alexis Madrigal at The Atlantic Monthly wrote a very good piece on the Ten-Year Forecast, focusing on Magna Cortica, and Popular Science subsequently picked up on the story. I thought I'd expand a bit on the idea here, pulling in some of the material I used for the TYF talk.
Perhaps, as Prof. Stephen Hawking thinks, it may be difficult to “control” Artificial Intelligence (AI) in the long term. But perhaps we shouldn’t “control” the long-term development of AI, because that would be like preventing a child from becoming an adult, and that child is you.
This is going to be the final part in my series on Nicholas Agar’s book Truly Human Enhancement. In the most recent entry, I went through the first part of the argument in chapter 4. To briefly recap, that argument contends that radical enhancement may lead to the disintegration of personal identity (in either a metaphysical or evaluative sense).
If we extended our lives by 200 years, or if we succeeded in uploading our minds to an artificial substrate, would we undermine our sense of personal identity? If so, would it be wiser to avoid such radical forms of enhancement? These are the questions posed in chapter 4 of Nicholas Agar’s book Truly Human Enhancement. Over the next two posts I’ll take a look at Agar’s answers. This is all part of my ongoing series of reflections on Agar’s book.
Historians place the beginning of culture about 10,000 years ago, when our early ancestors abandoned hunting and gathering in favor of settling into communities, cultivating crops, and domesticating livestock.
I recently published an article in the Journal of Evolution and Technology on the topic of sex work and technological unemployment (available here, here and here). It began by asking whether sex work, specifically prostitution (as opposed to other forms of labour that could be classified as “sex work”, e.g. pornstar or erotic dancer), was vulnerable to technological unemployment. It looked at contrasting responses to that question, and also included some reflections on technological unemployment and the basic income guarantee.
This is the second post in my series on Nicholas Agar's new book Truly Human Enhancement. The book offers an interesting take on the enhancement debate. It tries to carve out a middle ground between bioconservatism and transhumanism, arguing that modest enhancement (within or slightly beyond the range of human norms) is prudentially valuable, but that radical enhancement (well beyond the range of human norms) may not be.
Nicholas Agar has written several books about the ethics of human enhancement. In his latest, Truly Human Enhancement, he tries to stake out an interesting middle ground in the enhancement debate. Unlike the bioconservatives, Agar is not opposed to the very notion of enhancing human capacities. On the contrary, he is broadly in favour of it. But unlike the radical transhumanists, he does not embrace all forms of enhancement.
When the Partially Examined Life discussion of human enhancement (Episode 91) turned to the topic of digital technology, the philosophical oxygen was sucked out of the room. Sure, folks conceded that philosopher of mind Andy Clark (not mentioned by name, but implicitly referenced) has interesting things to say about how technology upgrades our cognitive abilities and extends the boundaries of where our minds are located. But everything else was more or less dismissed as concerning not terribly deep uses of “appliances”.
It seems that for almost as long as we have been able to speak, human beings have been arguing over what, if anything, makes us different from other living creatures. Mark Pagel’s recent book Wired for Culture: The Origins of the Human Social Mind is just the latest incarnation of this millennia-old debate, and as it has always been, the answers he comes up with have implications for our relationship with our fellow animals, and, above all, our relationship with one another, even if Pagel doesn’t draw many such implications.
Most of the ethical discussion of the use of stimulant drugs without a prescription in education has been negative, associating their use with performance enhancement in sports and with drug abuse. But the use of stimulants as study drugs actually has few side effects, and is almost entirely applied to the student’s primary obligation, academic performance. In this essay I consider some objections to off-label stimulant use, and to stimulant therapy for ADD, and argue that there are ethical arguments for the use of stimulants, and for future cognitively and morally enhancing therapies, in education, the work place, and daily life.
Humans are classified by biologists as Great Apes, along with orangutans, gorillas, chimpanzees, and bonobos. And geneticists inform us that we share 98 percent of our DNA with chimps. Yet all the Great Apes are in jeopardy. They are being wantonly killed, sometimes unnecessarily used for research, captured for zoos, and illegally sold as pets.
Today we enjoy basic conversations with our smartphone, desktop PC, games console, TV and, soon, our car; but voice recognition, many believe, should not be viewed as an endgame technology. Although directing electronics with voice and gestures may be considered state-of-the-art today, we will soon be controlling entertainment and communications equipment not by talking or waving, but just by thinking!
The FDA is considering approving an experiment to repair a genetic disease in humans by creating embryos with DNA from three parents. Genes would be transferred from a healthy human egg to one that has a disease and the “repaired” egg then fertilized in the hope that a healthy baby will result. The goal of the experiment in genetic engineering is not a perfect baby but a healthy baby.
The science-fiction writer Isaac Asimov famously proposed three laws for all robots to follow: (1) a robot may not attack a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given to it by a human being except where such orders would conflict with the first rule; (3) a robot must protect its own existence as long as such protection does not conflict with the first two rules. These “laws,” though they sound just and logical, are utterly impossible to implement if the autonomous robots are to be intelligent and able to reprogram themselves.
In this essay, loosely interpreted into English by J. Hughes (who last studied French in 9th grade), IEET Affiliate Scholar Marc Roux explores what the core values and goals should be for transhumanists, and in particular for technoprogressives.