Why do people torture others? Why do they march others into gas chambers? Because some are psychopaths or sadists or power hungry. Depravity is in their DNA. Some are not inherently depraved but believe the situation demands torture. If others are evil and we are good, then we should kill and torture them with impunity. Such ideas result from the demonization of others, from a simplistic worldview in which good battles evil. If others torture, they are war criminals; if we torture, our motives are pure. But the world is more nuanced than this. There is good and evil within us all.
The question that motivates this essay is “Can we build a benevolent AI, and how do we get around the problem that humans, bless their cotton socks, can’t define ‘benevolence’?” A lot of people want to emphasize just how many different definitions of “benevolence” there are in the world — the point, of course, being that humans are very far from agreeing on a universal definition of benevolence, so how can we expect to program something we cannot define into an AI?
My son recently shared an interesting idea. Suppose we cryogenically preserve ourselves and send our bodies and brains into space, or simply leave them on earth to be reanimated. Even if advanced beings find us in the future and want to awaken us, there is a good chance that our minds will be too primitive to be rebooted. Our futuristic descendants may not have technology compatible with our primitive mind files. It would be as if we came across an old floppy disk or early telephone but no longer had the technology to run them.
If predictions by future thinkers such as Aubrey de Grey, Robert Freitas, and Ray Kurzweil ring true – that future science will one day eliminate the disease of aging – then it makes sense to consider the repercussions a non-aging society might have on our world.
Police body cameras are all the rage lately. Al Sharpton wants them used to monitor the activities of cops. Ann Coulter wants them used to “shut down” Al Sharpton. The White House wants them because, well, they’re a way to look both “tough on police violence” and “tough on crime” by spending $263 million on new law enforcement technology.
Why do we punish others? There are many philosophical answers to that question. Some claim that we punish in order to incapacitate a potential wrongdoer; some claim that we do it in order to rehabilitate an offender; some claim that we do it in order to deter others; and some claim that we do it because wrongdoers simply deserve to be punished. Proponents of the last of these views are called retributivists. They believe that punishment is an intrinsic good, and that it ought to be imposed in order to ensure that justice is done. Proponents of the other views are consequentialists. They think that punishment is an instrumental good, and that its worth has to be assessed in terms of the ends it helps us to achieve.
There are several reasons why creating a superintelligent mind could bring about an existential catastrophe. For example, the AI could be malicious, or unfriendly, a scenario that I call the amity-enmity problem. It looms large in Nick Bostrom’s recent book Superintelligence, in which Bostrom suggests that we should recognize "doom" as the "default outcome" of creating a superintelligence. An AI could also be apathetic about our well-being and continued survival. Perhaps it wants to convert the entire surface of the earth into solar panels (an example that Bostrom mentions), and as a result it annihilates the biosphere. Let’s call this the indifference problem.
What responsibility do we have for the things we make? At its root, this is a fairly straightforward science story. Neuroscience researchers at the University of Rochester and the University of Copenhagen successfully transplanted human glial progenitor cells (hGPCs) into a newborn mouse (here's the technical article in The Journal of Neuroscience, and the lay-friendly version in New Scientist). While glial cells are generally considered support cells in the brain, positioning, feeding, insulating, and protecting neurons, they also help neurons make synaptic connections.
A few weeks back the technologist Jaron Lanier gave a provocative talk over at The Edge in which he declared the ideas swirling around the current manifestation of AI to be a “myth”, and a dangerous myth at that. Yet Lanier was only one of a set of prominent thinkers and technologists who have appeared over the last few months to challenge what they saw as a flawed narrative surrounding recent advances in artificial intelligence.
Will robots pose exceptional challenges for the law? That’s the question taken up in Ryan Calo’s recent article “Robotics and the Lessons of Cyberlaw”. As noted in the previous entry, Calo thinks that robots have three distinguishing features: (i) embodiment (i.e. they are mechanical agents operating in the real world); (ii) emergence (i.e. they don’t simply perform routine operations, but are programmed to acquire and develop new behaviours); and (iii) social meaning (i.e. we anthropomorphise and attach social meaning to them). So when Calo asks whether robots pose exceptional challenges for the legal system, he asks in light of those three distinguishing features.
Hayles has written a complex and erudite book on the hidden premises and visible consequences of the information age. Ultimately, her thesis is summarized by a sentence in the prologue: “thought is a much broader cognitive function depending for its specificities on the embodied form enacting it”. Rewritten in plain English, it means that you cannot separate your “I” from the body that you inhabit. Her nightmare is “a culture inhabited by posthumans who regard their bodies as fashion accessories rather than the ground of being”. Her dream is a society in which we “understand ourselves as embodied creatures living within and through embodied worlds and embodied words.”
The US neurophysiologist Paul Nunez previously wrote “Electric Fields of the Brain” (1981) and “Neocortical Dynamics and Human EEG Rhythms” (1995), and in fact his credentials in the field of brain studies hark back to a paper originally written in 1972 and ambitiously titled “The Brain Wave Equation” (an equation that he eventually resurrects in this book, 40 years later). In this book Nunez summarizes his novel ideas on the way that “brains cause minds” (to use Searle’s expression).
Whatever a transhuman is, xe (a pronoun to encompass all conceivable states of personhood) will have to live in a world that enables xer to be transhuman. I’ll explore the impact of three likely-seeming aspects of that world: ubiquitous interconnected smart machines, continuous classification, and virtualism.
With futurist thinkers supporting the notion of human upgrading through technological enhancement, what parameters are considered with respect to moral enhancement? What cross-cultural barriers and variations in moral reasoning are we targeting for such upgrades? Moreover, is moral enhancement simply a term we fear delving into, despite the association it arguably has with almost everything our culture produces?
The “Singularity” seems to have become a new lucrative field for the struggling publishing industry (and, I am sure, soon, for the equally struggling Hollywood movie studios). To write a bestseller, you have to begin by warning that machines more intelligent than humans are coming soon. That is enough to get everybody’s attention.
In a powerful article at the Atlantic, “Why I Hope to Die at 75,” Dr. Ezekiel Emanuel lined up facts and figures showing that much of the recent gain in human lifespan is about stretching out the process of decline and death rather than living well for longer. Most of us would love to live to 100 and beyond with our minds sharp and our senses clear, able to take pleasure in the world around us while contributing at least modestly to the happiness and wellbeing of others. But clear-eyed analysis shows that is not how most elderly Americans experience their final years.
I am a transhumanist, and I believe that politics is important. Let me unpack that a little: I believe that we can and should voluntarily improve the human condition using technology. That makes me a transhumanist, but aside from that single axiom I have in common with all transhumanists, we’re an increasingly diverse bunch.
Jeremy Bentham’s panopticon is the classic symbol of authoritarianism. Bentham, a revolutionary philosopher and social theorist, adapted the idea from his brother Samuel. The panopticon was a design for a prison. It would be a single watchtower, surrounded by a circumference of cells. From the watchtower a guard could surveil every prisoner, whilst at the same time being concealed from their view. The guard might be on duty or not; the prisoners could never tell.
This time, let’s veer into an area wherein I actually know a thing or two! The matter of whether humanity might someday… or even should… meddle with other creatures on this planet and bestow upon them the debatable “gift” of full sapience—the ability to argue, ponder, store information, appraise, discuss, create, express and manipulate tools—so that they might join us in the problematic task of being worthy planetary managers.
This is the second and final part of my series about a recent exchange between David Chalmers and Massimo Pigliucci. The exchange took place in the pages of Intelligence Unbound, an edited collection of essays about mind-uploading and artificial intelligence. It concerned the philosophical plausibility of mind-uploading.
As we head deeper into the 21st century, we’re starting to catch a glimpse of the fantastic technological possibilities that await. But we’re also starting to get a grim sense of the potential horrors. Here are 10 frightening technologies that should never, ever, come into existence.
Ethicists have been asking themselves a question over the last couple of years that seems to come right out of science fiction. Is it possible to make moral machines, or in their lingo, autonomous moral agents (AMAs)? Asking the question might have seemed silly not so long ago, or so speculative as to put one’s tenure at risk, but as the revolution in robotics has rolled forward it has become an issue necessary to grapple with, and now.
“Virtually Human explores what the not-too-distant future will look like when cyberconsciousness—simulation of the human brain via software and computer technology—becomes part of our daily lives,” writes Martine Rothblatt, PhD, MBA, JD.
IEET Fellow David Eagleman has written and will host a six-hour television series on The Brain for PBS. The series will premiere in 2015, and deals with tough questions of ethics and emerging neurotechnologies.
The growing body of work in the new field of “affective robotics” involves both theoretical and practical ways to instill – or at least imitate – human emotion in Artificial Intelligence (AI), and also to induce emotions toward AI in humans. The aim is to guarantee that as AI becomes smarter and more powerful, it will remain tractable and attractive to us. Inducing emotions matters to this effort because it is hoped that the instantiation of emotions will eventually lead to robots that have moral and ethical codes, making them safer; and also that humans and AI will be able to develop mutual emotional attachments, facilitating the use of robots as human companions and helpers. This paper discusses some of the more significant of these recent efforts and addresses some important ethical questions raised by these endeavors.
I’ve recently been looking into the ethics of vegetarianism, partly because I’m not a vegetarian myself and I’m interested in questioning my position, and partly because it is an interesting philosophical issue in its own right. Earlier this summer I looked at Jeff McMahan’s critique of benign carnivorism. Since that piece was critical of the view I myself hold, I thought it might be worthwhile balancing things out by looking at an opposing view.
This is the sixth part in my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. The series is covering those parts of the book that most interest me. This includes the sections setting out the basic argument for thinking that the creation of superintelligent AI could threaten human existence, and the proposed methods for dealing with that threat.