The Institute for Ethics and Emerging Technologies (IEET) is committed to the idea that some non-human animals meet the criteria of legal personhood and thus are deserving of specific rights and protections.
Owing to advances in several fields, including the neurosciences, it is becoming increasingly obvious that the human species can no longer ignore the rights of non-human persons. A number of non-human animals, including the great apes, cetaceans (i.e. dolphins and whales), elephants, and parrots, exhibit characteristics and tendencies consistent with personhood, such as self-awareness, intentionality, creativity, and symbolic communication, among many others. It is a moral and legal imperative that we now extend the protection of 'human rights' from our species to all beings with those characteristics.
The IEET, as a promoter of non-anthropocentric personhood ethics, defends the rights of non-human persons to live in liberty, free from undue confinement, slavery, torture, experimentation, and the threat of unnatural death. Further, the IEET defends the right of non-human persons to live freely in their natural habitats, and, when that is not possible, to be given the best quality of life and welfare possible in captivity (for example, in sanctuaries).
Specifically, through the Rights of Non-Human Persons program, the IEET will strive to:
Investigate and refine definitions of personhood and those criteria sufficient for the recognition of non-human persons.
Facilitate and support further research in the neurosciences for the improved understanding and identification of those cognitive processes, functions and behaviors that give rise to personhood.
Educate and persuade the public, and increase awareness of the idea that some non-human animals are persons.
Produce evidence and fact-based argumentation in favor of non-human animal personhood to support the cause and other like-minded groups and individuals.
With futurist thinkers supporting the notion of human upgrading through technological enhancement, what parameters are considered with respect to moral enhancement? What cross-cultural barriers and variations in moral reasoning are we targeting for such upgrades? Moreover, is moral enhancement simply a term we fear delving into, despite the association it arguably has with almost everything our culture produces?
The “Singularity” seems to have become a new lucrative field for the struggling publishing industry (and, I am sure, soon for the equally struggling Hollywood movie studios). To write a bestseller, you have to begin by warning that machines more intelligent than humans are coming soon. That is enough to get everybody’s attention.
In a powerful article at the Atlantic, “Why I Hope to Die at 75,” Dr. Ezekiel Emanuel lined up facts and figures showing that much of the recent gain in human lifespan is about stretching out the process of decline and death rather than living well for longer. Most of us would love to live to 100 and beyond with our minds sharp and our senses clear, able to take pleasure in the world around us while contributing at least modestly to the happiness and wellbeing of others. But clear-eyed analysis shows that is not how most elderly Americans experience their final years.
I am a transhumanist, and I believe that politics is important. Let me unpack that a little: I believe that we can and should voluntarily improve the human condition using technology. That makes me a transhumanist, but aside from that single axiom I have in common with all transhumanists, we’re an increasingly diverse bunch.
Jeremy Bentham’s panopticon is the classic symbol of authoritarianism. Bentham, a revolutionary philosopher and social theorist, adapted the idea from his brother Samuel. The panopticon was a design for a prison. It would be a single watchtower, surrounded by a circumference of cells. From the watchtower a guard could surveil every prisoner, whilst at the same time being concealed from their view. The guard could be on duty or not.
This time, let’s veer into an area wherein I actually know a thing or two! The matter of whether humanity might someday… or even should… meddle in other creatures on this planet and bestow upon them the debatable “gift” of full sapience—the ability to argue, ponder, store information, appraise, discuss, create, express and manipulate tools, so that they might join us in the problematic task of being worthy planetary managers.
This is the second and final part of my series about a recent exchange between David Chalmers and Massimo Pigliucci. The exchange took place in the pages of Intelligence Unbound, an edited collection of essays about mind-uploading and artificial intelligence. It concerned the philosophical plausibility of mind-uploading.
As we head deeper into the 21st century, we’re starting to catch a glimpse of the fantastic technological possibilities that await. But we’re also starting to get a grim sense of the potential horrors. Here are 10 frightening technologies that should never, ever, come into existence.
Ethicists have been asking themselves a question over the last couple of years that seems to come right out of science fiction. Is it possible to make moral machines, or in their lingo, autonomous moral agents (AMAs)? Asking the question might have seemed silly not so long ago, or so speculative as to put one's tenure at risk, but as the revolution in robotics has rolled forward, it has become an issue necessary to grapple with, and now.
“Virtually Human explores what the not-too-distant future will look like when cyberconsciousness—simulation of the human brain via software and computer technology—becomes part of our daily lives,” writes Martine Rothblatt, Ph.D., MBA, J.D.
IEET Fellow David Eagleman has written and will host a six-hour television series on The Brain for PBS. The series will premiere in 2015 and deals with tough questions of ethics and emerging neurotechnologies.
The growing body of work in the new field of “affective robotics” involves both theoretical and practical ways to instill – or at least imitate – human emotion in Artificial Intelligence (AI), and also to induce emotions toward AI in humans. The aim of this is to guarantee that as AI becomes smarter and more powerful, it will remain tractable and attractive to us. Inducing emotions is important to this effort to create safer and more attractive AI because it is hoped that instantiation of emotions will eventually lead to robots that have moral and ethical codes, making them safer; and also that humans and AI will be able to develop mutual emotional attachments, facilitating the use of robots as human companions and helpers. This paper discusses some of the more significant of these recent efforts and addresses some important ethical questions that arise relative to these endeavors.
I’ve recently been looking into the ethics of vegetarianism, partly because I’m not one myself and I’m interested in questioning my position, and partly because it is an interesting philosophical issue in its own right. Earlier this summer I looked at Jeff McMahan’s critique of benign carnivorism. Since that piece was critical of the view I myself hold, I thought it might be worthwhile balancing things out by looking at an opposing view.
This is the sixth part in my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. The series is covering those parts of the book that most interest me. This includes the sections setting out the basic argument for thinking that the creation of superintelligent AI could threaten human existence, and the proposed methods for dealing with that threat.
Should animals be permitted to hunt and kill other animals? Some futurists believe that humans should intervene, and solve the “problem” of predator vs. prey once and for all. We talked to the man who wants to use radical ecoengineering to put an end to the carnage. A world without predators certainly sounds extreme, and it is. But British philosopher David Pearce can’t imagine a future in which animals continue to be trapped in the never-ending cycle of blind Darwinian processes.
Transhumanists as a rule may prefer to contemplate implants and genetic engineering, but few if any violations of morphological freedom exceed being torn to pieces by shrapnel or dashed against concrete by an overpressure wave. In this piece I argue that the settler-colonial violence in occupied Palestine relates to core aspects of modernity and demands futurist attention both emotionally and intellectually.
More than 80 percent of teen pregnancies are accidents. A girl with other hopes and dreams—or maybe a girl who is floundering, who hasn’t even begun to explore her hopes and dreams—finds herself unexpectedly slated for either an abortion or 4,000 diapers. Given the shame and stigma surrounding abortion in many American subcultures, that can seem like a choice between the proverbial rock and hard place. The exciting news that launched this Sightline series is that teen pregnancy is in decline across the United States and across all major ethnic groups. Fewer and fewer young women are facing hard decisions after the fact.
In this entry, I take a look at Bostrom’s orthogonality thesis. As we shall see, this thesis is central to his claim that superintelligent AIs could pose profound existential risks to human beings. But what does the thesis mean and how plausible is it?
There may be as many as 80,000 American prisoners currently locked up in a SHU, or segregated housing unit. Solitary confinement in a SHU can cause irreversible psychological effects in as little as 15 days. Here’s what social isolation does to your brain, and why it should be considered torture.