The Institute for Ethics and Emerging Technologies (IEET) is committed to the idea that some non-human animals meet the criteria of legal personhood and thus are deserving of specific rights and protections.
Owing to advances in several fields, including the neurosciences, it is becoming increasingly obvious that the human species can no longer ignore the rights of non-human persons. A number of non-human animals, including the great apes, cetaceans (i.e. dolphins and whales), elephants, and parrots, exhibit characteristics and tendencies consistent with personhood, such as self-awareness, intentionality, creativity, and symbolic communication, among many others. It is a moral and legal imperative that we now extend the protection of 'human rights' from our species to all beings with those characteristics.
The IEET, as a promoter of non-anthropocentric personhood ethics, defends the rights of non-human persons to live in liberty, free from undue confinement, slavery, torture, experimentation, and the threat of unnatural death. Further, the IEET defends the right of non-human persons to live freely in their natural habitats, and when that's not possible, to be given the best quality of life and welfare possible in captivity (such as sanctuaries).
Specifically, through the Rights of Non-Human Persons program, the IEET will strive to:
Investigate and refine definitions of personhood and those criteria sufficient for the recognition of non-human persons.
Facilitate and support further research in the neurosciences for the improved understanding and identification of those cognitive processes, functions and behaviors that give rise to personhood.
Educate and persuade the public, spreading the word and raising awareness of the idea that some animals are persons.
Produce evidence and fact-based argumentation in favor of non-human animal personhood to support the cause and other like-minded groups and individuals.
Why do people torture others? Why do they march others into gas chambers? Because some are psychopaths or sadists or power hungry. Depravity is in their DNA. Some are not inherently depraved but believe the situation demands torture. If others are evil and we are good, then we should kill and torture them with impunity. Such ideas result from the demonization of others, from a simplistic worldview in which good battles evil. If others torture, they are war criminals; if we torture, our motives are pure. But the world is more nuanced than this. There is good and evil within us all.
The question that motivates this essay is “Can we build a benevolent AI, and how do we get around the problem that humans, bless their cotton socks, can’t define ‘benevolence’?” A lot of people want to emphasize just how many different definitions of “benevolence” there are in the world — the point, of course, being that humans are very far from agreeing on a universal definition of benevolence, so how can we expect to program something we cannot define into an AI?
My son recently shared an interesting idea. Suppose we cryogenically preserve ourselves and send our bodies and brains into space, or simply leave them on earth to be reanimated. Even if advanced beings find us in the future and want to awaken us, there is a good chance that our minds will be too primitive to be rebooted. Our futuristic descendants may not have technology compatible with our primitive mind files. It would be as if we come across an old floppy disk or early telephone but no longer had the technology to run them.
If predictions by future thinkers such as Aubrey de Grey, Robert Freitas, and Ray Kurzweil prove true – that future science will one day eliminate the disease of aging – then it makes sense to consider the repercussions a non-aging society might have for our world.
Police body cameras are all the rage lately. Al Sharpton wants them used to monitor the activities of cops. Ann Coulter wants them used to “shut down” Al Sharpton. The White House wants them because, well, they’re a way to look both “tough on police violence” and “tough on crime” by spending $263 million on new law enforcement technology.
Why do we punish others? There are many philosophical answers to that question. Some claim that we punish in order to incapacitate a potential wrongdoer; some claim that we do it in order to rehabilitate an offender; some claim that we do it in order to deter others; and some claim that we do it because wrongdoers simply deserve to be punished. Proponents of the last of these views are called retributivists. They believe that punishment is an intrinsic good, and that it ought to be imposed in order to ensure that justice is done. Proponents of the other views are consequentialists. They think that punishment is an instrumental good, and that its worth has to be assessed in terms of the ends it helps us to achieve.
There are several reasons why creating a superintelligent mind could bring about an existential catastrophe. For example, the AI could be malicious, or unfriendly, a scenario that I call the amity-enmity problem. It looms large in Nick Bostrom’s recent book Superintelligence, in which Bostrom suggests that we should recognize "doom" as the "default outcome" of creating a superintelligence. An AI could also be apathetic about our well-being and continued survival. Perhaps it wants to convert the entire surface of the earth into solar panels (an example that Bostrom mentions), and as a result it annihilates the biosphere. Let’s call this the indifference problem.
What responsibility do we have for the things we make? At its root, this is a fairly straightforward science story. Neuroscience researchers at the University of Rochester and the University of Copenhagen successfully transplanted human glial progenitor cells (hGPCs) into a newborn mouse (here's the technical article in The Journal of Neuroscience, and the lay-friendly version in New Scientist). While glial cells are generally considered support cells in the brain, positioning, feeding, insulating, and protecting neurons, they also help neurons make synaptic connections.
A few weeks back the technologist Jaron Lanier gave a provocative talk over at The Edge in which he declared the ideas swirling around the current manifestation of AI to be a “myth”, and a dangerous myth at that. Yet Lanier was only one of a set of prominent thinkers and technologists who have appeared over the last few months to challenge what they saw as a flawed narrative surrounding recent advances in artificial intelligence.
Will robots pose exceptional challenges for the law? That’s the question taken up in Ryan Calo’s recent article “Robotics and the Lessons of Cyberlaw”. As noted in the previous entry, Calo thinks that robots have three distinguishing features: (i) embodiment (i.e. they are mechanical agents operating in the real world); (ii) emergence (i.e. they don’t simply perform routine operations, but are programmed to acquire and develop new behaviours); and (iii) social meaning (i.e. we anthropomorphise and attach social meaning to them). So when Calo asks whether robots pose exceptional challenges for the legal system, he asks in light of those three distinguishing features.
Hayles has written a complex and erudite book on the hidden premises and visible consequences of the information age. Ultimately, her thesis is summarized by a sentence in the prologue: “thought is a much broader cognitive function depending for its specificities on the embodied form enacting it”. Rewritten in plain English, it means that you cannot separate your “I” from the body that you inhabit. Her nightmare is “a culture inhabited by posthumans who regard their bodies as fashion accessories rather than the ground of being”. Her dream is a society in which we “understand ourselves as embodied creatures living within and through embodied worlds and embodied words.”
The US neurophysiologist Paul Nunez previously wrote “Electric Fields of the Brain” (1981) and “Neocortical Dynamics and Human EEG Rhythms” (1995); indeed, his credentials in the field of brain studies hark back to a paper originally written in 1972 and ambitiously titled “The Brain Wave Equation” (an equation he eventually resurrects in this book, 40 years later). In this book Nunez summarizes his novel ideas on the way that “brains cause minds” (to use Searle’s expression).
Whatever a transhuman is, xe (a pronoun to encompass all conceivable states of personhood) will have to live in a world that enables xer to be transhuman. I’ll explore the impact of three likely-seeming aspects of that world: ubiquitous interconnected smart machines, continuous classification, and virtualism.
With futurist thinkers supporting the notion of human upgrading through technological enhancement, what parameters are considered in respect to moral enhancement? What cross-cultural barriers and variations in moral reasoning are we targeting for such upgrades? Moreover, is moral enhancement simply a term we fear delving into, despite the association it arguably has with almost everything our culture produces?
The “Singularity” seems to have become a new lucrative field for the struggling publishing industry (and, I am sure, soon, for the equally struggling Hollywood movie studios). To write a bestseller, you have to begin by warning that machines more intelligent than humans are coming soon. That is enough to get everybody’s attention.
In a powerful article at the Atlantic, “Why I Hope to Die at 75,” Dr. Ezekiel Emanuel lined up facts and figures showing that much of the recent gain in human lifespan is about stretching out the process of decline and death rather than living well for longer. Most of us would love to live to 100 and beyond with our minds sharp and our senses clear, able to take pleasure in the world around us while contributing at least modestly to the happiness and wellbeing of others. But clear-eyed analysis shows that is not how most elderly Americans experience their final years.
I am a transhumanist, and I believe that politics is important. Let me unpack that a little: I believe that we can and should voluntarily improve the human condition using technology. That makes me a transhumanist, but aside from that single axiom I have in common with all transhumanists, we’re an increasingly diverse bunch.
Jeremy Bentham’s panopticon is the classic symbol of authoritarianism. Bentham, a revolutionary philosopher and social theorist, adapted the idea from his brother Samuel. The panopticon was a design for a prison: a single watchtower surrounded by a circumference of cells. From the watchtower a guard could surveil every prisoner while remaining concealed from their view. The guard might or might not be on duty at any given moment, and the prisoners could never know which.