I’m trying to wrap my head around the extended mind hypothesis (EMH). I’m doing so because I’m interested in its implications for the debate about enhancement and technology. If the mind extends into the environment outside the brain/bone barrier, then we are arguably enhancing our minds all the time by developing new technologies, be they books and abacuses or smartphones and wearable tech. Consequently, we should have no serious principled objection to technologies that try to enhance directly inside the brain/bone barrier.
Opposition to IUDs, like opposition to vaccines, is putting American families at risk—and a Colorado controversy shows that misguided faith and scientific ignorance are to blame. When a pilot program in Colorado offered teens state-of-the-art long-acting contraceptives—IUDs and implants—teen births plummeted by 40%, along with a drop in abortions. The program saved the state $42.5 million in a single year, over five times what it cost. But rather than expanding the program, some Colorado Republicans are trying to kill it—even if this stacks the odds against Colorado families.
Will superintelligences be troubled by philosophical conundrums? Consider classic philosophical questions such as: 1) What is real? 2) What is valuable? 3) Are we free? We currently don’t know the answers to such questions. We might not think much about them, or we may accept common answers—this world is real; happiness is valuable; we are free.
How’s this for a 21st century Valentine’s Day tale: a group of religious fundamentalists want to redefine human sexual and gender relationships based on a more than 2,000 year old religious text. Yet instead of doing this by aiming to seize hold of the cultural and political institutions of society, a task they find impossible, they create an algorithm: once people enter it, their experience is shaped by religiously derived assumptions users cannot see. People who enter this world have no control over their actions within it, and surrender their autonomy for the promise of finding their “soul mate”.
Taken as a package, the Bible sends mixed messages about slavery, which is why Christian leaders used the Good Book on both sides—including in the lead-up to the American Civil War. Should a person be able to own another person? Today Christians uniformly say no, and many would like to believe that has always been the case. But history tells a different story, one in which Christians have struggled to give a clear answer when confronted with questions about human trafficking and human rights.
I am a Cyborg. No, I don’t have any technological enhancements just yet, though I plan to acquire some very soon with help from my friends within the DIY grinder community. Even then, my “choosing” to identify myself as a cyborg is more than a mere desire for cyborg enhancements; it is an identity that I feel deeply within myself – a longing to express myself in ways that my current biological body cannot.
In the fall of 2014, a young dying woman, Brittany Maynard, captured the hearts of millions around the world. Now her husband and mother have teamed up with a national advocacy group, Compassion & Choices, to honor her final wish—that aid in dying be available to terminally ill Americans in every state.
I’ve met Erik Parens twice; he seems like a thoroughly nice fellow. I say this because I’ve just been reading his latest book Shaping Our Selves: On Technology, Flourishing, and a Habit of Thinking, and it is noticeable how much of his personality shines through in the book. Indeed, the book opens with a revealing memoir of Parens’s personal life and experiences in bioethics, specifically in the enhancement debate. What’s more, Parens’s frustrations with the limiting and binary nature of much philosophical debate are apparent throughout his book.
The term “libertarianism” is used in two senses in philosophical circles. The first, and perhaps more famous sense, is as a name for a family of political theories that prioritise individual freedom; the second, and perhaps less famous (except among the cognoscenti), is as a specific view on the nature of free will. It is the latter sense that concerns me in this post.
So I have another paper coming out. It’s about plea-bargaining, brain-based lie detection and the innocence problem. I wasn’t going to write about it on the blog, but then somebody sent me a link to a recent article by Jed Rakoff entitled “Why Innocent People Plead Guilty”. Rakoff’s article is an indictment of the plea-bargaining system currently in operation in the US. Since my article touches upon the same issues, I thought it might be worth offering a summary of its core argument.
Overview of Advances Articulated in Nanomedical Device and Systems Design: Challenges, Possibilities, Visions (2013). This article provides an overview of the research findings related to cognitive enhancement that are presented in Nanomedical Device and Systems Design: Challenges, Possibilities, Visions (2013), an encyclopedic textbook chronicling recent advances across myriad areas of nanotechnology and nanomedicine. The final chapter discusses progress in nanomedical cognitive enhancement, where we find ourselves in a modern era in which many technologies appear to be on the cusp – helping to resolve pathologies while also holding much future potential for the augmentation of human capabilities.
Why do people torture others? Why do they march others into gas chambers? Because some are psychopaths or sadists or power hungry. Depravity is in their DNA. Some are not inherently depraved but believe the situation demands torture. If others are evil and we are good, then we should kill and torture them with impunity. Such ideas result from the demonization of others, from a simplistic worldview in which good battles evil. If others torture, they are war criminals; if we torture, our motives are pure. But the world is more nuanced than this. There is good and evil within us all.
The question that motivates this essay is “Can we build a benevolent AI, and how do we get around the problem that humans, bless their cotton socks, can’t define ‘benevolence’?” A lot of people want to emphasize just how many different definitions of “benevolence” there are in the world — the point, of course, being that humans are very far from agreeing on a universal definition of benevolence, so how can we expect to program something we cannot define into an AI?
My son recently shared an interesting idea. Suppose we cryogenically preserve ourselves and send our bodies and brains into space, or simply leave them on earth to be reanimated. Even if advanced beings find us in the future and want to awaken us, there is a good chance that our minds will be too primitive to be rebooted. Our futuristic descendants may not have technology compatible with our primitive mind files. It would be as if we came across an old floppy disk or an early telephone but no longer had the technology to run them.
If predictions by future thinkers such as Aubrey de Grey, Robert Freitas, and Ray Kurzweil ring true – that future science will one day eliminate the disease of aging – then it makes sense to consider the repercussions a non-aging society might have for our world.
Police body cameras are all the rage lately. Al Sharpton wants them used to monitor the activities of cops. Ann Coulter wants them used to “shut down” Al Sharpton. The White House wants them because, well, they’re a way to look both “tough on police violence” and “tough on crime” by spending $263 million on new law enforcement technology.
Why do we punish others? There are many philosophical answers to that question. Some claim that we punish in order to incapacitate a potential wrongdoer; some claim that we do it in order to rehabilitate an offender; some claim that we do it in order to deter others; and some claim that we do it because wrongdoers simply deserve to be punished. Proponents of the last of these views are called retributivists. They believe that punishment is an intrinsic good, and that it ought to be imposed in order to ensure that justice is done. Proponents of the other views are consequentialists. They think that punishment is an instrumental good, and that its worth has to be assessed in terms of the ends it helps us to achieve.
There are several reasons why creating a superintelligent mind could bring about an existential catastrophe. For example, the AI could be malicious, or unfriendly, a scenario that I call the amity-enmity problem. It looms large in Nick Bostrom’s recent book Superintelligence, in which Bostrom suggests that we should recognize "doom" as the "default outcome" of creating a superintelligence. An AI could also be apathetic about our well-being and continued survival. Perhaps it wants to convert the entire surface of the earth into solar panels (an example that Bostrom mentions), and as a result it annihilates the biosphere. Let’s call this the indifference problem.
What responsibility do we have for the things we make? At its root, this is a fairly straightforward science story. Neuroscience researchers at the University of Rochester and the University of Copenhagen successfully transplanted human glial progenitor cells (hGPCs) into a newborn mouse (here's the technical article in The Journal of Neuroscience, and the lay-friendly version in New Scientist). While glial cells are generally considered support cells in the brain (positioning, feeding, insulating, and protecting neurons), they also help neurons make synaptic connections.
A few weeks back the technologist Jaron Lanier gave a provocative talk over at The Edge in which he declared ideas swirling around the current manifestation of AI to be a “myth”, and a dangerous myth at that. Yet Lanier was only one of a set of prominent thinkers and technologists who have appeared over the last few months to challenge what they saw as a flawed narrative surrounding recent advances in artificial intelligence.
Will robots pose exceptional challenges for the law? That’s the question taken up in Ryan Calo’s recent article “Robotics and the Lessons of Cyberlaw”. As noted in the previous entry, Calo thinks that robots have three distinguishing features: (i) embodiment (i.e. they are mechanical agents operating in the real world); (ii) emergence (i.e. they don’t simply perform routine operations, but are programmed to acquire and develop new behaviours); and (iii) social meaning (i.e. we anthropomorphise and attach social meaning to them). So when Calo asks whether robots pose exceptional challenges for the legal system, he asks in light of those three distinguishing features.
Hayles has written a complex and erudite book on the hidden premises and visible consequences of the information age. Ultimately, her thesis is summarized by a sentence in the prologue: “thought is a much broader cognitive function depending for its specificities on the embodied form enacting it”. Rewritten in plain English, it means that you cannot separate your “I” from the body that you inhabit. Her nightmare is “a culture inhabited by posthumans who regard their bodies as fashion accessories rather than the ground of being”. Her dream is a society in which we “understand ourselves as embodied creatures living within and through embodied worlds and embodied words.”
The US neurophysiologist Paul Nunez previously wrote “Electric Fields of the Brain” (1981) and “Neocortical Dynamics and Human EEG Rhythms” (1995), and in fact his credentials in the field of brain studies harken back to a paper originally written in 1972 and ambitiously titled “The Brain Wave Equation” (an equation that he eventually resurrects in this book, 40 years later). In this book Nunez summarizes his novel ideas on the way that “brains cause minds” (to use Searle’s expression).
Whatever a transhuman is, xe (a pronoun to encompass all conceivable states of personhood) will have to live in a world that enables xer to be transhuman. I’ll explore the impact of three likely-seeming aspects of that world: ubiquitous interconnected smart machines, continuous classification, and virtualism.
With futurist thinkers supporting the notion of human upgrading through technological enhancement, what parameters are considered in respect to moral enhancement? What cross-cultural barriers and variations in moral reasoning are we targeting for such upgrades? Moreover, is moral enhancement simply a term we fear delving into, despite its arguable connection to almost everything our culture produces?
The “Singularity” seems to have become a new lucrative field for the struggling publishing industry (and, I am sure, soon for the equally struggling Hollywood movie studios). To write a bestseller, you have to begin by warning that machines more intelligent than humans are coming soon. That is enough to get everybody’s attention.