I am a Cyborg. No, I don’t have any technological enhancements just yet, though I plan to acquire some very soon with help from my friends in the DIY grinder community. Even then, my “choosing” to identify as a cyborg is more than a mere desire for cyborg enhancements; it is an identity that I feel deeply within myself – a longing to express myself in ways that my current biological body cannot.
In the fall of 2014, a young dying woman, Brittany Maynard, captured the hearts of millions around the world. Now her husband and mother have teamed up with a national advocacy group, Compassion & Choices, to honor her final wish—that aid in dying be available to terminally ill Americans in every state.
I’ve met Erik Parens twice; he seems like a thoroughly nice fellow. I say this because I’ve just been reading his latest book Shaping Our Selves: On Technology, Flourishing, and a Habit of Thinking, and it is noticeable how much of his personality shines through in the book. Indeed, the book opens with a revealing memoir of Parens’s personal life and experiences in bioethics, specifically in the enhancement debate. What’s more, Parens’s frustrations with the limiting and binary nature of much philosophical debate are apparent throughout his book.
The term “libertarianism” is used in two senses in philosophical circles. The first, and perhaps more famous sense, is as a name for a family of political theories that prioritise individual freedom; the second, and perhaps less famous (except among the cognoscenti), is as a specific view on the nature of free will. It is the latter sense that concerns me in this post.
So I have another paper coming out. It’s about plea-bargaining, brain-based lie detection and the innocence problem. I wasn’t going to write about it on the blog, but then somebody sent me a link to a recent article by Jed Rakoff entitled “Why Innocent People Plead Guilty”. Rakoff’s article is an indictment of the plea-bargaining system currently in operation in the US. Since my article touches upon the same issues, I thought it might be worth offering a summary of its core argument.
Overview of Advances Articulated in Nanomedical Device and Systems Design: Challenges, Possibilities, Visions (2013)  This article provides an overview of the research findings related to cognitive enhancement that are presented in Nanomedical Device and Systems Design: Challenges, Possibilities, Visions (2013), an encyclopedic textbook chronicling a plethora of recent advances in myriad areas of nanotechnology and nanomedicine. The final chapter discusses progress in nanomedical cognitive enhancement, where we find ourselves in a modern era in which many technologies appear to be on the cusp – already helping to resolve pathologies while also holding great future potential for the augmentation of human capabilities.
Why do people torture others? Why do they march others into gas chambers? Because some are psychopaths or sadists or power hungry. Depravity is in their DNA. Some are not inherently depraved but believe the situation demands torture. If others are evil and we are good, then we should kill and torture them with impunity. Such ideas result from the demonization of others, from a simplistic worldview in which good battles evil. If others torture, they are war criminals; if we torture, our motives are pure. But the world is more nuanced than this. There is good and evil within us all.
The question that motivates this essay is “Can we build a benevolent AI, and how do we get around the problem that humans, bless their cotton socks, can’t define ‘benevolence’?” A lot of people want to emphasize just how many different definitions of “benevolence” there are in the world — the point, of course, being that humans are very far from agreeing on a universal definition of benevolence, so how can we expect to program something we cannot define into an AI?
My son recently shared an interesting idea. Suppose we cryogenically preserve ourselves and send our bodies and brains into space, or simply leave them on earth to be reanimated. Even if advanced beings find us in the future and want to awaken us, there is a good chance that our minds will be too primitive to be rebooted. Our futuristic descendants may not have technology compatible with our primitive mind files. It would be as if we came across an old floppy disk or early telephone but no longer had the technology to run them.
If predictions by future thinkers such as Aubrey de Grey, Robert Freitas, and Ray Kurzweil ring true – that future science will one day eliminate the disease of aging – then it makes sense to consider the repercussions a non-aging society might have on our world.
Police body cameras are all the rage lately. Al Sharpton wants them used to monitor the activities of cops. Ann Coulter wants them used to “shut down” Al Sharpton. The White House wants them because, well, they’re a way to look both “tough on police violence” and “tough on crime” by spending $263 million on new law enforcement technology.
Why do we punish others? There are many philosophical answers to that question. Some claim that we punish in order to incapacitate a potential wrongdoer; some claim that we do it in order to rehabilitate an offender; some claim that we do it in order to deter others; and some claim that we do it because wrongdoers simply deserve to be punished. Proponents of the last of these views are called retributivists. They believe that punishment is an intrinsic good, and that it ought to be imposed in order to ensure that justice is done. Proponents of the other views are consequentialists. They think that punishment is an instrumental good, and that its worth has to be assessed in terms of the ends it helps us to achieve.
There are several reasons why creating a superintelligent mind could bring about an existential catastrophe. For example, the AI could be malicious or unfriendly, a scenario that I call the amity-enmity problem. It looms large in Nick Bostrom’s recent book Superintelligence, in which Bostrom suggests that we should recognize "doom" as the "default outcome" of creating a superintelligence. An AI could also be indifferent to our well-being and continued survival. Perhaps it wants to convert the entire surface of earth into solar panels (an example that Bostrom mentions), and as a result it annihilates the biosphere. Let’s call this the indifference problem.
What responsibility do we have for the things we make? At its root, this is a fairly straightforward science story. Neuroscience researchers at the University of Rochester and the University of Copenhagen successfully transplanted human glial progenitor cells (hGPCs) into a newborn mouse (here's the technical article in The Journal of Neuroscience, and the lay-friendly version in New Scientist). While glial cells are generally considered support cells in the brain, positioning, feeding, insulating, and protecting neurons, they also help neurons make synaptic connections.
A few weeks back the technologist Jaron Lanier gave a provocative talk over at The Edge in which he declared the ideas swirling around the current manifestation of AI to be a “myth”, and a dangerous myth at that. Yet Lanier was only one of a set of prominent thinkers and technologists who have appeared over the last few months to challenge what they saw as a flawed narrative surrounding recent advances in artificial intelligence.
Will robots pose exceptional challenges for the law? That’s the question taken up in Ryan Calo’s recent article “Robotics and the Lessons of Cyberlaw”. As noted in the previous entry, Calo thinks that robots have three distinguishing features: (i) embodiment (i.e. they are mechanical agents operating in the real world); (ii) emergence (i.e. they don’t simply perform routine operations, but are programmed to acquire and develop new behaviours); and (iii) social meaning (i.e. we anthropomorphise and attach social meaning to them). So when Calo asks whether robots pose exceptional challenges for the legal system, he asks it in light of those three distinguishing features.
Hayles has written a complex and erudite book on the hidden premises and visible consequences of the information age. Ultimately, her thesis is summarized by a sentence in the prologue: “thought is a much broader cognitive function depending for its specificities on the embodied form enacting it”. Rewritten in plain English, it means that you cannot separate your “I” from the body that you inhabit. Her nightmare is “a culture inhabited by posthumans who regard their bodies as fashion accessories rather than the ground of being”. Her dream is a society in which we “understand ourselves as embodied creatures living within and through embodied worlds and embodied words.”
The US neurophysiologist Paul Nunez previously wrote “Electric Fields of the Brain” (1981) and “Neocortical Dynamics and Human EEG Rhythms” (1995), and in fact his credentials in the field of brain studies hark back to a paper originally written in 1972 and ambitiously titled “The Brain Wave Equation” (an equation that he eventually resurrects in this book, 40 years later). In this book Nunez summarizes his novel ideas on the way that “brains cause minds” (to use Searle’s expression).
Whatever a transhuman is, xe (a pronoun to encompass all conceivable states of personhood) will have to live in a world that enables xer to be transhuman. I’ll explore the impact of three likely-seeming aspects of that world: ubiquitous interconnected smart machines, continuous classification, and virtualism.
With futurist thinkers supporting the notion of human upgrading through technological enhancement, what parameters are considered with respect to moral enhancement? What cross-cultural barriers and variations in moral reasoning are we targeting for such upgrades? Moreover, is moral enhancement simply a term we fear delving into, despite the association it arguably has to almost everything our culture produces?
The “Singularity” seems to have become a new lucrative field for the struggling publishing industry (and, I am sure, soon, for the equally struggling Hollywood movie studios). To write a bestseller, you have to begin by warning that machines more intelligent than humans are coming soon. That is enough to get everybody’s attention.
In a powerful article at the Atlantic, “Why I Hope to Die at 75,” Dr. Ezekiel Emanuel lined up facts and figures showing that much of the recent gain in human lifespan is about stretching out the process of decline and death rather than living well for longer. Most of us would love to live to 100 and beyond with our minds sharp and our senses clear, able to take pleasure in the world around us while contributing at least modestly to the happiness and wellbeing of others. But clear-eyed analysis shows that is not how most elderly Americans experience their final years.
I am a transhumanist, and I believe that politics is important. Let me unpack that a little: I believe that we can and should voluntarily improve the human condition using technology. That makes me a transhumanist, but aside from that single axiom I have in common with all transhumanists, we’re an increasingly diverse bunch.
Jeremy Bentham’s panopticon is the classic symbol of authoritarianism. Bentham, a revolutionary philosopher and social theorist, adapted the idea from his brother Samuel. The panopticon was a design for a prison. It would be a single watchtower, surrounded by a circumference of cells. From the watchtower a guard could surveil every prisoner, whilst at the same time being concealed from their view. The guard could be on duty or not.
This time, let’s veer into an area wherein I actually know a thing or two! The matter of whether humanity might someday… or even should… meddle in other creatures on this planet and bestow upon them the debatable “gift” of full sapience—the ability to argue, ponder, store information, appraise, discuss, create, express and manipulate tools, so that they might join us in the problematic task of being worthy planetary managers.
This is the second and final part of my series about a recent exchange between David Chalmers and Massimo Pigliucci. The exchange took place in the pages of Intelligence Unbound, an edited collection of essays about mind-uploading and artificial intelligence. It concerned the philosophical plausibility of mind-uploading.
As we head deeper into the 21st century, we’re starting to catch a glimpse of the fantastic technological possibilities that await. But we’re also starting to get a grim sense of the potential horrors. Here are 10 frightening technologies that should never, ever, come into existence.