"You enter the wellness center and tell the receptionist avatar that you're here for an annual restoration, and though your real age is 110, you would like to be restored to the age of a 20-something. A nurse then injects billions of genome-specific 'bots non-invasively through the skin; you're now set for another year."
I recently published an unusual article. At least, I think it is unusual. It imagines a future in which sophisticated sex robots are used to replicate acts of rape and child sexual abuse, and then asks whether such acts should be criminalised. In the article, I try to provide a framework for evaluating the issue, but I do so in what I think is a provocative fashion. I present an argument for thinking that such acts should be criminalised, even if they have no extrinsically harmful effects on others. I know the argument is going to be unpalatable to some, and I myself balk at its seemingly anti-liberal/anti-libertarian dimensions, but I thought it was sufficiently interesting to be worth spelling out in some detail. Hence why I wrote the article.
We asked “Should DIY biohackers be subject to the same safety regulations and oversight as corporate biological research labs?” Of the 573 of you who responded, six out of ten (61%) believed that biohackers should be subject to some kind of regulation.
Before you ask, yes, this is a post about risk. And no, I’m not talking about the dangers of immortalizing the real-life biological brain of the star of Terminator Genisys. But to begin somewhere near the beginning.
The question that motivates this essay is “Can we build a benevolent AI, and how do we get around the problem that humans, bless their cotton socks, can’t define ‘benevolence’?” A lot of people want to emphasize just how many different definitions of “benevolence” there are in the world — the point, of course, being that humans are very far from agreeing on a universal definition of benevolence, so how can we expect to program something we cannot define into an AI?
French philosophers Bergson and Deleuze bring to nanocognition and machine ethics interfaces philosophical conceptualizations of image, movement, time, perception, memory, and reality — conceptualizations that could be implemented in tools for both cognitive enhancement and subjectivation (the greater actualization of human potential).
Some people think that neuroscience will have a significant impact on the law. Some people are more sceptical. A recent book by Michael Pardo and Dennis Patterson — Minds, Brains and Law: The Conceptual Foundations of Law and Neuroscience — belongs to the sceptical camp. In the book, Pardo and Patterson make a passionate plea for conceptual clarity when it comes to the interpretation of neuroscientific evidence and its potential application in the law. They suggest that most neurolaw hype stems from conceptual confusion. They want to throw some philosophical cold water on the proponents of this hype.
Many scientists believe that we will soon be able to preserve our consciousness indefinitely. There are a number of scenarios by which this might be accomplished, but so-called mind uploading is one of the most prominent. Mind uploading refers to a hypothetical process of copying the contents of a consciousness from a brain to a computational device. This could be done by copying and transferring those contents into a computer wholesale, or piecemeal, with parts of the brain gradually replaced by hardware. Either way, consciousness would no longer be running on a biological brain.
I will attempt to take the fear out of the future by giving Transhumanism a digestible definition, while at the same time offering a cautionary note. As an educator, technologist and ethicist, I feel I have a social obligation to provide a rationale for understanding Transhumanism, both for those people who have questions about our natural evolution and for younger generations who are embracing technology but want to know there is a brighter future.
Police body cameras are all the rage lately. Al Sharpton wants them used to monitor the activities of cops. Ann Coulter wants them used to “shut down” Al Sharpton. The White House wants them because, well, they’re a way to look both “tough on police violence” and “tough on crime” by spending $263 million on new law enforcement technology.
Welcome to part 1 of the Your Mileage May Vary series of blog posts. The point of this series is to clearly and briefly state my personal view on matters which come up repeatedly, to save having to say the same things again and again. Although these are my own [Dr M. Amon Twyman's] views rather than the official position of any organisation (except where stated otherwise), no-one should be surprised when my own views coincide with those of organisations where I hold any position.
A few weeks back the technologist Jaron Lanier gave a provocative talk over at The Edge in which he declared ideas swirling around the current manifestation of AI to be a “myth”, and a dangerous myth at that. Yet Lanier was only one of a set of prominent thinkers and technologists who have appeared over the last few months to challenge what they saw as a flawed narrative surrounding recent advances in artificial intelligence.
Will robots pose exceptional challenges for the law? That’s the question taken up in Ryan Calo’s recent article “Robotics and the Lessons of Cyberlaw”. As noted in the previous entry, Calo thinks that robots have three distinguishing features: (i) embodiment (i.e. they are mechanical agents operating in the real world); (ii) emergence (i.e. they don’t simply perform routine operations, but are programmed to acquire and develop new behaviours); and (iii) social meaning (i.e. we anthropomorphise and attach social meaning to them). So when Calo asks whether robots pose exceptional challenges for the legal system, he asks in light of those three distinguishing features.
Here at Transvision 2014 in Paris we just concluded a meeting of the technoprogressive caucus to draft a statement of common principles. The meeting consisted of members of Technoprog! (AFT), Amon Twyman representing Zero State/Institute for Social Futurism, David Wood from the London Futurists, and me (J. Hughes) from the IEET. The result is below. We are inviting individual and organizational co-signatories. Please let me know if you would like to add your or your organization’s name. We would like to collect co-signatories between now and the end of the year, so you don’t have to decide immediately.
Four years ago I posted Professor Robert Winston’s “Scientist’s Manifesto” on 2020 Science. Having just gone back and read it, it still resonates deeply with me — so I’m reposting it in the hope that it will also resonate with others…
Whatever a transhuman is, xe (a pronoun to encompass all conceivable states of personhood) will have to live in a world that enables xer to be transhuman. I’ll explore the impact of three likely-seeming aspects of that world: ubiquitous interconnected smart machines, continuous classification, and virtualism.
Digital technology is progressing very slowly when it comes to government: the link between the citizen and the politician is often just a “feedback form” on the politician’s website. Very little effort has been made to link the citizen and the decision-making process in more effective and creative ways.
With a 3-D printer, an operator plugs in a virtual blueprint for an object, which the printer uses to construct the final product layer by layer. Several types of these printers exist, using a variety of materials as the “ink.” The most popular models work by extruding a filament of molten plastic: the print head makes repeated passes over the item being printed, building up the 3-D structure one layer at a time.
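The layer-by-layer process above can be sketched in code. This is a minimal illustration of my own, not taken from the article: it "slices" a simple shape (a sphere, standing in for the virtual blueprint) into the horizontal cross-sections the print head would trace on each pass. The function name and parameters are assumptions for the sake of the example.

```python
import math

def slice_sphere(radius_mm, layer_height_mm):
    """Slice a sphere of the given radius into printable layers.

    Returns the cross-section (circle) radius for each layer,
    bottom to top -- the path the print head would trace per pass.
    """
    layers = []
    z = -radius_mm + layer_height_mm / 2  # height of the first layer's centre
    while z < radius_mm:
        # Radius of the sphere's circular cross-section at height z
        layers.append(math.sqrt(max(radius_mm**2 - z**2, 0.0)))
        z += layer_height_mm
    return layers

profile = slice_sphere(radius_mm=10.0, layer_height_mm=2.0)
print(len(profile))  # → 10 layers for a 20 mm sphere at 2 mm layer height
```

A real slicer does far more (tool paths, infill, supports), but the core idea is the same: reduce a 3-D model to a stack of 2-D passes.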
Back in the early 19th century, a novel was written that tells the story of humanity’s downfall in the 21st century. Our undoing is the consequence of a disease that originates in the developing world and radiates outward, eventually spreading into North America, East Asia, and ultimately Europe. The disease proves unstoppable, causing the collapse of civilization, our greatest cities becoming grave sites of ruin. For all the reader is left to know, not one human being survives the pandemic.
What kind of emotional reactions do you have to robots? Until not very long ago, this question was the stuff of science fiction. But the recent proliferation of robots in the home, workplace and healthcare world brings the question squarely into everyday life. As a psychologist interested in exploring human-robot interaction, I’ve coined the term RoboPsych as an umbrella for our cognitive, emotional and behavioral reactions to the wide range of robots in our daily lives.
On November 1, 29-year-old Brittany Maynard took medication to end her life. This wasn’t an act of cowardice, nor due to some psychological condition. She ended her life because she wanted to die on her own terms, rather than suffer the eventually-fatal torment of terminal brain cancer. Her ability to legally commit suicide — or what she referred to as “death with dignity” — was due to the state of Oregon’s “Death With Dignity Act.”
We knew the risks. But last year, after my wife and I had our genomes sequenced, what we learned was still alarming. Amongst my wife’s results was a genetic variant associated with a significantly increased risk of Parkinson’s disease. And the matter-of-fact statistic on risk came with little information on how to reduce it.
People have for some time speculated about the possibility that we’re living inside a computer simulation. But the 2003 publication of Nick Bostrom’s “Are You Living In a Computer Simulation?” brought a new level of sophistication to the topic. Bostrom’s argument is that one (or more) of the following disjuncts is true: (i) our species will go extinct before reaching an advanced posthuman stage; (ii) our species will reach a posthuman stage but decide, for whatever reasons, not to run a large number of simulations; or (iii) we are almost certainly in a simulation — because if posthuman civilizations do run many simulations, simulated minds will vastly outnumber biological ones.
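The quantitative core of the trilemma can be made concrete. Below is a small sketch of my own (the function name, parameters, and example numbers are illustrative assumptions, not Bostrom's): it estimates what fraction of all observers would be simulated, given how likely civilizations are to reach a posthuman stage and how many ancestor-simulations each interested civilization runs.

```python
def simulated_fraction(f_posthuman, f_interested, sims_per_civ):
    """Estimate the fraction of observers who live inside simulations.

    f_posthuman:   fraction of civilizations reaching a posthuman stage
    f_interested:  fraction of those that choose to run ancestor-simulations
    sims_per_civ:  average number of simulated histories each such
                   civilization runs (one real history per civilization)
    """
    simulated_histories = f_posthuman * f_interested * sims_per_civ
    return simulated_histories / (simulated_histories + 1)

# Even if only 1% of civilizations ever run simulations, at 1,000
# simulated histories each, most observers are simulated:
print(round(simulated_fraction(0.01, 1.0, 1000), 3))  # → 0.909
```

The point of the toy model is that unless the numerator is driven to near zero — by extinction (disjunct i) or disinterest (disjunct ii) — the fraction approaches 1, which is disjunct (iii).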
Roughly (I’ll refine later on) the “technological singularity” (or “singularity” for short, and in the right context) is the name given to the point in time at which greater-than-human, superintelligent machines are created. The concept (and name) was popularised by the science fiction author Vernor Vinge in the 1980s and 90s, though its roots can be traced further back in time to the work of John von Neumann and I.J. Good.
Advances in robotics and artificial intelligence are going to play an increasingly important role in human society. Over the past two years, I’ve written several posts about this topic. The majority of them focus on machine ethics and the potential risks of an intelligence explosion; others look at how we might interact with and have duties toward robots.
Robots are poised to eliminate millions of jobs over the coming decades. We have to address the coming epidemic of “technological unemployment” if we’re to avoid crippling levels of poverty and societal collapse. Here’s how a guaranteed basic income will help — and why it’s absolutely inevitable.
The onset of transhumanism, political or not, may rally many people against technological innovations such as the integration of the human species with computers and the re-design of our species’ DNA for enhancement purposes. The people of the world need to cooperate and value education so that we never see any of the dystopian posthumanist scenarios play out the way many think they might.