Many scientists believe that we will soon be able to preserve our consciousness indefinitely. There are a number of scenarios by which this might be accomplished, but so-called mind uploading is one of the most prominent. Mind uploading refers to a hypothetical process of copying the contents of a consciousness from a brain to a computational device. This could be done by copying and transferring these contents into a computer, or by piecemeal replacement, with parts of the brain gradually replaced by hardware. Either way, consciousness would no longer be running on a biological brain.
There are several reasons why creating a superintelligent mind could bring about an existential catastrophe. For example, the AI could be malicious or unfriendly, a scenario that I call the amity-enmity problem. It looms large in Nick Bostrom's recent book Superintelligence, in which Bostrom suggests that we should recognize "doom" as the "default outcome" of creating a superintelligence. An AI could also be apathetic about our well-being and continued survival. Perhaps it wants to convert the entire surface of the earth into solar panels (an example that Bostrom mentions), and as a result it annihilates the biosphere. Let's call this the indifference problem.
Whatever a transhuman is, xe (a pronoun to encompass all conceivable states of personhood) will have to live in a world that enables xer to be transhuman. I’ll explore the impact of three likely-seeming aspects of that world: ubiquitous interconnected smart machines, continuous classification, and virtualism.
I had the opportunity to see Wally Pfister’s Transcendence, with Johnny Depp, Rebecca Hall, and Morgan Freeman, only last week, more than three months after the film’s release in theaters. Before seeing the film I satisfied my Transcendence cravings with an old, still unnamed copy of Jack Paglen’s script that can be found online (it appears that Paglen’s screenplay was part of what is known as the Black List, a list of popular but unproduced screenplays in Hollywood).
Most transhumanists are already familiar with digitalism, even if they haven’t heard the name. Digitalism uses ideas from computer science to develop new ways of thinking about old topics. Writers like Ed Fredkin, Hans Moravec, Frank Tipler, Nick Bostrom, and Ray Kurzweil are digitalists. Typically, digitalists are scientists, rationalists, naturalists, and atheists. Nevertheless, they have worked out novel and deeply meaningful ways of thinking about things like ghosts, souls, gods, resurrection, and reincarnation.
There's a pervasive notion that monogamous relationships are the be-all and end-all: the default pact in human couplings that keeps the fabric of society from being torn apart. But growing numbers of scientists believe monogamy is not our biological default, and may not even represent the best road to happiness.
The Gamer’s Dilemma is the title of an article by Morgan Luck. We covered that article in part one. In brief, the article argues that there is something puzzling about attitudes toward virtual acts which, if they took place in the real world, would be immoral. To be precise, there is something puzzling about attitudes toward virtual murder and virtual paedophilia.
Building machines that process information the same way a brain does has been a dream for over 50 years. Artificial intelligence, fuzzy logic, and neural networks have all experienced some degrees of success, but machines still cannot recognize pictures or understand language as well as humans can.
Modern video games give players the opportunity to engage in highly realistic depictions of violent acts. Among these is the act of virtual murder: the player's character intentionally kills someone in the game environment without good cause. Most avid gamers don't seem overly concerned about this (reputed links between video games and violence notwithstanding). Nevertheless, when the possibility of other immoral virtual acts — say, virtual paedophilia — is raised, people become rather more squeamish. Why is this? And is this double standard justified?
It seems as though every day we grow closer to creating fully conscious and emergent artificial intelligences. As I’ve written about before, this poses a problem for many religions, especially those that ascribe a special place for humanity and for human consciousness in the cosmos. Buddhism stands out as an exception. Buddhism may be the one system of religious thought that not only accepts but will actively embrace any AIs that we produce as a species.
So the other day Julia Galef and I had the pleasure of interviewing mathematical cosmologist Max Tegmark for the Rationally Speaking podcast. The episode will come out in late January, close to the release of Max’s book, presenting his Mathematical Universe Hypothesis (MUH). We had a lively and interesting conversation, but in the end, I’m not convinced (and I doubt Julia was either).
Who has time anymore to manage their social media feeds? All the status updating, replying, and posting of smart takes on the day's news is exhausting. Well, Google wants to help you out with that: The company recently submitted a patent for software that learns how users respond to social media posts and then automatically recommends updates and replies they can make for future ones. Consider it outsourcing for your social life — an amped-up, next-gen blend of automated birthday reminders and computer-generated, personalized remarks (more successful Turing Test than random word salad).
Human beings have long performed sexual acts with artifacts. Ancient religious rituals oftentimes involved the performance of sexual acts with statues, and down through the ages a vast array of devices for sexual stimulation and gratification have been created. Little wonder, then, that a perennial goal among roboticists and AI experts has been the creation of sex robots ("sexbots"): robots from whom we can receive sexual gratification, and with whom we may even be able to achieve an emotional connection.
A recent UN State of the Future Report projects that by 2100, world population will total 9 billion, just 2 billion more than today. But the report did not account for radically increased life spans. Many forward thinkers, including this writer, believe that today’s biotech efforts with stem cell therapies and genetic engineering techniques, combined with molecular nanotech breakthroughs (the much hyped nanorobots whizzing through our veins), will provide a radical extension of human life.
We asked “If your mind was perfectly copied to a new body…” who would the mindclones be, and who would own your stuff? The 165 of you who answered were almost perfectly split three ways on this old debate about personal identity.
The year is 2025 and there's a raging snowstorm outside. The world is a pale shade of white and gray. You wake up and instinctively look around the bedroom to locate the amber dot glowing on your G-Glass iteration #4 (4th-generation upgrade) visor.
It's another blow for immersive virtual reality. University of California researchers have shown that even people with perfect eyesight navigate the world by relying on a lot more than what they see. Here's why VR won't really work until we go beyond visual cues and fancy treadmills.
Let’s face it: Technology and etiquette have been colliding for some time now, and things have finally boiled over if the recent spate of media criticisms is anything to go by. There’s the voicemail, not to be left unless you’re “dying.” There’s the e-mail signoff that we need to “kill.” And then there’s the observation that what was once normal — like asking someone for directions — is now considered “uncivilized.”
This year, one of the more thought-provoking thought experiments to appear in recent memory has its tenth anniversary. Nick Bostrom's Philosophical Quarterly paper "Are You Living in a Simulation?" might have sounded like the types of conversations we all had after leaving the theater having seen The Matrix, but Bostrom's attempt was serious. (There is a great recent video of Bostrom discussing his argument at the IEET.) What he did in his paper was create a formal argument around the seemingly fanciful question of whether or not we are living in a simulated world. Here is how he stated it…
Today, drones, eldercare, and pets. Tomorrow, household servants, love partners, and much more. Although some people might find the idea of love with a machine repulsive, experts predict that as the technology advances and robots become more human-like, we will view our silicon cousins in a friendlier light.