(Michael Anissimov is guest blogging at Sentient Developments this month.) In considering a possible transhumanist future where cybernetic implants and other enhancements must be designed for use by billions of people, I worry about an associated slaughter: that of all the animals that would have to serve as test subjects before those enhancements could be made to work. Given that some modifications will surely involve replacing limbs, organs, and just about every other part of the body, and will always carry the risk of immune rejection, it seems heartless to subject millions of test animals to excruciating deaths.
I noticed that my post about nanofactory regulation was linked from the Wikipedia article on post-scarcity. In it, I proposed a DRM-like system to prevent any old nanofactory from manufacturing things like bombs. Radical and Luddite, I know.
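For the curious, here is a minimal sketch of what such a gate might look like, assuming every blueprint must carry a signature from a licensing authority before a factory will build it. Everything below is hypothetical: the names, the shared-secret HMAC scheme, and the premise that the key sits in tamper-resistant hardware inside the factory.

```python
# Hypothetical sketch of a DRM-style manufacturing gate: the factory
# refuses any blueprint that a licensing authority hasn't signed.
# The names and the shared-secret HMAC scheme are illustrative only.
import hashlib
import hmac

AUTHORITY_KEY = b"key-held-in-tamper-resistant-hardware"  # assumption

def sign_blueprint(blueprint: bytes) -> str:
    """What the licensing authority does after a safety review."""
    return hmac.new(AUTHORITY_KEY, blueprint, hashlib.sha256).hexdigest()

def fabricate(blueprint: bytes, signature: str) -> None:
    expected = hmac.new(AUTHORITY_KEY, blueprint, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        raise PermissionError("unsigned or tampered blueprint; refusing to build")
    print("building:", blueprint.decode())

bench = b"design: park bench"
fabricate(bench, sign_blueprint(bench))  # approved design builds fine

try:
    fabricate(b"design: bomb", "forged signature")
except PermissionError as err:
    print("blocked:", err)
```

A real deployment would presumably use asymmetric signatures instead, so each factory holds only a verification key and compromising one machine doesn’t leak the authority’s signing key.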
I’m reading the blog of Wesley Smith, a bioethicist with the Discovery Institute. He mentions transhumanism frequently: at least 212 posts. Unlike Charles Stross, he does seem to believe that the 21st century could bring radical changes through the manipulation of human beings and the creation of new human-like life-forms, including AGIs; he just doesn’t think we should go down that route.
Universal mind uploading, or universal uploading for short, is the concept, by no means original to me, that the technology of mind uploading will eventually be adopted by virtually everyone who can afford it, much as people adopted modern agriculture, hygiene, and living in houses.
No, I’m not talking about Heat Death or another Ice Age. I’m talking about what could happen if mind uploading becomes universally or near-universally adopted and every mind is accelerated by a factor of several million or several billion. Such an outcome seems inevitable if mind uploading is actually possible.
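To put rough numbers on “several million,” here is the usual back-of-envelope arithmetic comparing biological signaling rates to electronic clock rates. Both figures are order-of-magnitude assumptions on my part, not measurements.

```python
# Rough, speculative arithmetic behind "a factor of several million":
# compare a fast biological firing rate to an ordinary processor clock.
# Both numbers are order-of-magnitude assumptions, not measurements.
NEURON_RATE_HZ = 200      # roughly the peak firing rate of a fast neuron
CLOCK_RATE_HZ = 2e9       # an unremarkable 2 GHz clock

speedup = CLOCK_RATE_HZ / NEURON_RATE_HZ
print(f"naive speedup: {speedup:.0e}x")        # ~1e7

subjective_years = 1 * speedup                 # per calendar year outside
print(f"1 outside year = {subjective_years:,.0f} subjective years")
```

Even the conservative end of that range means a single calendar year would contain millions of subjective years for an uploaded civilization.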
For those who believe that human-level AI isn’t far off and that a rosy scenario isn’t inevitable, 2009 is a somewhat depressing time. Popular opinion holds that AI won’t be here for centuries, but that isn’t a huge problem. (In fact, it makes things easier by limiting the number of people involved in AI research, thus allowing me and my confederates to keep a closer eye on them.)
As I see it, there are three main categories of risk: bio, nano, and AI/robotics. These man-made risks make up the vast majority of the threat magnitude over the coming century and deserve most of our attention.
Dr. Phineas Waldolf Steel is a mentally twisted but awe-inspiring figure whose interests span the production of propaganda, the construction of chronically malfunctioning robots, puppet shows, and an ongoing attempt to become World Emperor for the purpose of turning this planet into a Utopian Playland.
Leon Kass, the scientific community frowns on your deathist shenanigans and paternalistic tomfoolery. We will continue to denounce your anti-freedom, control-freak bioethical views until the day your theocon allies are booted out of the White House, which will occur on January 20, 2009. Enjoy your eight months.
To put it in a single sentence, I’d say it’s because only a minority of cognitively possible goal sets place a high priority on the continued survival of human beings and the structures we value. Another reason is that we can’t yet specify what we value in enough mathematical detail to transfer it to a new species without a great deal of hassle.
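As a toy illustration of that first point (an illustration, not a proof), one can sample random “goal sets” as utility weights over a handful of world features and count how often human survival comes out on top. The feature list and the uniform sampling are arbitrary assumptions of mine.

```python
# Toy illustration, not a proof: sample random "goal sets" as utility
# weights over a few world features and count how often human survival
# ends up as the single top priority. The feature list and the uniform
# sampling are arbitrary assumptions, not anyone's actual model.
import random

FEATURES = ["paperclips", "energy_capture", "computation",
            "resource_acquisition", "human_survival"]
HUMAN = FEATURES.index("human_survival")
TRIALS = 100_000

random.seed(0)
friendly = 0
for _ in range(TRIALS):
    weights = [random.random() for _ in FEATURES]
    if max(weights) == weights[HUMAN]:
        friendly += 1

print(f"goal sets that top-rank human survival: {friendly / TRIALS:.1%}")
```

With five symmetric features the answer converges to about 20%, and the structural point is that as the space of things a mind could care about grows, the fraction of goal sets that happen to privilege us shrinks toward zero.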
Despite being a transhumanist who wants to transcend my boundaries, I agree strongly with the need for limits and constraints as we move towards increasingly transformative technologies. For some, “no limits, yay!” is the rallying cry, but I look at the situation from a thermodynamic, not political, perspective.