As we head deeper into the 21st century, we’re starting to catch a glimpse of the fantastic technological possibilities that await. But we’re also starting to get a grim sense of the potential horrors. Here are 10 frightening technologies that should never, ever come into existence.
An important question regarding human enhancement in the military is how the deployment of modified soldiers will redefine the ethical limitations on how combatants may be treated. The provisions of the Geneva Conventions and other bodies of international law prohibiting torture generally rest on certain assumptions about the human condition, such as pain thresholds, sleep requirements, and other forms of fragility.
This isn’t a complete review of Nick Bostrom’s Superintelligence (2014), but a summary of the thoughts that came to my mind while and after reading the book. Superintelligence: Paths, Dangers, Strategies (2014) opens with a cautionary fable: a group of sparrows consider finding an owl to assist and protect them. Only the more cautious sparrows see the danger – that the owl may eat them all if they don’t first figure out how to tame it – and Bostrom dedicates the book to them (and of course to the cautious humans who fear that superintelligent life forms may destroy humanity if we don’t first figure out how to control them).
Last week, I published a guest post at Wired UK called It's Time to Consider Restricting Human Breeding. It was an opinion article that generated many commentary stories, over a thousand comments across the web, and even a few death threats against me.
This is the sixth part in my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. The series is covering those parts of the book that most interest me. This includes the sections setting out the basic argument for thinking that the creation of superintelligent AI could threaten human existence, and the proposed methods for dealing with that threat.
I had the opportunity to see Wally Pfister’s Transcendence, with Johnny Depp, Rebecca Hall, and Morgan Freeman, only last week, more than three months after the film’s release in theaters. Before seeing the film I satisfied my Transcendence cravings with an old, still unnamed copy of Jack Paglen’s script that can be found online (it appears that Paglen’s screenplay was part of what is known as the Black List, a list of popular but unproduced screenplays in Hollywood).
Human beings seem to have an innate need to predict the future. We’ve read the entrails of animals, thrown bones, and tried to use the regularity (or lack of it) in the night sky as a projection of the future and an omen of things to come, along with a thousand other kinds of divination few of us have ever heard of. This need to predict the future makes perfect sense for a creature whose knowledge bias is toward the present and the past. Survival means seeing far enough ahead to avoid dangers, so an animal that could successfully predict what was around the next corner could avoid being eaten or suffering famine.
Should animals be permitted to hunt and kill other animals? Some futurists believe that humans should intervene, and solve the “problem” of predator vs. prey once and for all. We talked to the man who wants to use radical ecoengineering to put an end to the carnage. A world without predators certainly sounds extreme, and it is. But British philosopher David Pearce can’t imagine a future in which animals continue to be trapped in the never-ending cycle of blind Darwinian processes.
George Slusser is Professor Emeritus of Comparative Literature at the University of California, Riverside (UCR), holds a Ph.D. in Comparative Literature from Harvard University, and was the first Curator (now Emeritus) of the J. Lloyd Eaton Collection of Science Fiction & Fantasy, Utopian, and Horror Literature at UCR – the world’s largest SF collection. He has been a Harvard Traveling Fellow and a Fulbright Lecturer, coordinated twenty-three Eaton SF Conferences, and is the author of numerous books, studies, and articles in the field of science fiction studies.
Maria Konovalenko discusses personalized medicine services, why you should participate in clinical trials of geroprotector drug candidates, personalized science, how scientific research should be organized, why you should be friends with people with no harmful habits, creating crowdfunding campaigns in the area of longevity, why you should increase your own competence and promote the value of human longevity, and neuropreservation.
This is the second post in my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. In the previous post, I looked at Bostrom’s defence of the orthogonality thesis. This thesis claimed that pretty much any level of intelligence — when “intelligence” is understood as skill at means-end reasoning — is compatible with pretty much any (final) goal. Thus, an artificial agent could have a very high level of intelligence, and nevertheless use that intelligence to pursue very odd final goals, including goals that are inimical to the survival of human beings. In other words, there is no guarantee that high levels of intelligence among AIs will lead to a better world for us.
In this entry, I take a look at Bostrom’s orthogonality thesis. As we shall see, this thesis is central to his claim that superintelligent AIs could pose profound existential risks to human beings. But what does the thesis mean and how plausible is it?
Lately, I’ve been enjoying reruns of the relatively new BBC series Sherlock, starring Benedict Cumberbatch, which imagines Arthur Conan Doyle’s famous detective in our 21st century world. The thing I really enjoy about the show is that it’s the first time I can recall that anyone has managed to make Sherlock Holmes funny without at the same time undermining the whole premise of a character whose purely logical style of thinking makes him seem more a robot than a human being.
While at conferences and doing research and writing over the past couple of years, I’ve noticed a lot of confusion about the terms “posthuman,” “transhuman,” and “posthumanism.” A lot of people—including scholars who should know better—use these terms pretty much interchangeably and indiscriminately. Part of the problem is that these terms are all fairly new. So for clarity’s sake, I offer these simple thumbnail definitions of all three terms…
So I finally got around to reading Max Tegmark’s book Our Mathematical Universe, and while the book answered the question that had led me to read it – namely, how one might reconcile Plato’s idea of eternal mathematical forms with the concept of multiple universes – it also threw up a whole host of new questions. This beautifully written and thought-provoking book made me wonder about the future of science and the scientific method, the limits to human knowledge, and the scientific, philosophical, and moral meaning of various ideas of the multiverse.
Most broadly, Social Futurism stands for positive social change through technology; i.e. addressing social justice issues in radically new ways which are only just now becoming possible thanks to technological innovation. If you would like some introduction to Social Futurist ideas, you can read the introduction page at wavism.net, and there are links to articles at http://IEET.org listed at the top of this post. In this post I will discuss the Social Futurist alternative to Liberal Democratic and Authoritarian states, how that model fits with our views on decentralization and subsidiarity, and its relevance to the political concept of a “Third Way”.
Although Jibo, designed by MIT professor Cynthia Breazeal to be the “world’s first family robot,” isn’t set to ship until 2015, folks are already excited about this little bot with a “big personality.” While there’s much to be said for Breazeal’s vision of “humanizing technology” so that the smart home of the future doesn’t “feel cold and computerized,” we might want to pause a bit before rushing to build the type of world depicted in the movie Her. Although it is easy to imagine we’ll be better off when we’ve got less to do, we don’t actually know the existential and social implications of outsourcing ever-more intimate tasks to technology.
Overview of Advances Articulated in Nanomedical Device and Systems Design: Challenges, Possibilities, Visions (2013)

This article provides an overview of the research findings related to cognitive enhancement that are presented in Nanomedical Device and Systems Design: Challenges, Possibilities, Visions (2013), an encyclopedic textbook chronicling a plethora of recent advances in myriad areas of nanotechnology and nanomedicine. The final chapter discusses progress in nanomedical cognitive enhancement, where we find ourselves in a modern era in which many technologies appear to be on the cusp – helping to resolve pathologies while also having much future potential for the augmentation of human capabilities.
Geoengineering has come under attack recently from conspiracy theorists and scientists to “greens.” There have been many kinds of proposals for geoengineering, including a legally dubious experiment that poured 200,000 pounds of iron sulfate into the North Pacific, which was supposed to increase plankton that would absorb carbon dioxide. The experiment did not work and pissed off a lot of scientists. China also recently stopped its “flattening of mountains.” Therefore this article is not purely about techniques for combating global warming, but about the need for people to understand that geoengineering is not only a must, but also a “human right.”
Imagine if you could take an exotic vacation billions of light years from Earth, peek in on the dinosaurs first-hand, or jump into a parallel universe where another you is living a more exciting life than yours – and you could swap places if you like.
If predictions by future thinkers such as Aubrey de Grey, Robert Freitas, and Ray Kurzweil ring true – that future science will one day eliminate the disease of aging – then it makes sense to consider the repercussions a non-aging society might have on our world.
Although a study from the Oxford Martin Programme on the Impacts of Future Technology suggests that nearly half of U.S. jobs could be at risk of computerization over the next two decades, this need not be bad news, says futurist Thomas Frey in a recent Futurist Magazine essay.
Over the past few weeks, a question we have faced before as a species reared its head once again: Should we destroy the last known samples of smallpox on Earth? The answer might seem obvious, may not even seem to require a second thought: Of course we eradicate smallpox! What good is it? One question I would ask in response is: What kind of species do we want to be?
Most transhumanists are already familiar with digitalism, even if they haven’t heard the name. Digitalism uses ideas from computer science to develop new ways of thinking about old topics. Writers like Ed Fredkin, Hans Moravec, Frank Tipler, Nick Bostrom, and Ray Kurzweil are digitalists. Typically, digitalists are scientists, rationalists, naturalists, and atheists. Nevertheless, they have worked out novel and deeply meaningful ways of thinking about things like ghosts, souls, gods, resurrection, and reincarnation.
Some futurists and science fiction writers predict that we’re on the cusp of a world-changing “Technological Singularity.” Skeptics say there will be no such thing. Today, I’ll be debating author Ramez Naam about which side is right.
Many people associate transhumanism—the field of using science and technology to radically alter and improve the human being—with scientists, technologists and futurists. Historically, this has been quite correct. However, today, the transhumanist movement is on the verge of going mainstream. Mentions of the movement in the press have skyrocketed recently.