I had the opportunity to see Wally Pfister’s Transcendence, with Johnny Depp, Rebecca Hall, and Morgan Freeman, only last week, more than three months after the film’s release in theaters. Before seeing the film I satisfied my Transcendence cravings with an old, still unnamed copy of Jack Paglen’s script that can be found online (it appears that Paglen’s screenplay was part of what is known as the Black List, a list of popular but unproduced screenplays in Hollywood).
Human beings seem to have an innate need to predict the future. We’ve read the entrails of animals, thrown bones, tried to use the regularity or lack of it in the night sky as a projection of the future and omen of things to come, along with a thousand other kinds of divination few of us have ever heard of. This need to predict the future makes perfect sense for a creature whose knowledge is biased towards the present and the past. Survival means seeing far enough ahead to avoid dangers, so an animal that could successfully predict what was around the next corner could avoid being eaten or suffering famine.
Should animals be permitted to hunt and kill other animals? Some futurists believe that humans should intervene, and solve the “problem” of predator vs. prey once and for all. We talked to the man who wants to use radical ecoengineering to put an end to the carnage. A world without predators certainly sounds extreme, and it is. But British philosopher David Pearce can’t imagine a future in which animals continue to be trapped in the never-ending cycle of blind Darwinian processes.
George Slusser is Professor Emeritus of Comparative Literature at the University of California, Riverside (UCR, CA, U.S.A.). He holds a Ph.D. in Comparative Literature from Harvard University and was the first Curator (now Emeritus) of the J. Lloyd Eaton Collection of Science Fiction & Fantasy, Utopian and Horror Literature at UCR, the world’s largest SF collection. He has been a Harvard Traveling Fellow and a Fulbright Lecturer, coordinated twenty-three Eaton SF Conferences, and is the author of numerous books, studies, and articles in the field of science fiction studies.
Maria Konovalenko discusses personalized medicine services, why you should participate in clinical trials of geroprotector drug candidates, personalized science, how scientific research should be organized, why you should be friends with people who have no harmful habits, creating crowdfunding campaigns in the area of longevity, why you should increase your own competence and promote the value of human longevity, and neuropreservation.
This is the second post in my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies. In the previous post, I looked at Bostrom’s defence of the orthogonality thesis. This thesis claimed that pretty much any level of intelligence — when “intelligence” is understood as skill at means-end reasoning — is compatible with pretty much any (final) goal. Thus, an artificial agent could have a very high level of intelligence, and nevertheless use that intelligence to pursue very odd final goals, including goals that are inimical to the survival of human beings. In other words, there is no guarantee that high levels of intelligence among AIs will lead to a better world for us.
In this entry, I take a look at Bostrom’s orthogonality thesis. As we shall see, this thesis is central to his claim that superintelligent AIs could pose profound existential risks to human beings. But what does the thesis mean and how plausible is it?
Lately, I’ve been enjoying reruns of the relatively new BBC series Sherlock, starring Benedict Cumberbatch, which imagines Arthur Conan Doyle’s famous detective in our 21st-century world. The thing I really enjoy about the show is that it’s the first time I can recall anyone managing to make Sherlock Holmes funny without at the same time undermining the whole premise of a character whose purely logical style of thinking makes him seem more a robot than a human being.
While at conferences and doing research and writing over the past couple of years, I’ve noticed a lot of confusion about the terms “posthuman,” “transhuman,” and “posthumanism.” A lot of people—including scholars who should know better—use these terms pretty much interchangeably and indiscriminately. Part of the problem is that these terms are all fairly new. So for clarity’s sake, I offer these simple thumbnail definitions of all three terms…
So I finally got around to reading Max Tegmark’s book Our Mathematical Universe, and while the book answered the question that had led me to read it — namely, how one might reconcile Plato’s idea of eternal mathematical forms with the concept of multiple universes — it also threw up a whole host of new questions. This beautifully written and thought-provoking book made me wonder about the future of science and the scientific method, the limits of human knowledge, and the scientific, philosophical, and moral meaning of various ideas of the multiverse.
Most broadly, Social Futurism stands for positive social change through technology; that is, addressing social justice issues in radically new ways which are only now becoming possible thanks to technological innovation. If you would like an introduction to Social Futurist ideas, you can read the introduction page at wavism.net, and there are links to articles at http://IEET.org listed at the top of this post. In this post I will discuss the Social Futurist alternative to Liberal Democratic and Authoritarian states, how that model fits with our views on decentralization and subsidiarity, and its relevance to the political concept of a “Third Way”.
Although Jibo, designed by MIT professor Cynthia Breazeal to be the “world’s first family robot,” isn’t set to ship until 2015, folks are already excited about this little bot with a “big personality.” While there’s much to be said for Breazeal’s vision of “humanizing technology” so that the smart home of the future doesn’t “feel cold and computerized,” we might want to pause a bit before rushing to build the type of world depicted in the movie Her. Although it is easy to imagine we’ll be better off when we’ve got less to do, we don’t actually know the existential and social implications of outsourcing ever-more intimate tasks to technology.
Overview of Advances Articulated in Nanomedical Device and Systems Design: Challenges, Possibilities, Visions (2013)

This article provides an overview of the research findings related to cognitive enhancement that are presented in Nanomedical Device and Systems Design: Challenges, Possibilities, Visions (2013), an encyclopedic textbook chronicling a plethora of recent advances in myriad areas of nanotechnology and nanomedicine. The final chapter discusses progress in nanomedical cognitive enhancement, where we find ourselves in a modern era in which many technologies appear to be on the cusp — helping to resolve pathologies while also holding much future potential for the augmentation of human capabilities.
Geoengineering has come under attack recently from conspiracy theorists and scientists to “greens.” There have been many kinds of geoengineering proposals, including one legally dubious experiment that poured 200,000 pounds of iron sulfate into the North Pacific, which was supposed to increase plankton that would absorb carbon dioxide. The experiment did not work and pissed off a lot of scientists. China also recently stopped its “flattening of mountains.” This article, then, is not purely about techniques for combating global warming, but about the need for people to understand that geoengineering is a must — not only a must, but also a “human right.”
Imagine if you could take an exotic vacation billions of light years from Earth, peek in on the dinosaurs first-hand, or jump into a parallel universe where another you is living a more exciting life than yours, with the option to swap places if you like.
If predictions by future thinkers such as Aubrey de Grey, Robert Freitas, and Ray Kurzweil ring true — that future science will one day eliminate the disease of aging — then it makes sense to consider the repercussions a non-aging society might have on our world.
Although a study from the Oxford Martin Programme on the Impacts of Future Technology suggests that nearly half of U.S. jobs could be at risk of computerization over the next two decades, this need not be bad news, says futurist Thomas Frey in a recent Futurist Magazine essay.
Over the past few weeks, a question we have faced before as a species reared its head once again: Should we destroy the last known samples of smallpox on Earth? The answer might seem obvious, may not even seem to require a second thought: Of course we eradicate smallpox! What good is it? One question I would ask in response is: What kind of species do we want to be?
Most transhumanists are already familiar with digitalism, even if they haven’t heard the name. Digitalism uses ideas from computer science to develop new ways of thinking about old topics. Writers like Ed Fredkin, Hans Moravec, Frank Tipler, Nick Bostrom, and Ray Kurzweil are digitalists. Typically, digitalists are scientists, rationalists, naturalists, and atheists. Nevertheless, they have worked out novel and deeply meaningful ways of thinking about things like ghosts, souls, gods, resurrection, and reincarnation.
Some futurists and science fiction writers predict that we’re on the cusp of a world-changing “Technological Singularity.” Skeptics say there will be no such thing. Today, I’ll be debating author Ramez Naam about which side is right.
Many people associate transhumanism—the field of using science and technology to radically alter and improve the human being—with scientists, technologists and futurists. Historically, this has been quite correct. However, today, the transhumanist movement is on the verge of going mainstream. Mentions of the movement in the press have skyrocketed recently.
The Borg are the true villains of the Star Trek universe. True, the Klingons are warlike and jingoistic, the Romulans are devious and isolationist, and the Cardassians are just plain devious, but their methods and motivations are, for want of a better word, all too human-like. The Borg are truly alien: a hive-like superorganism, bent upon assimilating every living thing into their collective mind. To hardy individualists, this is the epitome of evil.
Positive future watchers believe we will see more progress in the next three decades than was experienced over the last 200 years. In The Singularity is Near, author Ray Kurzweil reveals how science will change the ways we live, work, and play. The following timeline looks at some amazing possibilities as we venture ahead in what promises to become an incredible future…
The ethics of the future may well shift toward immanence. In philosophy, immanence describes situations where everything comes from within a system, world, or person, as opposed to transcendence, where specifications are determined externally.
Communication is the basic principle of social interaction. We know that microbes use a method of communication called quorum sensing [1], cetaceans have their whale song [2], plants have airborne chemical communication, and fungi transfer signals via their roots [3]. Let us take a moment to think about how machines communicate with each other.
Whether you consider yourself a futurist, a technoprogressive, or a Transhumanist, we all recognize the ongoing neglect by mainstream media, Hollywood, and other prominent media institutions of a growing realization: the concepts of both work and death are changing before our very eyes! From technological unemployment now starting to affect workers in the industrial nations, to the international scientific community becoming more involved in anti-aging research, it’s quite clear that our near future may see the destruction of what we consider “working” and “dying.”