Among transhumanists, Nick Bostrom is well known for promoting the idea of ‘existential risks’: potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, Bostrom claims, that it is rational to devote significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.
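The expected-value reasoning behind this claim can be made concrete with a toy calculation. The numbers below are purely illustrative assumptions, not figures from Bostrom: the point is only that a tiny probability multiplied by an astronomically large harm can still dominate a likelier but bounded risk.

```python
# Illustrative sketch of expected-value reasoning about existential risk.
# All probabilities and magnitudes here are made-up numbers for the example.

def expected_loss(probability: float, magnitude: float) -> float:
    """Expected loss = probability of the event times the harm if it occurs."""
    return probability * magnitude

# A mundane risk: fairly likely, modest harm (arbitrary units).
mundane = expected_loss(probability=0.10, magnitude=1e3)

# An existential risk: very unlikely, but the stakes include all future
# generations, so the magnitude term is enormous.
existential = expected_loss(probability=0.001, magnitude=1e10)

print(mundane)      # 100.0
print(existential)  # 10000000.0
```

On these (invented) numbers, the rare catastrophe carries an expected loss five orders of magnitude larger than the common nuisance, which is the shape of the argument for devoting resources to it.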
Imagine the day when we finally receive a signal from an extraterrestrial intelligence, only to find that there’s a message embedded within. Given that we don’t speak the same language, how could we ever hope to make sense of it? We spoke to the experts to find out.
Communication with Extraterrestrial Intelligence, aka “CETI”, is the branch of SETI concerned with both the transmission and reception of messages between ourselves and an alien civilization. Scientists have been trying to detect signals from an extraterrestrial intelligence (ETI) since the 1960s, but haven’t found anything.
“For the modern mad men and wolves of Wall Street, gone are the days of widespread day drinking and functional cocaine use. Instead, in this age of efficiency above all else, corporate climbers sometimes seek a simple brain boost, something to help them to get the job done without manic jitters or a nasty crash.
For that, they are turning to nootropics,” writes Jack Smith IV in the cover story of an April 2015 edition of the New York Observer.
This restructuring of how we view science is geared not just at defending science from charges of reactionism from leftists, but more broadly at clarifying how we might view that much looser bundle invoked by the word “science” as a political force. Because the array of things popularly associated with “science” is so wildly varying and hazy, most of the political claims surrounding science that don’t slice it away to near irrelevance or neutrality as a formulaic procedure have sought to identify underlying ideological commitments and then define “science” in terms of them.
The fact of the matter is that the remarkably successful phenomenon that the term “Science!” has wrapped itself around is not so much a methodology as an orientation. What was really going on, what is still going on in science that has given it so many great insights, is the radicalism of scientists, that is to say their vigilant pursuit of the roots (Latin ‘radix’). Radicals constantly push our perspectives into extreme or alien contexts until they break or become littered with unwieldy complications, and when that happens we are happy to shed the historical baggage entirely and start anew: not just adding caveat upon caveat to an existing model, but sometimes pruning them away or throwing it all out entirely. Ours is the search for patterns and symmetries that might reflect more universal dynamics rather than merely good rules of thumb within a specific limited context. As any radical knows, “good enough” is never actually enough.
With the launch of Sputnik in 1957, humankind extended its presence from the Earth’s surface towards outer space. Since that time, thousands of other objects have been sent into Earth orbit, including weather satellites, communications equipment and military hardware. Wherever people go, they tend to leave their mark, mostly harmful, on the natural environment, and space is no exception. Many pieces of space junk, the remains of discarded, malfunctioning or obsolete devices, now whiz around the Earth and pose threats to current space projects.
IEET Fellow Wendell Wallach recently co-published an article in the National Academy of Sciences‘ ISSUES in Science and Technology journal with ASU law professor Gary E. Marchant. The piece, entitled Coordinating Technology Governance, explores the need for, and application of, a nimble, authoritative coordinating body, referred to as a Governance Coordination Committee, to fill an urgent gap in the assessment of the ethical, legal, social and economic consequences of emerging technologies.
After several decades of relative obscurity, Transhumanism as a philosophical and technological movement has finally begun to break out of its strange intellectual ghetto and make small inroads into the wider public consciousness. This is partly because some high-profile people have either adopted it as their worldview or warned against its potential dangers. Indeed, the political scientist Francis Fukuyama named it “the world’s most dangerous idea” in a 2004 article in the US magazine Foreign Policy, and Transhumanism’s most outspoken publicist, Ray Kurzweil, was recently made director of engineering at Google, presumably to hasten Transhumanism’s goals.
Skeptics of renewables sometimes cite data from the EIA (the US Department of Energy’s Energy Information Administration) or from the IEA (the OECD’s International Energy Agency). The IEA has a long history of underestimating solar and wind, a record that I think is starting to be understood.
This map shows that AI failure resulting in human extinction could happen on different levels of AI development, namely:
1. before it starts self-improvement (which is unlikely, but we can still envision several failure modes),
2. during its takeoff, when it uses different instruments to break out of its initial confinement, and
3. after its successful takeover of the world, when it starts to implement its goal system, which could be unfriendly, or whose friendliness may be flawed.
The halcyon days of the mid-20th century, when researchers at the (in?)famous Dartmouth summer school on AI dreamed of creating the first intelligent machine, seem so far away. Worries about the societal impacts of artificial intelligence (AI) are on the rise. Recent pronouncements from tech gurus like Elon Musk and Bill Gates have taken on a dramatically dystopian edge. They suggest that the proliferation and advance of AI could pose an existential threat to the human race.
One can’t help but be positive about the future. Even obstacles have a bright side. For example, humans will at some point be limited by space and time; we can’t expect to go far in space exploration without the development of strong artificial intelligence and robots.
Futurists like Ray Kurzweil believe that advancements in the field of artificial intelligence will culminate, in the near future, in a point that allows humans to transcend their biological form. This is what he calls the Singularity, and he describes it as follows:
My plan below needs to be perceived with irony because it is almost irrelevant: we have only a very small chance of surviving the next 1000 years. If we do survive, we have numerous tasks to accomplish before my plan can become a reality.
Additionally, there’s the possibility that the “end of the universe” will arrive sooner, if our collider experiments lead to a vacuum phase transition, which begins at one point and spreads across the visible universe.
We started to discuss Stevenson’s probe, a hypothetical vehicle that could reach the Earth’s core by melting its way through the mantle, taking scientific instruments with it. It would take the form of a large drop of molten iron, at least 60,000 tons, a scheme that is theoretically feasible but practically impossible.
Since their inception 60 years ago, satellites have gone on to become an indispensable component of our modern high-tech civilization. But because they’re reliable and practically invisible, we take their existence for granted. Here’s what would happen if all our satellites suddenly just disappeared.
The idea that all the satellites — or at least a good portion of them — could be rendered inoperable is not as outlandish as it might seem at first. There are at least three plausible scenarios in which this could happen.
On Friday, March 6, 2015, more than 3,000 people attended the ASU Emerge event. This is where Eric Kingsbury, futurist, founder of KITEBA, and cofounder of the Confluence Project, launched “You Have Been Inventoried”. I helped with some of the content for the project, along with others from the Confluence Project.
Everywhere you look in the world you can see pessimism, gloom, doom and negativity. No matter where you live, it seems many are convinced that there’s just no hope. Many people have stopped trying to do anything, while they “wait for god” or “wait for the Singularity.” Or simply wait, period.
The negativity is everywhere.
So, here’s one of my rants, against that negativity.
There has of late been a great deal of ink devoted to concerns about artificial intelligence, and a future world where machines can “think,” where the latter term ranges from simple autonomous decision-making to full-fledged self-awareness. I don’t share most of these concerns, and I am personally quite excited by the possibility of experiencing thinking machines, both for the opportunities they will provide for potentially improving the human condition and for the insights they will undoubtedly provide into the nature of consciousness.
This article examines the risks posed by “unknown unknowns,” which I call monsters. It then introduces a taxonomy of the unknowable, and argues that one category of this taxonomy in particular should lead us to inflate our prior probability estimates of annihilation, whatever they happen to be. The lesson here is ultimately the same as that of the Doomsday Argument, except the reasoning is far more robust.
If asked to rank humanity’s problems by severity, I would give the silver medal to the need to spend so much time doing things that give us no fulfillment—work, in a word. I consider that the ultimate goal of artificial intelligence is to hand off this burden, to robots that have enough common sense to perform those tasks with minimal supervision.
But some AI researchers have altogether loftier aspirations for future machines: they foresee computer functionality that vastly exceeds our own in every sphere of cognition. Such machines would not only do things that people prefer not to; they would also discover how to do things that no one can yet do. This process can, in principle, iterate—the more such machines can do, the more they can discover.
What’s not to like about that? Why do I NOT view it as a research goal superior to machines with common sense (which I’ll call “minions”)?
Technological change is accelerating and transforming our world. Assuming trends persist, we will soon experience an evolutionary shift in the mechanisms of reputation, a foundation on which relationships are based. Cascading effects of the shift will revolutionize the way we relate to each other and our machines, incentivizing unprecedented degrees of global cooperation.
In 2015, your smartphone probably has more computing power than the Apollo Guidance Computer, and yet Moore’s Law continues unabated at its fiftieth anniversary. Machines are becoming faster, smaller and smarter.
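The scale of fifty years of Moore’s Law can be sketched with simple arithmetic: counts that double every two years grow by a factor of 2^25, roughly 33 million, over half a century. The function and starting value below are illustrative assumptions, not measured chip data.

```python
# Back-of-the-envelope sketch of Moore's Law: a quantity that doubles
# every two years. The inputs are illustrative, not real transistor counts.

def moores_law_growth(start: float, years_elapsed: float,
                      doubling_period_years: float = 2.0) -> float:
    """Projected value after doubling every `doubling_period_years`."""
    return start * 2 ** (years_elapsed / doubling_period_years)

# Fifty years of doubling every two years: 25 doublings.
growth_factor = moores_law_growth(1.0, 50)
print(int(growth_factor))  # 33554432, i.e. about 33 million-fold
```

The exponent, not the starting point, does nearly all the work here, which is why even crude estimates of 1960s versus 2010s hardware land many orders of magnitude apart.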
What makes a Seattle mother spend her days trying to chip away at Bible belief rather than digging holes in the garden?
When my husband sent me the Pew Report news that the percentage of Americans who call themselves Christian has dropped from 78.4 to 70.6 over the last seven years, I responded jokingly with six words: You’re welcome. Molly Moon’s after dinner?
Not that I actually claim credit for the decline. As they say, it takes a village.
The civilized world has an ever-intensifying relationship to automated computer technology. It is involved in nearly everything we do, every day, from the time we wake to the time we go to sleep. Why, then, does so much of our entertainment reflect a deep-set fear of technology and its potential for failure?
In response to pressure from the advocacy group As You Sow, Dunkin’ Brands has announced that it will be removing allegedly “nano” titanium dioxide from Dunkin’ Donuts’ powdered sugar donuts. As You Sow claims there are safety concerns around the use of the material, while Dunkin’ Brands cites concerns over investor confidence. It’s a move that further confirms the food sector’s conservatism over adopting new technologies in the face of public uncertainty. But how justified is it based on what we know about the safety of nanoparticles?
From Our Final Hour: A Scientist’s Warning by Martin Rees, Royal Society Professor at Cambridge and Britain’s Astronomer Royal. “Twenty-first century science may alter human beings themselves - not just how they live.” (9) Rees accepts the common wisdom that the next hundred years will see changes that dwarf those of the past thousand years, but he is skeptical about specific predictions.