We started to discuss Stevenson’s probe, a hypothetical vehicle that could reach Earth’s core by melting its way through the mantle, carrying scientific instruments with it. It would take the form of a large drop of molten iron, at least 60,000 tons, theoretically feasible but practically impossible.
Since their inception 60 years ago, satellites have gone on to become an indispensable component of our modern high-tech civilization. But because they’re reliable and practically invisible, we take their existence for granted. Here’s what would happen if all our satellites suddenly just disappeared.
The idea that all the satellites, or at least a good portion of them, could be rendered inoperable is not as outlandish as it might first seem. There are at least three plausible scenarios in which this could happen.
On Friday, March 6, 2015, more than 3,000 people attended the ASU Emerge event. This is where Eric Kingsbury, a futurist, founder of KITEBA, and cofounder of the Confluence Project, launched “You Have Been Inventoried”. I helped with some of the content for the project, along with others from the Confluence Project.
Everywhere you look in the world you can see pessimism, gloom, doom and negativity. No matter where you live, it seems many are convinced that there’s just no hope. Many people have stopped trying to do anything, while they “wait for god” or “wait for the Singularity.” Or simply wait, period.
The negativity is everywhere.
So, here’s one of my rants, against that negativity.
There has of late been a great deal of ink devoted to concerns about artificial intelligence, and a future world where machines can “think,” where the latter term ranges from simple autonomous decision-making to full-fledged self-awareness. I don’t share most of these concerns, and I am personally quite excited by the possibility of experiencing thinking machines, both for the opportunities they will provide for potentially improving the human condition and for the insights they will undoubtedly provide into the nature of consciousness.
This article examines the risks posed by “unknown unknowns,” which I call monsters. It then introduces a taxonomy of the unknowable, and argues that one category of this taxonomy in particular should lead us to inflate our prior probability estimates of annihilation, whatever they happen to be. The lesson here is ultimately the same as the Doomsday Argument, except the reasoning is far more robust.
If asked to rank humanity’s problems by severity, I would give the silver medal to the need to spend so much time doing things that give us no fulfillment—work, in a word. I consider that the ultimate goal of artificial intelligence is to hand off this burden to robots that have enough common sense to perform those tasks with minimal supervision.
But some AI researchers have altogether loftier aspirations for future machines: they foresee computer functionality that vastly exceeds our own in every sphere of cognition. Such machines would not only do things that people prefer not to; they would also discover how to do things that no one can yet do. This process can, in principle, iterate—the more such machines can do, the more they can discover.
What’s not to like about that? Why do I NOT view it as a better research goal than machines with common sense (which I’ll call “minions”)?
Technological change is accelerating and transforming our world. Assuming current trends persist, we will soon experience an evolutionary shift in the mechanisms of reputation, one of the foundations on which relationships are based. The cascading effects of this shift will revolutionize the way we relate to each other and to our machines, incentivizing unprecedented degrees of global cooperation.
In 2015, your smartphone probably has more computing power than the Apollo Guidance Computer, and yet Moore’s Law continues unabated at its fiftieth anniversary. Machines are becoming faster, smaller, and smarter.
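The scale of that fifty-year run is easy to underestimate. As a rough sketch of the exponential claim, here is the classic Moore’s Law projection (transistor counts doubling roughly every two years); the Intel 4004 starting figure is a commonly cited reference point, used here purely as an illustrative assumption, not a precise spec.

```python
def moores_law_growth(start_count, years, doubling_period=2.0):
    """Project a transistor count forward, doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# Intel 4004 (1971): roughly 2,300 transistors -- an illustrative baseline.
projected_2015 = moores_law_growth(2_300, years=2015 - 1971)
print(f"Projected 2015 transistor count: {projected_2015:,.0f}")
# Roughly 9.6 billion -- the same order of magnitude as flagship 2015 chips.
```

The point is not the exact figure but the shape of the curve: 44 years at a two-year doubling period means 22 doublings, a factor of about four million.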
What makes a Seattle mother spend her days trying to chip away at Bible belief rather than digging holes in the garden?
When my husband sent me the Pew Report news that the percentage of Americans who call themselves Christian has dropped from 78.4 to 70.6 over the last seven years, I responded jokingly with six words: You’re welcome. Molly Moon’s after dinner?
Not that I actually claim credit for the decline. As they say, it takes a village.
The civilized world has an ever-intensifying relationship to automated computer technology. It is involved in nearly everything we do, every day, from the time we wake to the time we go to sleep. Why, then, does so much of our entertainment reflect a deep-set fear of technology and its potential for failure?
In response to pressure from the advocacy group As You Sow, Dunkin’ Brands has announced that it will be removing allegedly “nano” titanium dioxide from Dunkin’ Donuts’ powdered sugar donuts. As You Sow claims there are safety concerns around the use of the material, while Dunkin’ Brands cites concerns over investor confidence. It’s a move that further confirms the food sector’s conservatism over adopting new technologies in the face of public uncertainty. But how justified is it based on what we know about the safety of nanoparticles?
From Our Final Hour: A Scientist’s Warning by Martin Rees, Royal Society Professor at Cambridge and the UK’s Astronomer Royal. “Twenty-first century science may alter human beings themselves - not just how they live.” (9) Rees accepts the common wisdom that the next hundred years will see changes that dwarf those of the past thousand years, but he is skeptical about specific predictions.
In a previous article, I critiqued the two primary definitions of “existential risk” found in the literature, and then hinted at a new definition to replace them. Part of my critique centered on how the relevant group affected by an existential catastrophe is demarcated, e.g., as “our entire species,” “Earth-originating intelligent life,” or “either our current population or some future population of descendants that we value.” (I prefer the latter because it solves the problems of “good” and “bad” extinction that the first two encounter.) I want to put aside the issue of demarcation in this article and focus exclusively on the nature of existential risks themselves (that is, independent of who exactly they impact).
What is an existential risk? The general concept has been around for decades, but the term was coined by Nick Bostrom in his seminal 2002 paper. Like so many empirical concepts (from organism to gene to law of nature, all of which are still debated by philosophically-minded scientists and scientifically-minded philosophers), the notion of an existential risk turns out to be more difficult to define than one might at first think.
The challenges of governing emerging technologies are highlighted by the World Economic Forum in the 2015 edition of its Global Risks Report. Focusing in particular on synthetic biology, gene drives and artificial intelligence, the report warns that these and other emerging technologies present hard-to-foresee risks, and that oversight mechanisms need to more effectively balance likely benefits and commercial demands with a deeper consideration of ethical questions and medium to long-term risks.
Getting out of Earth’s gravity well is hard. Conventional rockets are expensive, wasteful, and as we’re frequently reminded, very dangerous. Thankfully, there are alternative ways of getting ourselves and all our stuff off this rock. Here’s how we’ll get from Earth to space in the future.
The chances are that, if you follow news articles about cancer, you’ll have come across headlines like “Most Cancers Caused By Bad Luck” (The Daily Beast) or “Two-thirds of cancers are due to ‘bad luck,’ study finds” (CBS News). The story – based on research out of Johns Hopkins University – has grabbed widespread media attention. But it’s also raised the ire of science communicators who think that the headlines and stories are, in the words of a couple of writers, “just bollocks”.
Looking back on my early experience as a young engineer, I am reminded how little my colleagues and I appreciated that what we did would change the world, for good and for bad. I am also reminded how Marcel Golay, one of my early mentors, understood the duality of technology and how this feature looms large in its application for the right purpose.
Communication is the basic principle of social interaction. We know that microbes use a method of communication called quorum sensing [1], cetaceans have their whale song [2], plants use airborne chemical communication, and fungi transfer signals via their roots [3]. Let us take a moment to think about how machines communicate with each other.
The question that motivates this essay is “Can we build a benevolent AI, and how do we get around the problem that humans, bless their cotton socks, can’t define ‘benevolence’?” A lot of people want to emphasize just how many different definitions of “benevolence” there are in the world — the point, of course, being that humans are very far from agreeing on a universal definition of benevolence, so how can we expect to program something we cannot define into an AI?
It is a risky business trying to predict the future. And although it makes some sense to try to get a handle on what the world might be like in one’s lifetime, one might wonder what the point is of all this prophecy that stretches beyond the decades one is expected to live. The answer, I think, is that no one who engages in futurism is really trying to predict the future so much as to shape it, or at the very least to inspire Noah-like preparations for disaster.
As we head deeper into the 21st century, we’re starting to catch a glimpse of the fantastic technological possibilities that await. But we’re also starting to get a grim sense of the potential horrors. Here are 10 frightening technologies that should never, ever, come into existence.
Materials and how we use them are inextricably linked to the development of human society. Yet amazing as historic achievements using stone, wood, metals and other substances seem, these are unbelievably crude compared to the full potential of what could be achieved with designer materials.
Within the next few years, autonomous vehicles, also known as robot cars, could be weaponized, the US Federal Bureau of Investigation (FBI) fears. In a recently disclosed report, FBI experts wrote that they believe robot cars would be “game changing” for law enforcement. The self-driving machines could serve as professional getaway drivers, to name one possibility. Given the pace of development of autonomous cars, this doesn’t seem implausible.