In this entry, I take a look at Bostrom’s orthogonality thesis. As we shall see, this thesis is central to his claim that superintelligent AIs could pose profound existential risks to human beings. But what does the thesis mean and how plausible is it?
My goal in this article is to demolish the AI doomsday scenarios that are being heavily publicized by the Machine Intelligence Research Institute, the Future of Humanity Institute, and others, and which have now found their way into the farthest corners of the popular press. These doomsday scenarios are logically incoherent at such a fundamental level that they can be dismissed as extremely implausible: they require the AI to be so unstable that it could never reach the level of intelligence at which it would become dangerous. On a more constructive and optimistic note, I will argue that even if someone did try to build the kind of unstable AI system that might lead to one of the doomsday behaviors, the system itself would immediately detect the offending logical contradiction in its design and spontaneously self-modify to make itself safe.
Over the past few days, the interweb’s been awash with virtual “oohs” and “ahs” over Surrey NanoSystems’ carbon nanotube-based Vantablack coating. The material – which absorbs over 99.9% of light falling onto it and is claimed to be the world’s darkest material – is made up of a densely packed “forest” of vertically aligned carbon nanotubes (see the image below). In fact, the name “Vanta” stands for Vertically Aligned NanoTube Array.
Geoengineering has come under attack recently from conspiracy theorists to scientists to “greens.” There have been many kinds of proposals for geoengineering, including a legally dubious experiment that poured 200,000 pounds of iron sulfate into the North Pacific, which was supposed to increase plankton that would absorb carbon dioxide. The experiment did not work and pissed off a lot of scientists. China also recently stopped its “flattening of mountains.” This article, then, is not purely about techniques for combating global warming, but about the need for people to understand that geoengineering is a must: not only a must, but also a “human right.”
It could be difficult for human civilization to survive a global catastrophe like rapid climate change, nuclear war, or a pandemic disease outbreak. But imagine if two catastrophes strike at the same time. The damage could be even worse. Unfortunately, most research only looks at one catastrophe at a time, so we have little understanding of how they interact.
Over the spring the Foundational Questions Institute (FQXi) sponsored an essay contest whose topic should be dear to this audience’s heart: How Should Humanity Steer the Future? I thought I’d share some of the essays I found most interesting, but there are lots, lots more to check out if you’re into thinking about the future or physics, which I am guessing you might be.
Imagine if you could take an exotic vacation billions of light years from Earth, peek in on the dinosaurs first-hand, or jump into a parallel universe where another you is living a more exciting life than yours, and you could swap places if you liked.
Somewhere around a dozen years ago, I was sitting in a bar in Eastern Washington. It could have been Lake Chelan or Yakima. I really don’t remember. But I do remember meeting two cowboys. Real cowboys (we still have them in the west). They weren’t talking about herds of cows over their beers. They were talking about fires.
Communication is the basic principle of social interaction. We know that microbes use a method of communication called quorum sensing [1], cetaceans have their whale song [2], and plants have airborne chemical communication and fungal signal transfer via their roots [3]. Let us take a moment to think about how machines communicate with each other.
For anyone thinking about the future relationship between nature, man, and machines, I’d like to make the case for including an insightful piece of fiction in the canon. All of us have heard of H.G. Wells, Isaac Asimov, or Arthur C. Clarke. And many, though perhaps fewer, of us have likely heard of fiction authors from the other side of the nature/technology fence, writers like Mary Shelley, Ursula Le Guin, or, nowadays, Paolo Bacigalupi. But almost none of us have heard of Samuel Butler, or, better, read his most famous novel Erewhon (pronounced with three short syllables: E-re-whon).
When I was around nine years old I got a robot for Christmas. I still remember calling my best friend Eric to let him know I’d hit pay dirt. My “Verbot” was to be my own personal R2-D2. As was clear from the picture on the box, which I again remember as clearly as if it were yesterday, Verbot would bring me drinks and snacks from the kitchen on command: no more pestering my sisters, who responded with their damned claims of autonomy! Verbot would learn to recognize my voice and might help me with the math homework I hated.
I am sure you have heard it constantly: “Google is (insert fear term here).” They want to take over the internet, they are building Skynet, they are invading our privacy, they are trying to become Big Brother, etc., etc., ad nauseam. Be it Glass, the acquisition of numerous robotics firms, or even the hiring of Ray Kurzweil, Google has recently been in the news a lot, usually as the big bad boogeyman of whatever news story you are reading.
A massive leap forward in synthetic life, recently published in Nature, is the expansion of the DNA alphabet from four letters to six, achieved by synthetic biologists: the technicians to whom we entrust the great task of reprogramming life itself.
At the White House Correspondents’ Dinner, the annual opportunity for the President to engage directly, and humorously, with the reporters who cover him, I expect most of the gibes to be aimed at the President. Sure, he gets the chance to defend himself, but it’s pretty much a roast: a leading comedian is invited every year to make jokes while the Commander in Chief tries to laugh instead of squirm.
This is the third part of my series on Nicholas Agar’s book Truly Human Enhancement. As mentioned previously, Agar stakes out an interesting middle ground on the topic of enhancement. He argues that modest forms of enhancement — i.e. up to or slightly beyond the current range of human norms — are prudentially wise, whereas radical forms of enhancement — i.e. well beyond the current range of human norms — are not. His main support for this is his belief that in radically enhancing ourselves we will lose certain internal goods. These are goods that are intrinsic to some of our current activities.
It’s been more than 70 years since Isaac Asimov devised his famous Three Laws of Robotics — a set of rules designed to ensure friendly robot behavior. Though intended as a literary device, these laws are heralded by some as a ready-made prescription for avoiding the robopocalypse. We spoke to the experts to find out whether Asimov’s safeguards have stood the test of time — and they haven’t.
The first time I encountered the claim that an anarchistic society would impede scientific progress, I was too shocked — and later too busy chortling — to sketch out a thorough response. It’s a surprising sentiment to me for a lot of reasons, not least the well-known correspondence between scientific progress and social and material freedom in mass societies.
Are we headed for a Singularity? Is it imminent? I write relatively near-future science fiction that features neural implants, brain-to-brain communication, and uploaded brains. I also teach at a place called Singularity University. So people naturally assume that I believe in the notion of a Singularity and that one is on the horizon, perhaps in my lifetime.
Yes. Yes we can. The last year has brought with it revelations of massive government-run domestic spying machinery in the US and UK. On the horizon is more technology that will make it even easier for governments to monitor and track everything that citizens do. Yet I'm convinced that, if we're sufficiently motivated and sufficiently clever, the future can be one of more freedom rather than less.
The science-fiction writer Isaac Asimov famously proposed three laws for all robots to follow: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given to it by a human being except where such orders would conflict with the first rule; (3) a robot must protect its own existence as long as such protection does not conflict with the first two rules. These “laws,” though they sound just and logical, are utterly impossible to implement if autonomous robots are to be intelligent and able to reprogram themselves.
I just attended the NASA Innovative Advanced Concepts (NIAC) symposium at Stanford — (I am on NIAC's Council of External Advisors) — watching, appraising, and questioning terrific presentations about future-potential "game-changing" space technologies. Over four days, the recipients of NIAC seed grants showed us how NASA's small but strategic investments in exceptional… even risky… technologies might prove valuable — even vital — if given a chance.
A senior American spy chief has released his assessment of the most troubling threats facing the US — a list that includes terrorism, hackers, WMD proliferation, pandemics, extreme weather events — and the militarization of space.
The technology world was abuzz last week when Google announced it spent nearly half a billion dollars to acquire DeepMind, a UK-based artificial intelligence (AI) lab. With few details available, commentators speculated on the underlying motivation.
For people in cold climes, winter, with its short days and hibernation-inducing frigidity, is a season to let one’s pessimistic imagination roam. It may be overly deterministic, but I often wonder about those who live in climates that do not vary with the seasons, where it is almost always warm and sunny, or always cold and grim. Do they experience the full spectrum of human sentiments less often over the course of a year, and end up either too utopian for reality to justify, or too dystopian for those of us lucky enough to be here, with a world to complain about in the first place?