Within the next few years, autonomous vehicles—a.k.a. robot cars—could be weaponized, the US Federal Bureau of Investigation (FBI) fears. In a recently disclosed report, FBI experts wrote that they believe robot cars would be “game changing” for law enforcement. The self-driving machines could serve as professional getaway drivers, to name one possibility. Given the pace of development on autonomous cars, this doesn’t seem implausible.
The resilience of our entire civilization increasingly depends on a fragile network of cell phone towers, which are among the first things to fail in any crisis—whether a hurricane or other natural disaster, or deliberate sabotage (e.g., an EMP or a hacking attack).
My goal in this article is to demolish the AI Doomsday scenarios that are being heavily publicized by the Machine Intelligence Research Institute, the Future of Humanity Institute, and others, and which have now found their way into the farthest corners of the popular press. These doomsday scenarios are logically incoherent at such a fundamental level that they can be dismissed as extremely implausible: they require the AI to be so unstable that it could never reach the level of intelligence at which it would become dangerous. On a more constructive and optimistic note, I will argue that even if someone did try to build the kind of unstable AI system that might lead to one of the doomsday behaviors, the system itself would immediately detect the offending logical contradiction in its design, and spontaneously self-modify to make itself safe.
If you live just long enough, one of the lessons history teaches you is that dreams take a long time to die. Depending on how you date it, communism took anywhere from 74 to 143 years to pass into the dustbin of history, though some might say it is still kicking. The Ptolemaic model of the universe lasted from 100 AD into the 1600s. Perhaps more dreams than not simply refuse to die; they hang on like ghosts, ghouls, zombies, vampires, or whatever freakish version of the undead suits your fancy. Naming them all would take up more room than I can post, and would no doubt start one too many arguments, all of our lists being different. Here, I just want to make an argument for the inclusion of one dream on our list of zombies, knowing full well that the dream I’ll declare dead will have its defenders.
In the last post we observed the dynamics of the collective in terms of a small tribe, and indicated that at this size, things worked pretty well. That is not to say that error modes were impossible, but that when they arose, there were mechanisms in place to deal with them. Essentially, at this scale, individuals had no ability to veil their actions behind a wall of secrecy. While it was certainly possible for an individual to lie, cheat, steal, and deceive, such actions could be carried out only to a limited extent, and carried repercussions that were deleterious to that individual’s long-term well-being.
Intrigued by IEET Fellow Patrick Lin’s essay “The Ethics of Autonomous Cars,” we asked: “Should your robot car sacrifice your life if it will save more lives?” Of the 196 of you who responded, a third said no, a third said yes, and a third said it should be the driver’s option.
Tyler Cowen points to this great Marc Andreessen interview in the Washington Post that features him saying the following about net neutrality: “So, I think the net neutrality issue is very difficult. I think it’s a lose-lose. It’s a good idea in theory because it basically appeals to this very powerful idea of permissionless innovation. But at the same time, I think that a pure net neutrality view is difficult to sustain if you also want to have continued investment in broadband networks.”
Most of us in the West were raised on legends, myths, and movies that taught Suspicion of Authority (SoA). Thanks to the great dystopian author George Orwell, we share a compelling metaphor—Big Brother—propelling our fears about a future that may be dominated by tyrants.
For anyone thinking about the future relationship between nature, man, and machines, I’d like to make the case for adding an insightful piece of fiction to the canon. All of us have heard of H.G. Wells, Isaac Asimov, or Arthur C. Clarke. And many, though perhaps fewer, of us have likely heard of fiction authors from the other side of the nature/technology fence—writers like Mary Shelley, Ursula Le Guin, or, nowadays, Paolo Bacigalupi. But almost none of us have heard of Samuel Butler, or, better, read his most famous novel Erewhon (pronounced with three short syllables: E-re-whon).
IEET Fellow Kevin Lagrandeur’s book Androids and Intelligent Networks in Early Modern Literature and Culture: Artificial Slaves has been awarded an Honorable Mention by the Science Fiction and Technoculture Studies.
I am sure you have heard it constantly: "Google is (insert fear term here)." They want to take over the internet, they are building Skynet, they are invading our privacy, they are trying to become Big Brother, etc., etc., ad nauseam. Whether it's Glass, the recent acquisition of numerous robotics firms, or even the hiring of Ray Kurzweil, Google has been in the news a lot lately, usually as the big bad boogeyman of whatever news story you are reading.
I’ve looked at data-mining and predictive analytics before on this blog. As you know, there are many concerns about this type of technology and the increasing role it plays in our lives. For example, people are concerned about the oftentimes hidden way in which our data is collected prior to being “mined”. And they are concerned about how it is used by governments and corporations to guide their decision-making processes. Will we be unfairly targeted by the data-mining algorithms? Will they exercise too much control over socially important decision-making processes? I’ve reviewed some of these concerns before.
We asked whether “artificial general intelligence with self-awareness” or “uploaded personalities or emulations of human brains” were more of a threat to human beings. Almost three times as many of you thought AGI was more of a threat than uploaded personalities, and overall 62% of the 245 respondents thought one or the other or both were a threat.
Data-mining algorithms are increasingly being used to monitor and enforce governmental policies. For example, they are being used to shortlist people for tax auditing by the revenue services in several countries. They are also used by businesses to identify and target potential customers.
What is a digital trail? How can all your blog posts, photos, opinions, articles, and news affect your personal, professional, and academic life? What is happening to the internet, and how is it affecting people in the real world? Kelly Hills tells us her own personal story and how life online is a bit more complicated than you might expect.
The problem I see with Nicolelis’ view of the future of neuroscience, which I discussed last time, is not that I find it unlikely that many of his optimistic predictions will someday come to pass; it is that he spends no time at all discussing the darker potential of such technology.
Someone interviewing me for a magazine asked me what current technology tomorrow’s children would find obsolete. I almost answered “The Internet.” Then I decided to think about that answer a little bit, because it’s pretty scary. Then I decided it’s true. Soon, humans may find today’s wide-open Internet as archaic as we now find phones that are wired to walls. Here’s why. There are three huge pressures on the Internet as we know it today—the one where I can write this essay, post it on my website, and you can find it and read it. Whoever you are.
This is the second part in a short series of posts on predictive algorithms and the virtues of transparency. The series is working off some ideas in Tal Zarsky’s article “Transparent Predictions”. The series is written against the backdrop of the increasingly widespread use of data-mining and predictive algorithms and the concerns this has raised.
It would be nice to believe that the road to civility could be paved by following simple formulae, like Frank Bruni’s New Year’s exhortation, “Tweet less, read more”. Unfortunately, uncomplicated Op-Ed advice doesn’t translate into effective results in the messy real world.
The current level of general surveillance in society is incompatible with human rights. To recover our freedom and restore democracy, we must reduce surveillance to the point where it is possible for whistleblowers of all kinds to talk with journalists without being spotted. To do this reliably, we must reduce the surveillance capacity of the systems we use.
You've probably heard of a concept known as the Technological Singularity — a nebulous event that's supposed to happen in the not-too-distant future. Much of the uncertainty surrounding this possibility, however, has led to wild speculation, confusion, and outright denial. Here are the worst myths you've been told about the Singularity.
There’s a new “viral” video making the rounds. It’s a 15-minute pro-gay-marriage film that interviews children about the concepts of prejudice, fairness, and gay marriage. All the children in the video except one seem to think that basic principles of fairness should apply to men marrying men and women marrying women. However, throughout the video, one kid insists gay marriage “is just wrong.” When pressed for why this is so, the boy (who appears to be five or six years old) can provide no reason for his assertion.
As we learn more and more details regarding government spying, it seems more and more foolhardy to trust our security to third-party businesses. The state requires information on its subjects to be effective. From the first census in Egypt more than 5,000 years ago, states have sought personal information on their citizens—especially tyrannical states, where informants and secret police gather information on any and all potentially subversive activities.
Big data generates big myths. To help society set realistic expectations, the right kind of skepticism is needed. Kate Crawford, Principal Researcher at Microsoft Research and Visiting Professor at MIT’s Center for Civic Media, does a fantastic job of explaining why folks are too optimistic about the promise of what big data can offer. She rightly argues that too much faith in it inclines us to misunderstand what data reflects, overestimate the political efficacy of information, and become insensitive to civil rights concerns.