What, Me Worry? - I Don’t Share Most Concerns About Artificial Intelligence
Lawrence Krauss
2015-05-28

First, let's make one thing clear. Even with the exponential growth in computer storage and processing power over the past 40 years, thinking computers will require a digital architecture that bears little resemblance to current computers, and they are unlikely to become competitive with human consciousness in the near term. A simple physics thought experiment supports this claim:

Given current power consumption by electronic computers, a computer with the storage and processing capability of the human mind would require in excess of 10 terawatts of power, within a factor of two of the current power consumption of all of humanity. However, the human brain uses about 10 watts of power. This means a mismatch of a factor of 10^12, or a million million. Over the past decade the doubling time for megaflops per watt has been about 3 years. Even assuming Moore's Law continues unabated, this means it will take about 40 doubling times (since 2^40 ≈ 10^12), or about 120 years, to reach a comparable power dissipation. Moreover, each doubling in efficiency requires a relatively radical change in technology, and it is extremely unlikely that 40 such doublings could be achieved without essentially changing the way computers compute.
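To make the arithmetic explicit, here is a minimal sketch in Python of the back-of-the-envelope estimate above (the 10-terawatt and 10-watt figures and the 3-year doubling time are the essay's rough numbers, not measured constants):

```python
import math

# Rough order-of-magnitude figures from the essay (not precise measurements).
machine_power_watts = 10e12  # ~10 terawatts for a brain-equivalent computer
brain_power_watts = 10.0     # ~10 watts for the human brain

gap = machine_power_watts / brain_power_watts  # efficiency mismatch, ~1e12

doubling_time_years = 3.0                      # assumed doubling time for megaflops/watt
doublings = math.log2(gap)                     # ~40 doublings needed to close the gap
years = doublings * doubling_time_years        # ~120 years at the current pace

print(f"efficiency gap: {gap:.0e}")            # 1e+12
print(f"doublings needed: {doublings:.0f}")    # 40
print(f"years required: {years:.0f}")          # 120
```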



Setting aside for a moment these logistical challenges, I see no impediment in principle to developing a truly self-aware machine. Long before that happens, machine decision-making will take an ever more important role in our lives. Some people see this as a concern, but it has already been happening for decades. Starting perhaps with the rudimentary computers called elevators, which determine how and when we get to our apartments, we have allowed machines to guide us autonomously. We fly each week on airplanes guided by autopilot, our cars decide when they should be serviced or when their tires need air, and fully self-driving cars are probably around the corner.

For many, if not most, relatively automatic tasks, machines are clearly much better decision-makers than humans, and we should rejoice that they have the potential to make everyday activities safer and more efficient. In doing so we have not lost control, because we create the conditions and initial algorithms that govern the decision-making. I envisage the human-computer interface as a partnership: the more intelligent machines become, the more helpful partners they can be.

Any partnership requires some level of trust and loss of control. If the benefits outweigh the losses, we preserve the partnership; if they don't, we sever it. I see no difference when the partner is a machine rather than a human.

One area where we may need to be particularly cautious about such partnerships is the command-and-control infrastructure of modern warfare. Because we have the capability to destroy much of human life on this planet, it is worrisome to imagine intelligent machines one day controlling the decision-making apparatus that leads to pushing the big red button, or even to launching a less catastrophic attack. I think this worry arises because, when it comes to such decisions, we rely on intuition and interpersonal communication as much as rational analysis (the Cuban missile crisis is a good example), and we assume intelligent machines will not have these capabilities.



However, intuition is the product of experience, and communication in the modern world is not restricted to telephones or face-to-face conversations. Once again, the intelligent design of systems with numerous redundancies and safeguards built in suggests to me that machine decision-making, even in the case of violent hostilities, is not necessarily worse than decision-making by humans.

So much for possible worries. Let me end with what I think is the most exciting scientific aspect of machine intelligence. Machines already help us do most of our science by calculating for us. Beyond simple numerical programming, most graduate students in physics now depend on Mathematica, which does most of the symbolic algebraic manipulation we used to do ourselves when I was a student. But this just scratches the surface.
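As a concrete taste of the kind of symbolic assistance described here (the essay names Mathematica, which is proprietary; this sketch uses Python's sympy library as a rough free analogue, purely for illustration):

```python
import sympy as sp

x = sp.symbols('x')

# A definite integral that would be tedious by hand; evaluates to Gamma(3) = 2.
print(sp.integrate(x**2 * sp.exp(-x), (x, 0, sp.oo)))  # 2

# A series expansion of the kind physicists do constantly in perturbation theory.
print(sp.series(1 / (1 - x), x, 0, 5))  # 1 + x + x**2 + x**3 + x**4 + O(x**5)
```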

I am interested in what machines will focus on when they get to choose the questions as well as the answers. What questions will they choose? What will they find interesting? And will they do physics the same way we do? Surely quantum computers, if they ever become practical, will have a much better "intuitive" understanding of quantum phenomena than we will. Will they be able to make much faster progress unravelling the fundamental laws of nature? When will the first machine win a Nobel Prize? I suspect, as always, that the most interesting questions are the ones we haven't yet thought of.