Is It Time to Give Up on the Singularity?
George Dvorsky
2014-06-11 00:00:00



A recent article by Erik Sofge in Popular Science really got my hackles up. Sofge argued that the Singularity is nothing more than a science fiction-infused faith-based initiative — a claim I take great exception to given (1) the predictive power and importance of speculative fiction, and (2) the very real possibility of our technologies escaping the confines of our comprehension and control. In his article, Sofge described futurist Ramez Naam as a "Singularity skeptic," which prompted me to contact him and have a debate. Here's how our conversation unfolded.



George: You were recently described by Sofge as a "Singularity skeptic," which for me came as a bit of a surprise given your amazing track record as a futurist and scifi novelist. You've speculated about such things as interconnected hive-minds and the uploading of human consciousness to a computer — but you draw the line, it would seem, at super artificial intelligence (SAI). Now, I'm absolutely convinced that we'll eventually develop a machine superintelligence with capacities that exceed our own by an order of magnitude — leading to the Technological Singularity, or so-called Intelligence Explosion (my preferred term). But if I understand your objections correctly, you're suggesting that the pending preponderance of highly specialized AI will never amount to a superintelligence — and that our AI track record to date proves this. I think it's important that you clarify and elaborate upon this, not least because you're denying something that many well-respected thinkers and AI theorists describe as an existential risk. I'm also hoping you can provide your own definition of the Singularity, just to ensure that we're talking about the same thing.



Mez: Hey, George. Great to be in dialogue. To be clear, I 100% believe that it's possible, in principle, to create smarter-than-human machine intelligence, either by designing AIs or by uploading human minds to computers. And you're right, I do talk about some of this in Nexus and my other novels. That said, I think it's tremendously harder and further away than the most enthusiastic proponents currently believe. I talked about this at Charlie Stross's blog, in a piece called "The Singularity is Further Than it Appears." Most work in AI has nothing to do with building minds capable of general reasoning. And uploading human brains still has huge unknowns.



My other issue is with the word 'Singularity'. You asked me to define it. Well, a 'Singularity' in mathematics is a divide-by-zero moment, when the value goes from some finite number to infinity in an eye blink. In physics, it's a breakdown in our mathematical models at a black hole. Smarter-than-human AI would be very cool. It would change our world a lot. I don't think it deserves a word anywhere near as grandiose as 'Singularity'. It wouldn't be a divide-by-zero. The graph wouldn't suddenly go to infinity. Being twice as smart as a human doesn't suddenly mean you can make yourself infinitely smart.



George: Okay, your stance on this is now considerably clearer to me, though I still have some concerns. In the context of a Singularity or the advent of a greater-than-human intelligence, we're not really discussing human-like general reasoning, nor are we talking about artificial sentience. These two concepts often get lumped into the debate, for reasons that aren't completely evident to me. It will not take a super AGI to rework the fabric of the human condition, to convert the planet into a means for its own ends, or to make a colossal mistake. In fact, the real threat (or promise, depending on your Singularitarian persuasion) from an SAI comes from the very thing you believe is possible: narrow — but powerful — AI. We won't need to reverse-engineer the human brain to get there, so it's not as difficult or as far away in the future as you may believe.








For example, imagine a world decades from now in which there's an exploding digital ecology of expert systems (or centuries from now — whatever — timelines don't really matter here, and it's not productive to get bogged down in that discussion; the concern is technological feasibility). Each AI will pursue its goal in the way intended, or in unexpected ways if it's poorly programmed. In some cases, that goal could be its own self-improvement or the creation of a new AI architecture superior to itself. At the same time, it could be self-replicating and competing for resources, all the while having to deal with other intelligent systems with potentially disparate goals. This could be catastrophic for humans and other AIs, particularly if self-preservation is part of its programming.



As for AIs suddenly going from zero to infinity, that's not really the issue. No one is claiming that. Rather, the thinking is that they'll go from zero to a value that's pretty damned high in fairly short order — a so-called fast take-off event. And again, we're not talking about something being "twice as smart as a human" or even a hundred times as smart as a human. The issue is an exploding population of super-expert systems (or even super-algorithms) that have acquired a set of skills exceeding human capacities over a broad range of domains, all while operating at speeds that defy human comprehension and at scales beyond our ability to predict and control.



Mez: Let me take this from the bottom of your comment to the top.



First, if we're not talking about things going to infinity, we should stop using the word 'Singularity'! It means something — it means a divide-by-zero! It means a place where our mathematics and physics break down! It doesn't just mean super-smart or super-dangerous algorithms, or even a very weird future.





When we use an immensely powerful term like 'Singularity', we cause a couple of problems. The first is already a bit evident in our conversation — different people use the same word and mean very different things by it. So there's a lot of confusion about what anyone is even talking about. The second problem is that the whole thing has an aura of the quasi-religious or eschatological about it. The 'Singularity' gets waved around as the answer to things, good or bad. Worried about your health? The Singularity will take care of it. Climate change? The Singularity will deal with that. Poverty? Don't worry, there's a Singularity coming.



Oh, come on!



Now, I'm not ascribing any of those beliefs to you, but when we use this very vague, very loaded term associated with a breakdown in our models, a term that lots of people have associated lots of different things with, then people start to do some very sloppy and even magical reasoning. I don't think that's super helpful. So maybe we should use more specific words to say what we mean.



Okay — on to more substance. Fast take-offs? Usually people mean that in cases where AIs or algorithms are improving themselves. But in general, the math is against them. We're making exponential progress in computing power, but most interesting algorithmic problems eat up those gains: for the whole class of NP-hard problems, exponentially more computing power gets you something like linear progress. By analogy, in the real world we see signs that important research areas like pharmaceutical drug development are seeing exponentially slowing progress, an "Eroom's Law" (the opposite of Moore's Law).
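To make that scaling argument concrete, here's a minimal sketch (my illustration, with hypothetical numbers, not anything from the conversation): if an algorithm costs roughly 2^n operations, as brute-force search on many NP-hard problems does, then a compute budget that doubles every 18 months only lets you solve a problem one unit larger per doubling.

```python
# Minimal sketch (hypothetical numbers): exponential growth in compute buys only
# linear growth in solvable problem size when the algorithm costs ~2**n operations.
import math

def max_solvable_size(ops_budget: float) -> int:
    """Largest n for which a 2**n-operation brute-force search fits in the budget."""
    return int(math.log2(ops_budget))

START_BUDGET = 1e12          # assumed starting budget: one trillion operations
DOUBLING_PERIOD_YEARS = 1.5  # Moore's-law-style assumption about hardware growth

for year in range(0, 21, 5):
    budget = START_BUDGET * 2 ** (year / DOUBLING_PERIOD_YEARS)
    print(f"year {year:2d}: {budget:9.2e} ops -> max problem size n = {max_solvable_size(budget)}")
```

Under those assumptions, compute grows by about four orders of magnitude over twenty years while the solvable problem size creeps from n = 39 to n = 53 — exponential in, roughly linear out.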



Finally, I do think we have to worry about algorithms and technology running out of control. But that worry exists already today. We have to deal with spam and botnets and malware and high-frequency trading. We already have these semi-autonomous, super-human intelligences called corporations that are working towards their own immortality, growth, and amplification of their abilities. There's plenty to worry about. We live in that future world already. But we also have — or can develop — tools to deal with those things. We build firewalls and filters and antivirus and so on. I get less spam today than I did ten years ago. The corporations don't seem to have gone through a fast take-off, and in fact, they keep giving me nifty new capabilities I like. And quite a few of the forecasts of algorithms running roughshod over our world are just wrong. For instance, high-frequency trading seems to be dying.



George: You are absolutely right when you say that the term "Singularity" has lost all meaning — not that it had a concise definition to begin with. My own understanding of the term was that it was borrowed from cosmology to describe an "event horizon" — the boundary around a black hole beyond which we cannot see, because light can no longer escape. But in this context, it's a social event horizon — a blind spot in our predictive thinking caused by our inability to grasp the potentially sweeping implications of greater-than-human machine intelligence. So long as we stick to that (constrained) definition, I think we're okay.





But as you pointed out, the word has acquired so many definitions and associated baggage that it has been linked to pseudo-futurism — which is extremely unfortunate given that it's made Singularity-denialism fashionable.



Personally, I think the time is right to retire the term. It's not very useful; it describes what we don't know rather than what we do know — and a picture of the future is slowly starting to come into focus. And you're right — we should start adopting new terms that are more descriptive, perhaps something like "Machine Intelligence Explosion" or "Superintelligent Digital Ecologies," and so on.



Regarding the fast take-off potential, I'm still on the fence about that — but I don't necessarily believe that crazy amounts of processing power are required for an SAI to inflict considerable damage fairly quickly once a certain threshold is reached. Moreover, massive parallelism could make up for any deficiencies in raw processing power.



I'm glad you brought up the issue of out-of-control technology being a problem today. Regrettably, countermeasures like firewalls, filters, and anti-virus apps are unlikely to thwart an SAI from doing its work. All these things can be hacked, reverse-engineered, and co-opted. The US/Israeli-built Stuxnet computer worm is a good example of what awaits. And indeed, as our infrastructure becomes increasingly brittle — particularly as we build the Internet of Things — the potential for a catastrophe steadily increases. As for high-frequency stock trading being on the wane, that may be so, but it's a safe bet that an AI arms race is set to begin between the world's military powers. Additionally, AI is about to become a massive multi-billion-dollar global industry, where competitive forces will result in increasingly powerful algorithms and AI.





It's too early to know what needs to be done to prevent a disaster and to help us control the developmental trajectory of AI. Some theorists have proposed that we develop a "seed AI" from which all other advanced AIs spring. This seed would contain all the requisite programming to keep an AI from straying off course, so that it remains safe and friendly. But the challenges in developing such a seed are monumental, as would be the task of preventing other developers from building their AI off-seed.



Mez: I love it. More specific terms!



I agree with you that as our technology gets more powerful, the risks from it grow — whether from cascading failures, or sabotage, or out-of-control scenarios. I think you're right about an AI and drone arms race in the military, for example. Right now drones are piloted remotely by humans. But those radio signals are too easy to jam, and humans have very, very slow reflexes. At some point, the country that deploys autonomous drones will have an edge on the battlefield, so autonomy is evolutionarily selected for. Then again, what are air-to-air missiles but a kind of limited, semi-autonomous drone?



Your point about the Internet of Things is a good one, too. There's a lot of benefit to be had there. We can save energy, deploy sensors to monitor the environment, monitor our health, improve transportation… but you're right that having all these things wired up also means that devices that weren't vulnerable before are now. And so people are asking: how can we possibly make this stuff secure?



Maybe a better question is how we can increase resilience. Here's an analogous story. A lot of people are worried that the incredible exponential plunge in the cost of genomics is going to enable terrorists to create dangerous bio-weapons. Well, there are a lot of reasons it's actually really hard to make a good bio-weapon. Tinker with a pathogen and the overwhelming majority of the time you make it less effective as a weapon instead of more. But more to the point, consider that the rapid drop in sequencing costs also means that doctors in the field can now quickly sequence new viruses they encounter. Couple that with sensors and the global net. Couple that with Craig Venter's idea of being able to print vaccines on the spot. That's technology building resilience. And I think that analogy applies to AI, to cyber-terrorism, to nanotechnology, and to most other futuristic scenarios you can come up with. The same technology that creates the risk can also — if it's distributed right — create a network that's resilient to the risk.



It's really interesting that we've drifted into talking about the risks of technology, actually! And I'm glad. Because so often there's a very naïve optimism that technological progress will bring only solutions. That's not the case. New technologies have made our lives better, but they've often brought potential or actual problems, too. How to deal with that mixed bag is one of the themes of my novels.



But I want to close by just saying that, despite not being big on the whole idea of a 'Singularity', I am wildly optimistic about the future. I do think there are going to be problems. I also think that, on balance, we're going to see huge benefits for humanity. We can see it already. Poverty is plunging around the world. Cell phones — this technology that was sci-fi a few years ago — are now in the hands of almost everyone on the planet, and they're going to get billions of people in Africa and Asia onto the internet in the next decade. Self-driving cars could cut down on the more than one million people killed each year in car accidents and the hundreds of billions of hours a year people spend driving. I think we're going to make huge progress against disease and disability in the coming decades. And I'm optimistic that exponential progress in solar power and in energy storage is going to eventually turn the corner on climate change.



It may not be a Singularity, and it's going to have its own share of new and weird problems, but I think the future's going to be, in most ways, a whole lot better than the past.



Top image: Waterfall City by Theo, via Concept Ships.


Follow me on Twitter: @dvorsky


This article was originally published on io9.com