Don't let the bastards get you from behind!
Richard Loosemore
2011-01-05

I normally would have said an automatic "no", but since he was my parents' boss, and I recognized him, I climbed in.

His first words were a cheerful but grand-uncle-ish warning: "Shouldn't have been walking on that side of the road, y'know. Got to walk on the side facing the traffic. Don't let the bastards get you from behind!"

Knowing that he had a history as an Army Major, I could see his advice was a mixture of civilian and battlefield common sense. But also, the phrase struck me as a warning about life in general. The hazards in front of you are the big, obvious ones, and (if you care about such things at all, which most people don't) you can spend your whole life looking for those hazards and planning avoidance strategies. Global warming. Genetically engineered zombie viruses. Grey goo. Other existential threats that don't start with a "g".

But then there are the really stupid, boring threats that don't have any glamour at all. Like, we're all just walking along the street thinking about how wonderful technology has become, as we talk to a friend on a portable telescreen with a memory the size of the Library of Congress, when BAM! a riot starts up in a poor ghetto on the other side of town. Apparently the differential between rich and poor got so huge that the number of people existing in a state of absolute squalor, with nothing to live for, reached a tipping point at which they no longer cared how brutally the police would react. Then, in a couple of weeks (do you know how long it took the barbarians to reach Rome?) every country in the "civilized" world is reduced to piles of burning rubble and pools of blood.

Stupid and pointless. A whimpering end to civilization. But what is the relative likelihood of that kind of thing happening, as compared to a slow decline due to global warming, or a robot rebellion? How many people are sure they understand the relative importance of these threats? And how many understand where the real solutions lie, as opposed to the fantasy solutions that just make us feel like we are doing something?

It pains me to hear so much talk of how technology is accelerating, and how far we have come in just the last decade. Sure, the advances have happened. Me, I believe in it, and I am doing my best to create more of it, and (most importantly) to create safer forms of it.

There are so many people who love the technology, but who have a staggeringly naive understanding of the forces at work in societies. No understanding of complex systems. No appetite for thinking about the stupid, boring ways in which the party could come to an end. No appreciation of the linkages between the politics they choose (often based on simplistic slogan-ideas like "I want government out of my wallet and out of my bedroom") and the actual, on-the-streets consequences of their political choices.

The important things are not where you think they are.

Pay attention to the wrong things, and the bastards will get you from behind.

There is an interesting coda to that story about my ride down to the village. The "dear old guy" was Major Edward Baring, a cousin of the family that owned Baring Brothers Merchant Bank. A few months after that episode, my parents got a new job working for the other Barings, the ones who owned the bank. So, for the next few years, our family lived in a cottage situated on a fabulous country estate in Hampshire (my parents worked as gardener and housekeeper for the Barings, so the cottage came with the job and we got to use their five-thousand-acre park as our back yard). But then, many years later in February 1995, after being a pillar of the London financial community for 233 years, the Baring Brothers bank collapsed overnight when rampant internal negligence allowed a rogue trader to get out of control. BBMB went from perfect solvency to utter collapse in one weekend.




The point of my story is not to warn of bizarre hazards that nobody could have anticipated, but to warn about the things that are easy to see, but that are ignored for reasons of ... well, inconvenience, or because they are not sexy, or (and this is the main reason) because they involve some kind of politics and are therefore handled by the tribal part of the brain rather than the thinking part.

I don't really want to list all the examples, but I do want to shine a light on a couple of notable ways in which we might be staring something in the face, but not seeing it.

One of these is politics-politics (in other words, government), while the other is the politics of the academy.

One obvious lesson from history is that you can discuss problems and invent solutions until the cows come home, but if the instruments of power are in the hands of people who don't care to implement the solutions, nothing will happen. Now, sure enough, it is not easy to get power to the people who really do care. But, that said, I have watched smart people invent brilliant solutions with their left hand, while they use their right hand to promote their own tribal politics. They can think of the physical or biological world in objective terms, as a system, but they cannot see the political world objectively, as just another system. Instead, they attach themselves to a political philosophy developed and managed by their hindbrain. This, when the entire problem is actually governed not by our ability to invent solutions but by the political wall that stops solutions from being implemented.

This is kind of amazing, when you think about it. It touches such a raw nerve to point out that people may have to adjust their political allegiances if they want to make the world safe for the future, that some people who would do anything to save that future cannot actually bring themselves to consider the possibility that adjustments might be necessary.

So much for politics with a big P.

The other kind, academic politics, is in some ways more fascinating. And at least as potent.

Among all the technological dangers we face, the artificial intelligence problem is one that garners a lot of attention. In the short term we worry about semi-intelligent UAVs that make their own decisions about whom to kill. In the longer term (is it really the longer term?) we worry about the possibility of an intelligence explosion, when artificial general intelligence (AGI) systems come online and go into an upward spiral of self-redesign until they reach superintelligence.

Artificial intelligence has a second facet, though: it is potentially a solution to many of the other problems. Assuming that it could be trusted not to be a danger itself, AI could be used to manage other systems, invent new ways to harvest energy, process waste, and so on. Pretty much all of the other existential threats could be ameliorated or solved by a completely safe form of AI.

So: big potential danger, and big potential benefit if the danger can be avoided. That ought to focus some minds. That ought to be enough to ensure that everyone cooperates to make something happen, wouldn't you think?

Well, perhaps. But from my perspective as an AGI researcher with an acute interest in the safety of AGI, I see something rather different. It would take a long time to explain my perspective in all its technicolor detail, so forgive me if I just state my conclusion here, but ... the AI community is in a state where many of the participants are doing what they love, not what makes sense.

It goes something like this. Most AI researchers come from a mathematics background, so they love the formal elegance of powerful ideas, and they worship logic and proof. (And just in case you get the idea that I am saying this as a grumpy outsider, you should know that I grew up as a mathematical physicist who fell in love with computing, so I have felt this passion from the inside. Only later did I become a cognitive scientist.) The problem is that there are good reasons to think that this approach may be the wrong way to actually build an AGI that works, and there are also reasons to suppose that a safe form of AGI cannot be built this way. Instead, we may have to embrace the ugly, empirical inelegance of psychology and complex systems.

Unfortunately, this ugly, empirical approach to AGI is anathema to many artificial intelligence researchers. So much so that they would challenge anyone to a fight to the death rather than allow it to come in on their watch.

I have to confess straight away that I am partisan about this: I am on the side of the rebels, not the Galactic Empire, so if you are going to make an objective assessment you cannot take my word alone. But, back to my personal opinion now: if you do look into this, and if you do conclude that what I am saying is correct, you may come to realize that everything that is happening, or not happening, in the field of AI is blocked by this internal paralysis. This triumph of academic politics over science.

So, just as the solution to a problem like global warming has nothing to do with the trivial business of actually finding a solution to global warming, and everything to do with getting a political blockage out of the way, the problem of AI could turn out to be entirely about politics as well.

When you hear talk of how hard it is to find ways to make AI systems safe, or how hard it is to make them intelligent enough to be useful, this probably has nothing to do with anything you thought it did. It has to do with people, and their foibles and hang-ups. It has to do with individual researchers trying to carve out their fiefdoms and defend their patch of the intellectual turf.

(And again: don't take my word for it, because I am biased. But consider it as a real possibility, and maybe think about looking into it.)




I will finish with a slight return to the Baring Brothers Merchant Bank story.

You will remember that I said the bank disappeared in the course of one weekend, because of internal negligence that allowed a rogue trader to get out of control. This story is usually simplified in the popular press to make it look as if Nick Leeson was the rogue trader who single-handedly destroyed the bank, but the truth is far more instructive.

There was an oversight department in Leeson's branch, designed to ensure that traders could not do the insanely foolish things that he actually did. The problem was that the bank was so riddled with negligence and complacency (people felt too embarrassed to ask questions about practices that were transparently wrong) that the person in charge of the oversight department was ... Nick Leeson himself.

So the problem of keeping a bank from collapsing is not really about having the right oversight and auditing mechanisms; it is about the complacency and internal politics of the organization.

And the problem of stopping space shuttles from exploding just after launch (if I may be permitted a last-minute extra example) is not so much the engineering as the complacency and internal politics that allow one voice of sanity to be overruled on the grounds that reality must not be allowed to interfere with schedules.

And the problem of finding solutions to global catastrophic risks is not so much in the solutions themselves, as in the politico-tribal allegiances of those who search for the solutions.

And the problem of building a safe and friendly form of artificial intelligence is not about the technical problem itself, it is about the academic obsessions and power-mongering of those who are both the researchers and their own oversight committee.

These are the things that will get you from behind.