Don’t let the bastards get you from behind!
Richard Loosemore   Jan 5, 2011   Ethical Technology  

One day when I was a young teenager, living out in the countryside in the south of England, a dear old guy I knew drove past me when I was on a long solitary walk. He recognized me and pulled over to ask if I wanted a ride down to the village.

I normally would have said an automatic “no”, but since he was my parents’ boss, and I recognized him, I climbed in.

His first words were a cheerful but grand-uncle-ish warning: “Shouldn’t have been walking on that side of the road, y’know. Got to walk on the side facing the traffic. Don’t let the bastards get you from behind!”

Knowing that he had a history as an Army Major, I could see his advice was a mixture of civilian and battlefield common sense. But the phrase also struck me as a warning about life in general. The hazards in front of you are the big, obvious ones, and (if you care about such things at all, which most people don’t) you can spend your whole life looking for those hazards and planning avoidance strategies. Global warming. Genetically engineered zombie viruses. Grey goo. Other existential threats that don’t start with a “g”.

But then there are the really stupid, boring threats that don’t have any glamour at all. Like, we’re all just walking along the street thinking about how wonderful technology has become, as we talk to a friend on a portable telescreen with a memory the size of the Library of Congress, when BAM! a riot starts up in a poor ghetto on the other side of town. Apparently the differential between rich and poor got so huge that the number of people existing in a state of absolute squalor, with nothing to live for, reached a tipping point at which they no longer cared how brutally the police would react. Then, in a couple of weeks (do you know how long it took the barbarians to reach Rome?), every country in the “civilized” world is reduced to piles of burning rubble and pools of blood.

Stupid and pointless. A whimpering end to civilization. But what is the relative likelihood of that kind of thing happening, as compared to a slow decline due to global warming, or a robot rebellion? How many people are sure they understand the relative importance of these threats? And how many understand where the real solutions lie, as opposed to the fantasy solutions that just make us feel like we are doing something?

It pains me to hear so much talk of how technology is accelerating, and how far we have come in just the last decade. Sure, the advances have happened. Me, I believe in the technology, and I am doing my best to create more of it, and (most importantly) to create safer forms of it.

There are so many people who love the technology, but who have a staggeringly naive understanding of the forces at work in societies. No understanding of complex systems. No appetite for thinking about the stupid, boring ways in which the party could come to an end. No appreciation of the linkages between the politics they choose (often based on simplistic slogan-ideas like “I want government out of my wallet and out of my bedroom”) and the actual, on-the-streets consequences of their political choices.

The important things are not where you think they are.

Pay attention to the wrong things, and the bastards will get you from behind.

There is an interesting coda to that story about my ride down to the village. The “dear old guy” was Major Edward Baring, a cousin of the family that owned Baring Brothers Merchant Bank. A few months after that episode, my parents took new jobs working for the other Barings, the ones who owned the bank. So, for the next few years, our family lived in a cottage situated on a fabulous country estate in Hampshire (my parents worked as gardener and housekeeper for the Barings, so the cottage came with the job and we got to use their five-thousand-acre park as our back yard). But then, many years later, in February 1995, after being a pillar of the London financial community for 233 years, the Baring Brothers bank collapsed overnight when rampant internal negligence allowed a rogue trader to get out of control. BBMB went from perfect solvency to utter collapse in one weekend.


The point of my story is not to warn of bizarre hazards that nobody could have anticipated, but to warn about the things that are easy to see, but that are ignored for reasons of ... well, inconvenience, or because they are not sexy, or (and this is the main reason) because they involve some kind of politics and are therefore handled by the tribal part of the brain rather than the thinking part.

I don’t really want to list all the examples, but I do want to shine a light on a couple of notable ways in which we might be staring something in the face, but not seeing it.

One of these is politics-politics (in other words, government), while the other is the politics of the academy.

One obvious lesson from history is that you can discuss problems and invent solutions until the cows come home, but if the instruments of power are in the hands of people who don’t care to implement the solutions, nothing will happen. Now, sure enough, it is not easy to get power to the people who really do care. But, that said, I have watched smart people invent brilliant solutions with their left hand, while they use their right hand to promote their own tribal politics. They can think of the physical or biological world in objective terms, as a system, but they cannot see the political world objectively, as just another system. Instead, they attach themselves to a political philosophy developed and managed by their hindbrain. This, when the entire problem is actually governed not by our ability to invent solutions but by the political wall that stops solutions from being implemented.

This is kind of amazing, when you think about it. Pointing out that people may have to adjust their political allegiances if they want to make the world safe for the future touches such a raw nerve that some people who would do anything to save that future cannot bring themselves to consider the possibility that adjustments might be necessary.

So much for politics with a big P.

The other, academic politics, is in some ways more fascinating. And at least as potent.

Among all the technological dangers we face, the artificial intelligence problem is one that garners a lot of attention. In the short term we worry about semi-intelligent drones that make their own decisions about whom to kill. In the longer term (is it really the longer term?) we worry about the possibility of an intelligence explosion, when artificial general intelligence (AGI) systems come online and go into an upward spiral of self-redesign until they reach superintelligence.

Artificial intelligence has a second facet, though: it is potentially a solution to many of the other problems. Assuming that it could be trusted not to be a danger itself, AI could be used to manage other systems, invent new ways to harvest energy, process waste, and so on. Pretty much all of the other existential threats could be ameliorated or solved by a completely safe form of AI.

So: big potential danger, and big potential benefit if the danger can be avoided. That ought to focus some minds. That ought to be enough to ensure that everyone cooperates to make something happen, wouldn’t you think?

Well, perhaps. But from my perspective as an AGI researcher with an acute interest in the safety of AGI, I see something rather different. It would take a long time to explain my perspective in all its technicolor detail, so forgive me if I just state my conclusion here, but ... the AI community is in a state where many of the participants are doing what they love, not what makes sense.

It goes something like this. Most AI researchers come from a mathematics background, so they love the formal elegance of powerful ideas, and they worship logic and proof. (And just in case you get the idea that I am saying this as a grumpy outsider, you should know that I grew up as a mathematical physicist who fell in love with computing, so I have felt this passion from the inside. Only later did I become a cognitive scientist.) The problem is that there are good reasons to think this approach may be the wrong way to actually build an AGI that works, and there are also reasons to suppose that a safe form of AGI cannot be built this way. Instead, we may have to embrace the ugly, empirical inelegance of psychology and complex systems.

Unfortunately, this ugly, empirical approach to AGI is anathema to many artificial intelligence researchers. So much so that they would challenge anyone to a fight to the death rather than allow it to come in on their watch.

I have to confess straight away that I am partisan about this: I am on the side of the rebels, not the Galactic Empire, so if you are going to make an objective assessment you cannot take my word alone. But, back to my personal opinion now: if you do look into this, and if you do conclude that what I am saying is correct, you may come to realize that everything that is happening, or not happening, in the field of AI is blocked by this internal paralysis. This triumph of academic politics over science.

So, just as the solution to a problem like global warming has nothing to do with trivial things like finding a technical fix, and everything to do with getting a political blockage out of the way, the problem of AI could turn out to be entirely about politics as well.

When you hear talk of how hard it is to find ways to make AI systems safe, or how hard it is to make them intelligent enough to be useful, this probably has nothing to do with anything you thought it did. It has to do with people, and their foibles and hang-ups. It has to do with individual researchers trying to carve out their fiefdoms and defend their patch of the intellectual turf.

(And again: don’t take my word for it, because I am biased. But consider it as a real possibility, and maybe think about looking into it.)


I will finish with a slight return to the Baring Brothers Merchant Bank story.

You will remember that I said the bank disappeared in the course of one weekend, because of internal negligence that allowed a rogue trader to get out of control. This story is usually simplified in the popular press to make it look as if Nick Leeson was the rogue trader who single-handedly destroyed the bank, but the truth is far more instructive.

There was an oversight department in Leeson’s branch, designed to ensure that traders could not do the insanely foolish things that he actually did. The problem was that the bank was so riddled with negligence and complacency (people felt too embarrassed to ask questions about practices that were transparently wrong) that the person in charge of the oversight department was ... Nick Leeson himself.

So the problem of keeping a bank from collapsing is not so much about having the right oversight and auditing mechanisms as about the complacency and internal politics of the organization.

And the problem of stopping space shuttles from exploding just after launch (if I may be permitted a last-minute extra example) would be the complacency and internal politics that allow one voice of sanity to be overruled on the grounds that reality must not be allowed to interfere with schedules.

And the problem of finding solutions to global catastrophic risks is not so much in the solutions themselves, as in the politico-tribal allegiances of those who search for the solutions.

And the problem of building a safe and friendly form of artificial intelligence is not about the technical problem itself; it is about the academic obsessions and power-mongering of those who are both the researchers and their own oversight committee.

These are the things that will get you from behind.

Richard Loosemore is a professor in the Department of Mathematical and Physical Sciences at Wells College, Aurora, NY, USA. He graduated from University College London, and his background includes work in physics, artificial intelligence, cognitive science, software engineering, philosophy, parapsychology and archaeology.



COMMENTS

Thanks, Richard, for one of the more sophisticated essays on this site. Sad that many will reject or ignore it for its “lack” of simple answers.

Take this with a grain of salt.

When I took LSD, I would often walk on the side of the road with cars at my back, because if I walked on the side with cars facing me, I would be blinded by their headlights due to the extra sensitivity of my eyes, and thus not be able to judge distance accurately. So I made a habit of walking on the other side.

Also, “Welcome to the center of the information war”.

Hope that helps.

Here’s an example of a turf war between NASA and the Air Force:
“The Air Force didn’t care about research that much, and was content to let NASA have that aspect of space, but NASA feared competition from another government agency, and whined about it incessantly to Congress. They were particularly concerned that the USAF program, which was aggressive and designed for a rapid turnaround time between flights, would make NASA look bad. The Democratic Administration at the time was really sold on the idea of a peaceful civilian space agency to distinguish it from the military Soviet space program (though NASA was almost entirely staffed by ex-military types, and actual officers on loan from the various services, so it was civilian in name only), and they felt that having a military space program would undercut NASA’s propaganda value. So, ultimately, they insisted the USAF shut their project down, and the X-20 was abandoned.
The irony is that if the DynaSoar *hadn’t* been cancelled, we probably never would have built the Space Shuttle. Why? Because the Space Shuttle has ended up being amazingly difficult, expensive, and dangerous to operate. Intended for economic access to space, it’s a flying coffin that’s killed 14 people, and which costs twice as much to operate as the ‘wasteful’ Saturn V, while only carrying about 1/3rd as much payload to orbit. It’s a turd that probably would have been avoided if NASA had had the X-20 to look at for practical information about how orbital space planes work. But since they really had no information, we ended up spending the last 30 years learning about it the hard way.”

Very insightful. Much appreciated.

As to the political issues you raise, which are truly central and yet too often underemphasized, I recommend John Gray (Straw Dogs), who observed that decisions about implementing emerging technologies will inevitably be made based on “competition among states, corporations and criminal groups”. Too often, many of us seem to visualize a wise, democratic forum convening; that hope is not evidenced in the history I’ve read.

Also, I believe we’re neighbors.

Very Interesting.

Another example of people’s prejudiced perspectives putting blinders over their eyes. Refusing to accept the world for what it is, and insisting that it is the way they think it is. Solving the problem they wish they had, rather than the one they do.

Cognitive Dissonance festers into hypocrisy as they become aware of the growing corruption within.
ie. Confessions of an Economic Hit Man - John Perkins
and more recently, Wendell Potter with Deadly Spin.

This exact same issue is at the heart of our Economic Crisis, our Health-Care Crisis, our Immigration Crisis slash-combination Drug War, and our ‘Business As Usual’ Wars, both at home and abroad. Never forgetting that Business is War and we take no prisoners (because then we’d have to feed them, and we don’t even want to feed our own soldiers).

As long as it’s ‘the law of the jungle’ out there, mankind can’t afford to mature.
Yet if not now, when?  What does it profit a man…

If Democracy is right for the country, why not the workplace?  Who needs tyrants anywhere?
ie Richard Wolff - Capitalism Hits the Fan - The Mondragon Collective

Back to the issue at hand, Artificial Intelligence: I ask you, if we’re at war with ourselves… should we really be replicating that fractured image?

We already have such a hard time with responsibility and accountability.
ie British Petroleum, Monsanto, Bechtel, IBM, GE, Coca-Cola, Nike, Slavery, Racism, Economic Bigotry, Sexism, The assassinations of JFK, RFK, MLK and that fat Beatle.

Not even mentioning the embarrassment and shame caused to us all, by the existence of McDonalds, Bilderbergs, Bohemians, Masons, Bozos and Dorks.

And Ironically, Personally, I say, Damn the Torpedoes! Smash the Cookware! All or Nothing! History favors the Bold!

And who doesn’t love a good show?

If the universe is filled with billions of planets, who cares if one more blows itself up? I think what we’re most afraid of facing is a simple fact that any insurance actuary will happily confirm… We probably won’t outlive the planet.

We probably won’t perfect the immortality potion, or vaccine, or download, or upload, before we kick it. Isn’t that what everyone’s waiting for nowadays? Hoping to be one of the lucky ones who is still young when they sort out the cure for aging telomeres.

Are we even suitable models for ‘intelligence’?  Isn’t that the nagging thought that keeps us up at night, ethically?

Maybe we’re not capable of creating AI because we need to advance a bit more ourselves?  And the ‘Yes Men’ you mention, hamstrung by their prior obligations…  Well, when have the conformists ever really accomplished anything interesting?

Can anyone show you the origin of their thoughts, or of your own?

We might be able, even with current technology, to invent some fantastic Rube Goldberg machines that may appear to anticipate our every mood. Automatic page turners and door openers, so that as our obese, scooter-bound forms even gesture toward our next desire, our AI is right there, guiding us along. For some of us, it is clear, a return to the nanny-state of childhood is our fondest wish.

But I’d say it’s very long odds on our present selves developing the AI that can rescue us from our own worst enemies.

To quote from http://code.google.com/p/mindforth/wiki/JsAiManual
“When Alfred Nobel invented dynamite, he was creating something useful that could also be used for evil. Likewise our invention of artificial intelligence is not necessarily good or not necessarily evil; it is only dangerous. It is the problem of human society, not us mind-designers and AI coders, to decide whether or not all AI research should be brought to a screeching halt or severely regulated. It may not be possible in advance to guarantee the creation of a Friendly AI. Once they become superintelligent, AI Minds can not be held back by us dimwitted humans. If you like to live dangerously, read on. Otherwise sound the alarm.”

As Richard Loosemore said, academic people can be very logical and wise, but in the political arena tribal. Of course, politics is what governs human society. Or is it? I often think people’s actions are even against their own individual wills. Or, when they vote a certain way, they know they would not have some time before, but are tricked by the media, which is controlled by ratings/money and the government even in democratic societies, to whip up a threat like weapons of mass destruction.
I also sometimes wonder if governments and human society are controlled from outside. I know this sounds weird, but most governments know the threat of global warming, and so do quite a few of the governed. The global warming deniers seem to be large and in the majority. Or is it just a loud raging noise that makes them seem so large, and that makes the majority want to follow the pack?
Is human society being led to its own destruction by forces outside the planet?
That might seem like an opt-out-of-responsibility fantasy. I know that in humans there are very primal drives, but have these been brought to be the dominating factor that is controlling things?

