Waiting for the Great Leap…Forward?
J. Hughes
2007-11-02




An audio version of the talk is available at the Singularity Institute website




Abstract: Sentient, self-willed, greater-than-human machine minds are very likely in the coming fifty years. But to ensure that they don't threaten the welfare of the rest of the minds on the planet, a number of steps need to be taken. First, given their radically different architecture and origins, developing software capacity for recognizing and relating to, perhaps having empathy for, human sentience should be a design goal, even if machine minds are likely to evolve beyond human perspectives and emotional traits. Second, building on the global networks established to identify and respond to computer viruses, governments and cyber-security firms need to develop detectors and counter-measures for self-willed machine intelligence that may emerge, evolve, or be accidentally or maliciously released. Those detectors and counter-measures may or may not involve machine minds as well. Third, human beings should aggressively pursue cognitive enhancement and cyber-augmentation in order to give themselves a competitive chance against machine minds, economically and in the event of conflict. Fourth, since machine intelligence, self-willed or zombie, is likely to displace the need for most human occupations by the middle of the century, industrialized countries will need to renegotiate the relationship between education, work, income, and retirement, extracting a general social wage from robotic productivity to lift all boats, not just those of the shrinking group of workers and owners of capital. Finally, in order to ensure that we do not recapitulate slavery, we will need to be much clearer about what kinds of minds, organic and machine, have what kinds of responsibilities and are owed which kinds of rights. Machine minds with a capacity to understand and obey the obligations of a democratic polity should be granted the rights to own property, vote, and so on. Minds wishing to exercise capacities as dangerous as weapons or motor vehicles should be licensed to do so, while even more dangerous capacities (AI equivalents of bombs) will need to be restricted to control by, or be integrated into the functioning of, accountable democratic governance.




It's been a great conference. I try to keep Zen mind about these things, so that if only five people show up, I don't get disappointed. But this has been great, not only in terms of the numbers but also the quality and diversity. I want to thank the organizers for organizing it.

My background is bioethics, politics, public policy, that kind of stuff. Apparently some of you hate that, I've just heard. But, I'm going to talk to that and I may sound a little bit like Darth Vader but we can have discussion afterwards. My goal in the last couple years of my work with the transhumanist community and what we call the technoprogressive community, kind of lefty transhumanists and people like that, is to try to build a bridge to mainstream public policy concerns so that we can come in from the cold, from the futurist fringes, and begin to actually have an impact in public policy.

[Slide 2]

By the way, the opening slide here has Bender saying, "Hasta la vista, meatbag!" And I chose that because I thought that was a dystopian version of the Singularity, and then this AI is the nice little kid from A.I., and he's representing the utopia. But then I realized, what's the end of A.I.? It's when there's no more humans and it's all robots. And the Bender universe is one where robots can be really awful, like Bender, but there's a mix of humans and robots. There's probably some lesson there that I came up with. Let's explore that later.

[Slide 3]

My assumptions for the talk are first that AGI is likely. I probably started to believe this when I was twelve. And I'm no super genius, and I haven't come up with any super duper calculations about accelerating change to validate it. I just believe it. I think many of us believe it for not entirely rational reasons, but one of the reasons that I at least attribute post-facto to rationalize it is that I see greater-than-human intelligence around me every single day. I'm a sociologist, and we sociologists look at organizations as examples of greater-than-human intelligence. As organizations of information that have memory, that have intent, that have meta-cognition, that create boundaries, that reproduce themselves, that marry, that have affairs, that divide and mutate, organizations actually are meta-human intelligence of one sort or another. They may be squalling baby-level intelligence. They may be cockroach-level intelligence. They may not be what we are talking about with AGI, but they are a form of meta-level intelligence.

I think that perspective has shaped the way that I approach some of these questions. It is a theory-of-complexity kind of perspective. Actually, I just met Peter Russell for the very first time. He wrote a book back in the 1980s called The Global Brain, which was very influential on me, and argued that the globe, all of us communicating through some kind of as-yet-uncreated information architecture, would create a kind of meta-brain, with each of us a neuron in that meta-process. Vernor Vinge says in his classic Singularity essay that we need to look at network and interface research, that there is something as profound and potentially wild as artificial intelligence in the computer-to-human communication networks that we've built. You can see this in Greg Stock's Metaman book and John Smart's work on this topic. So, when we began to have something like Google, which is something that I rely on all the time to augment my own intelligence, but which is actually the summation of billions of people's collective intelligence put through computer algorithms and then fed back into each of our brains, that begins to suggest to me that we may not be entirely correct to be looking for a designed intelligence, as Steve Jurvetson was saying, in a box someplace that someone's going to come up with.


[Slide 4]

Now, I also have the assumption that AGI is probably very dangerous. You may know this robot over here from Battlestar Galactica, a very, very dangerous robot. And then Colossus, one of the classic supercomputers that wakes up and takes over the world. He thinks he knows better than we do how to run our world - an idea that I'm not entirely happy with. I believe robots can be very, very dangerous, both because they might magnify the malicious and stupid intents of the humans who designed them and who might control them, and also because, if they were in fact self-willed, the intents they develop might be contrary to our own.


[Slide 5]

I think that on those first two principles, that AGI is possible and that it's dangerous, I agree with most of the people who would call themselves Singularitarians. I think that where we begin to diverge is in the notion that AGI is going to be radically alien. And I say this partly because the only pets that my family has ever owned are a gecko and a cat. We adopted a gecko about ten years ago and I have never in those entire ten years ever felt like I had any kind of communication with that gecko. I share 60% of my DNA with that gecko, we feed it every day, and when I put that gecko in my hand, I could be a mountain to that gecko. I could be a tree it's climbing up, just a tree that happens to move around and pet it on the back. If it's that hard to establish empathy and communication with a neural architecture that shares that much of the evolutionary tree with me, then there is a similar kind of gap in our ability to imagine what we will get even if we attempt to reverse engineer the brain and put it in a box. Whatever comes out of that box may not, for instance, as Sam Adams says, even share our time sense.

If you remember that classic Star Trek episode, there are people on a planet who have zooped themselves up, so that when they get on the Enterprise everyone on the ship seems to be standing around like a statue, and the crew can't even see the visitors because they're moving so fast. I think our existence, and the existence of AIs, might be like that. They might try to communicate with us, and they may start by trying to communicate with the highest form of intelligence and organization that they see, which might be the New York Stock Exchange, or the bacterial colonies in our gut, or the ant colonies in our back yard, or something.


[Slide 6]

So, I think that the Friendly AI project, which is embodied by the Singularity Institute for Artificial Intelligence - although it's beginning to (I'm very happy to see) branch out from that - is a good project. But I think that the principal thing we will learn from that project is how to make friendly human beings. I've written about this in Citizen Cyborg. It's not just that robots with superintelligence should be friendly; I think human beings with superintelligence should also be friendly. And we need to start thinking about how to keep people with vast amounts of power accountable and friendly and not sociopathic. We may learn that from doing things like affective computing, from trying to reverse engineer the social and emotional parts of human brains in computers, and we may also learn things about human ethics.

Wendell Wallach talked about utilitarianism and deontology, about trying to code different ethical codes into computers. You may have heard that humans have some innate moral intuitions from evolutionary psychology, and we can test those. One of the tests is this: you see a train coming down the tracks, it's going to hit five kids, and if you push a button, you can make it go onto another track where it's just going to hit one person. Would you push the button? 90% of people say yes, I'd push the button, save five kids, let it hit one person. The second scenario is: you're standing on a bridge above a train track, next to a very fat man. You see a train approaching that is going to hit five kids, and you realize that if you push the fat man onto the tracks his body will stop the train. Would you push the fat man onto the tracks? 90% of people say no. Well, why is that? It's because we have this monkey-brain intuition that it's not a good idea to push people onto railroad tracks. Maybe to push a button, but not to push people on directly. That refusal doesn't make any sense from a utilitarian point of view, and we do a lot of training with soldiers to overcome that monkey-brain feeling so that they will send five comrades out to save the lives of fifty other comrades.


But we could design a perfectly utilitarian computer. We have actually found humans with brain damage that eliminates their moral compunction about pushing the guy over. You ask them, "Would you push the guy over?" And they say, "Yeah, absolutely, I'd push the guy over." They are perfect utilitarian thinkers. Okay, so you might want a perfect utilitarian thinker as one of your soldiers on the field, but I wouldn't want to stand next to him on a bridge. I don't think I would necessarily want a perfectly utilitarian Super AI out there either. And the same goes for people who are purely deontological thinkers: "I will never lie under any circumstances." Well, you've seen many examples in literature, and you've probably met some people, who don't have that filter, who never tell a lie, and it's kind of annoying to be around.
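To make the contrast concrete, here is a minimal sketch, in Python, of what "coding different ethical codes into computers" might look like for these two cases. All of the names and the simple lives-lost bookkeeping are hypothetical illustrations, not anyone's actual machine-ethics implementation.

```python
from dataclasses import dataclass

@dataclass
class Option:
    description: str
    lives_lost: int
    uses_person_as_means: bool  # e.g. pushing someone onto the tracks

def utilitarian_choice(options):
    # A pure utilitarian rule: minimize lives lost; nothing else matters.
    return min(options, key=lambda o: o.lives_lost)

def deontological_choice(options):
    # A simple deontological constraint: never directly use a person as a means;
    # among the remaining options, minimize harm.
    permitted = [o for o in options if not o.uses_person_as_means] or options
    return min(permitted, key=lambda o: o.lives_lost)

switch_case = [
    Option("do nothing; the train hits five", 5, False),
    Option("push the button; the train hits one", 1, False),
]
footbridge_case = [
    Option("do nothing; the train hits five", 5, False),
    Option("push the fat man onto the tracks", 1, True),
]

# Both rules divert the train in the switch case. Only the utilitarian rule
# pushes the man in the footbridge case, mirroring the 90%/90% split in
# human answers described above.
print(utilitarian_choice(footbridge_case).description)
print(deontological_choice(footbridge_case).description)
```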


[Slide 7]


The final problem with this attempt at Friendly AI is that, I believe, motivations can drift. They can be edited. The robot on this slide says, "Welcome to my annual 'Meat and Greet,' succulent fleshlings." And she says, "You promise not to kill anyone today?" "Oh, I swear on Wikipedia's entry for 'Honor.' This unit will be right back. I have to check on the potatoes." And then he goes and edits the Wikipedia entry for "Honor."


All of us, I think, have a certain difficulty now, though perhaps not in the future, in editing our moral intuitions. Although some very interesting recent work has suggested, for instance, that liberals are responsive to certain moral intuitions, and that we who are liberals are able to ignore the moral intuitions that many conservatives have about cleanliness, spiritual pollution, the importance of hierarchy, and things like that. So we may already be overcoming our moral intuitions. And I think computers will be able to edit their moral intuitions as well.


[Slide 8]

Now, part of my critique of the Singularitarian vision is also that, because of my studies of religious and political history, I see an enormous amount of millennialist influence on this movement. And I don't mean to disparage it, because I have a utopian side, I have a millennialist side, and when I get up on the wrong side of the bed, I have an apocalyptic side. I find inspiration in all of those different parts of my personality. My utopian aspirations make me want to create a better world. My apocalyptic concerns about the future and my children's lives motivate me to work for things like arms control. So I don't disparage the concerns, the millennialist impulse. What I think we need to be self-critical about is the way in which millennialist impulses create cognitive biases.

One of the ones that I see quite clearly in Singularitarianism is a positive assumption that whatever comes out of all of this is going to be good. Then you press any Singularitarian and say, "You also acknowledge that there's an apocalyptic possibility?" "Yes, there's an apocalyptic possibility, but we still need to go full speed ahead, right to the Singularity." Well, that doesn't sound consistent, right? What are we going to do? Oh, well, what we're going to do is make sure that it's Friendly, so that when it comes out of the box it's going to be just like Jesus Christ, descending with a host of angels, giving us all manna from heaven. Everyone's going to have a cell phone and everything else that they want. I think that's a millennialist cognitive bias. The assumption is that, because of the inevitability of the techno-rapture of the nerds, it's impossible that we're all going to get whacked by this thing.


So, the response that I also get to that is: atomic war is a real possibility. Is it apocalyptic to say that you are worried about atomic war? No, it's not apocalyptic. If you walk around all the time certain that atomic war is going to happen, that's a particular psychological condition that's happened to you, because I don't think it's realistic to walk around all the time certain that atomic war is going to happen. But if you say, "I'm worried about atomic war, the way I'm worried about bio-terrorism or other kinds of existential threats, and I'm going to engage in these public policy actions to try to reduce those risks: arms control, diplomacy, the creation of international policing structures, and so on," then that is not a millennialist bias. That is serious engagement. You're not engaging in magical thinking, that there are only three guys in their shorts in some AI computer lab who are going to solve all the world's problems. You are engaging in a collective kind of intelligence.

[Slide 9]

Connecting to Steve Jurvetson's ideas, Steve is talking about the intentional evolutionary processes that might lead to AI. I'm worried about the evolutionary processes of Alife, loose in the information architecture of the world. Perhaps it will never get as smart as a human being. Perhaps it will just be as smart as cockroaches, but cockroaches are extremely annoying. Rats are extremely annoying. Feral dogs are extremely annoying. So however intelligent they get, they could be extremely annoying. Bacteria aren't that intelligent, but they are extremely annoying. Now, to give you an example of what I'm talking about, you may have heard in recent news that there's a Storm Worm bot loose. China apparently unleashed a denial-of-service attack against Russia. Chinese hackers have been accused of hacking into the Pentagon, and there's all kinds of cybersecurity concerns around the world.


[Slide 10]

Something that happened just this week: reports say the Storm Worm bot has colonized up to two million machines around the world from about 5,000-6,000 infected websites. The botnet has generated billions of messages, and reached a peak on August 22 when 56 million virus-infected messages, 99% of them from the Storm Worm, were traveling across the internet. It's coded to detect IP addresses that run security scans on it and to launch DoS attacks against them. It communicates the identities of the people who are trying to scan it to other members of the distributed network, and they all launch a denial-of-service attack against the scanner, and also against the anti-spam organizations and individual researchers who have tried to download it. This is a quote from one of the security researchers: "If a researcher is repeatedly trying to pull down the malware to examine it, the botnet knows you're a researcher and launches an attack against you. It's a new behavior for a botnet. It's acting in a defensive manner. It's a little scary, isn't it?" he says. He's the director of a university cybersecurity organization.


Gunter Ollman, the Director of Security at IBM's Internet Security Systems, says, "This is the first time I've ever seen an automated response like this." Okay, so this is something that's happening this week. We should be talking about this. These are people who are concerned about expert-system behaviors that are threatening the communication networks of our world. And we should be talking about how we communicate with them. We don't want to see a future which has, as Christine said, a top-down solution that comes up with something we don't like. So we need to get involved in the cybersecurity debate about the kinds of future threats that we see.
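To make the detection problem concrete, here is a toy sketch of the kind of defender-side heuristic this behavior suggests: flag any host that, shortly after being probed by a security scanner, starts flooding that scanner with traffic. The data shapes, field names, and threshold are hypothetical illustrations, not a description of any real intrusion-detection product.

```python
# Toy sketch of a retaliation detector: flag hosts that answer a security
# scan with a burst of traffic aimed back at the scanner, the kind of
# "defensive" botnet behavior described above. All values are hypothetical.

RETALIATION_THRESHOLD = 1000  # packets sent back at the scanner within the window

def flag_retaliating_hosts(scan_events, traffic_events, window_seconds=300):
    """scan_events: iterable of (time, scanner_ip, scanned_ip) tuples.
    traffic_events: iterable of (time, src_ip, dst_ip, packet_count) tuples.
    Returns the set of scanned hosts that flooded their scanner soon after."""
    traffic = list(traffic_events)
    flagged = set()
    for scan_time, scanner, scanned in scan_events:
        packets_back = sum(
            pkts for t, src, dst, pkts in traffic
            if src == scanned and dst == scanner
            and scan_time <= t <= scan_time + window_seconds
        )
        if packets_back >= RETALIATION_THRESHOLD:
            flagged.add(scanned)
    return flagged

# Example: host 10.0.0.9 is scanned at t=100 and sends 50,000 packets back
# at the scanner within five minutes, so it gets flagged.
scans = [(100, "192.0.2.1", "10.0.0.9")]
traffic = [(160, "10.0.0.9", "192.0.2.1", 50000)]
print(flag_retaliating_hosts(scans, traffic))  # {'10.0.0.9'}
```

A real detector would of course correlate signals like this across many networks, which is exactly the kind of shared global infrastructure the next slides argue for.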

[Slide 11]

There are very few people who are doing that. This is one example that I found, but I couldn't find very many! I looked around for anybody in cybersecurity who was talking about the potential future threats from expert systems or AI or anything like that. Very few people are talking about it. It has to be global. It has to be national and local as well, but it has to be global. Another thing I didn't hear mentioned this weekend was the Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies, an agreement by 40 different countries that was signed in 1996 and which restricts trading in cryptographic software, expert systems, and supercomputers. Now, we don't like trade controls, we don't like export limitations. You know, China should be able to buy anything they want from the United States, even if they're a totalitarian nation, etc. etc. etc. But there are things that are actually out there.



[Slide 12]

There are international agreements, laws, and regulations controlling dangerous software, and we need to be talking to the folks who are making those kinds of arrangements about what the future threats might be. On the flipside, there is a positive dimension to international cooperation. Jamais was challenged about this and he gave the example of the Human Genome Project, which set aside 5% of its funding for studies of the ethical, legal, and social implications of genetic work. And the National Nanotechnology Initiative did the same. We can imagine an international cooperation around AI that funds conferences like this; you wouldn't even have to pay fifty bucks to get in if you got that kind of funding. We could all be talking about these kinds of projects internationally.



[Slide 13]

Now, one of the interesting conundrums is therefore this: if we were to create an international police structure that tried to monitor for emergent or evolved AI expert systems in the informational architecture of the world, committing cyber crime and so forth, it would probably also have to have AI as one of its components, perhaps AI that is somehow tied through loyalty to its human institutions in some way that the others are not. But then you get into that conundrum: how powerful an AI would we be able to have? If a human soldier is going to use an elephant on the battlefield, how well can that soldier actually control the elephant? We may not be able to have powerful enough security structures to detect and control the potential AI threats out there.

[Slide 14]

So, one of the things that I think is integral, and has been integral since the beginning of this discussion, is human intelligence augmentation. It was in the original essay by Vernor Vinge. He says there is AI, and then there's IA: intelligence amplification. Intelligence amplification has various advantages. We know how human beings think. We might be able to relate to them. Some of them are already friendly. Intelligence amplification would potentially allow us to stay one step ahead of this change. And I know that the Singularitarians then roll their eyes back in their heads and say, "Oh my god, you do not understand anything about accelerating change, because the AIs are going to bootstrap themselves into godhood in 3.5 seconds and turn all of the planet into computronium." Well, okay, maybe. If so, then we have to talk about how to prevent that, because I don't think that's such a great solution.

If not, one of the things that we might talk about is that keeping organic human origin brains as a part of the control mechanism for whatever information architectures we create, maybe that's a good idea. And maybe this lack of progress in creating AGI, and substantial progress in creating limited, non self-aware, non-sentient AI architectures that are able to do things that human beings decide they really want done, maybe that's a good thing. Maybe it's a good thing that you don't have to have an argument with your toaster in the morning to convince it to give you bread. "Because you're too fat. I don't think you need bread. You should be going low carb, I'm sorry." But what I often hear from the Singularitarians is "the code is pure." Human beings have genetic algorithms that are running, and they're all selfish. The code is pure. Even Vernor Vinge says, "A creature that was built de novo might possibly be a much more benign entity than one with a kernel based on fang and talon."


[Slide 15]

Well, maybe we could come up with pure code. But maybe we could purify the humans, as well. We are already figuring out what the roots of sociopathy, lack of empathy, violent tendencies and depression are in human beings. I see great promise in human augmentation of our moral ability. It's the focus of my next book, of course.


[Slide 16]

Of course, it's been expected for a hundred years that robots and automation would eventually get rid of the necessity of drudgery, and that all human beings would be able to live the life of the lotus-eaters, upload ourselves into Second Life, and just have a great time. It hasn't happened yet, but there are beginning to be signs of structural unemployment in various economies. Once you automate physical labor and then you automate intellectual labor, there's not a whole lot left for human beings to do, right? The argument of the economists has always been that the capitalist market will always create new jobs. Well, what are those new jobs that human beings might do better than advanced AI and advanced robotics? I don't think there are very many you could come up with. Maybe sex work, psychiatry, writing poetry. But you don't have a great economy if all you have is sex work, psychiatry, and writing poetry. That's not a real human economy.

Marshall Brain estimates that we will have 50% unemployment by 2050. But then the National Academy of Sciences just two weeks ago published a paper that argues we will have 60% unemployment as the result of automation by 2030. Now, that's probably way too optimistic. And one of the reasons it's way too optimistic is that, as you saw in A.I., if that started to happen you would have humans rounding up robots and getting them to kill each other in big coliseums. But there are advantages to a life of leisure if we can renegotiate the contract around work and income, so that people can all feel they have a stake in a roboticized future instead of just getting screwed by a capitalist system that doesn't care that they're going to be left jobless in the street and that their kids aren't going to get to go to college. We need a renegotiation of the contract around work, income, and leisure in this society to prepare for the roboticized future.


[Slide 17]

We already talked about robotic rights, and I believe that under certain specific conditions robots should have rights. But I'm glad we have also talked about whether, if we shackle these robots with certain kinds of restrictions, shackle them, for instance, to always care about human interests before their own, that is really the model of how we respect other sentient minds that we want to be perpetuating. Because I have argued that we should be treating robot minds the same way we treat human minds, and by that logic maybe we should go around and force everybody who is a racist to undergo mandatory reeducation and not be racist anymore. And then you get into questions of thought police and cognitive freedom, and so forth. So, I would recommend that you read Nick Bostrom's essay on the principles for ethically creating artificial minds. He is basically recapitulating what we will probably have to do for kids as well, once we have control over the minds of our kids. (I hope it's soon; I have a 12-year-old and a 14-year-old, so it's almost too late for me.) He argues, for instance, that you need to have a commitment to creating a flourishing mind. You can't just say, "I'm going to create a mind that will get up to an IQ of 120, but it will be impossible for it to get past 120." At the same time, we need to have a general discussion about whether it's okay for it to go to an IQ of 120 million. Perhaps not. We need to have this discussion. What does it mean to have a flourishing mind? What does it mean to have an open mind? To have cognitive self-possession, and at the same time live in a society with limits?

[Slides 18 and 19]

Licensing superpowers is my last thought here. If you need a license to drive a car, to prove that you are not a sociopath and that you know what you're doing, what about something that is as dangerous as a car, like some kind of advanced expert system? If we think that only governments should be able to own something as destructive and powerful as a nuke, then perhaps only governments should be able to own SIs above a particular level. I'll just leave that thought for discussion later, but I think it's an important idea. It leads to the idea that perhaps there are some kinds of technology that we need to talk about banning. Perhaps there are situations in which it is not, in fact, better to be dead than to delay the Singularity.


[Slide 20]

And with that, I'll say, I do believe a good Singularity is possible. I'm looking forward to living in a roboticized future, a life of leisure, uploading into Second Life, but I think we need to work that out. Thank you very much.