Aubrey talk on Longevity and the Singularity transcribed

Sep 8, 2007

The People Database Project has transcribed a talk by Aubrey de Grey on “Longevity Escape Velocity and the Singularity” from the 2006 Singularity Summit hosted by the Singularity Institute for Artificial Intelligence. Video and audio are also online.

Longevity Escape Velocity and the Singularity

My name is Aubrey de Grey. I am a biomedical gerontologist, which means that I work on developing future technologies that can postpone and indeed defeat the biological aging of humans, and thereby save a hundred thousand lives a day. I work as an advisor to the Singularity Institute for a number of reasons. Firstly, because future technologies, technologies that aim high to change and transform the human condition, have a lot in common: psychologically, philosophically, and technically. So, I have that sort of connection with artificial intelligence. Secondly, because I actually used to work in artificial intelligence research, before I was a biologist, twenty years ago. And thirdly, because the philosophy and the technology come together when we consider the social context. So, a lot of my work involves discussing and developing ideas with regard to how this will change society. And the changes resulting from one really profound technological advance tend to overlap with the changes that will happen in response to a different one.

The singularity is a phase change, if you like, in the rate of progress: a situation in which we are accelerating the advance of certain technologies so rapidly that the predictability of how things are going to look becomes not merely difficult in terms of time frames, but difficult qualitatively. Difficult in terms of what this technology and our relationship to it will look like when the technology arrives. My feeling is that the connection between the singularity as normally defined, this sort of phase change, and the concept of smarter-than-human intelligence is a bit of a coincidence. Because the thing is that we don’t yet know how to achieve smarter-than-human intelligence, and a lot of the thinking that has gone into this tells us that we will do it only by developing what are called recursively self-improving systems. Systems that understand themselves and their own workings well enough to be able to automatically implement improvements to their own functionality. Once we get to that point, we have what people often call a “hard take-off.” In other words, an increase in the functionality of these machines that is far faster than anything that has happened before. And that will achieve what we call the singularity, as people describe it, very quickly indeed. So, it’s sort of a coincidence.

There is a demarcation in gerontology that says why things are going to be so different in the future. But it’s a rather different one from the singularity. I usually call it “longevity escape velocity.” But it’s a rather less dramatic phenomenon, a rather less unique phenomenon than the singularity itself. Something that in fact we have seen already in the development of other technologies, like powered flight, computers or whatever. The longevity escape velocity concept simply says that we have a problem to solve - aging. It’s a problem with many parts to it. Some of those parts are harder to solve than others. We need to solve them all, because we are only as strong as our weakest link. But the benefits of solving them are additive, such that it is okay to solve just part of the problem first, and doing that buys time, so that we can solve more.

Now, the concept of longevity escape velocity is a sort of quantitative description of how fast we need to buy time in order to buy more time, to buy more time, and so on. And it’s a much less controversial and, I would say, more likely concept than the singularity itself. The thing about the singularity, you see, is that it requires enthusiasm, it requires continued public pressure, financial pressure and so on, for things to actually come to pass. Very much in the same way that in powered flight, for example, once we got off the ground with the Wright brothers, we moved progressively through transatlantic flight, jetliners, supersonic airliners, but then, where are those flying cars, you know? Why aren’t we already on Mars, in terms of space flight? The answer is: We can’t be bothered, basically. So, my only reservation about whether the singularity is actually going to happen on schedule, so to speak, is with regard to that component of the requirement.
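A minimal sketch of that buy-time-to-buy-more-time arithmetic, using assumed numbers rather than anything from the talk: if the therapies developed in a given calendar year add more than one year to your remaining life expectancy, the point at which you would die of aging keeps receding; if they add less, it is only postponed.

# Toy model of longevity escape velocity (illustrative assumptions only:
# the starting remaining life expectancy and the yearly gain are made up).
def remaining_life(start_remaining=30.0, gain_per_year=1.5, horizon=100):
    """Track remaining life expectancy as calendar years pass while new
    therapies add gain_per_year extra years each year."""
    remaining = start_remaining
    for year in range(1, horizon + 1):
        remaining += gain_per_year - 1.0  # one year elapses, therapies add some back
        if remaining <= 0:
            return f"remaining life expectancy hits zero after {year} years"
    return "escape velocity: remaining life expectancy never hits zero in this horizon"

print(remaining_life(gain_per_year=1.5))  # gains above 1 year per year: escape velocity
print(remaining_life(gain_per_year=0.5))  # gains below 1 year per year: only buys time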

The real big deal about the singularity is the negative side rather than the positive side. So, let me go back for a minute to my own work, to gerontology. I’m a biologist. I like mainly to work on making [radical life extension] happen as soon as possible, but I spend a lot of my time on the social context. Not just thinking about it but talking about it. And a large part of the reason I do that is because I want to forward plan, and I want society to forward plan, to cope with the turbulence that will result in society when we suddenly figure out that we don’t need to die of aging anymore. This pandemonium will actually hit society not when those technologies arrive, but much sooner: when those technologies become widely anticipated. And that could be only ten years away. So I think that it’s really important to put time into this.

Now, the Singularity Institute focuses on a similar problem. Namely, the possibility that if we develop these machines that are recursively self-improving and become extraordinarily intelligent very quickly indeed, they may not like us terribly much. They may decide that we are not very important. That would be a bad thing. It would be a bad thing from our point of view, at least. Maybe from an objective observer’s point of view, it wouldn’t be such a bad thing, but we are not objective. So, the question is, can we develop machines that can be recursively self-improving but which maintain themselves reliably, within a subset of potential machines of this sort, that carry on thinking of human beings as important, as indeed their main raison d’être? In order to make sure that this happens, one has to do some very careful planning to build these machines in a way that retains that property. It’s worse than that. The Singularity Institute takes the view that we’ve got to get there first. If someone comes along and invents recursively self-improving systems that do not have what we like to call the “friendliness” property, they will get rid of humanity rather quickly if we’re unlucky. So, we’d better develop Friendly AI before anyone else accidentally develops Unfriendly AI. Now, whether Friendly AI is even possible is unknown at this point. Whether it is possible to invent machines which you can give the freedom to improve themselves, without giving them the freedom to become Unfriendly. We just don’t know whether that’s possible. But it’s worth trying.

A lot of the work that the Singularity Institute does is philosophy, rather than computing. In other words, they are trying to figure out what ‘Friendly’ actually means, how to define it. Now, ultimately, the correct definition is the outcome. ‘Friendly’ means they don’t kill everybody, for example. ‘Friendly’ means they defend us against things that might kill us. But, in terms of implementing that, we have to figure it out in detail. We have to try and find analytical, precise descriptions, and so that’s why a lot of the Singularity Institute’s work focuses on that. One of the really hard questions for me, both as a scientist in biology and as an advisor for the Singularity Institute, is that I don’t really know how far science can go in defining and describing these things. And the main reason why I don’t know is because in my heart I’m not actually firstly a scientist. Firstly, I’m a technologist, an engineer. And this helps me enormously. This has been the reason why I have been able to make an important contribution to the field of the biology of aging. I have been able to see that we can now engage in the rational design of interventions that might actually work, even if they are very ambitious interventions that may take 20 or 30 years to actually implement. But it’s the same deal with artificial intelligence. We know basically what we would like to achieve, but when we have to define what we’d like to achieve in algorithmic terms, which of course is what computers work on, then we have to transform our intuitive grasp of all this into something much more analytical, and I certainly don’t know whether that is possible.

The Singularity Institute, like any forward-thinking group, has to draw on a wide range of expertise. They have brought me in as an advisor because of the overlap between the experience and expertise that I have and what they need. So, some of that overlap is at the technical level. I have a certain amount of experience as a researcher in artificial intelligence, which is what I used to do before I got into biology. A lot of it is more broad-based. For example, I’m a high-profile scientist in the sense that I do a lot of media with respect to my own work. It’s important for the Singularity Institute to get advice from people like me on how to get good exposure. It’s also important to understand how these various sciences interact with each other, and they do interact with each other for sure. I mean, one of the reasons why we would like to have really intelligent machines is that they might accelerate the development of technologies that we would like. Like, for example, medicine that would extend lifespan. And so there are an awful lot of different dimensions through which I can interact usefully with the Singularity Institute.

The motivations for supporting any futurist enterprise, whether it be in computation and artificial intelligence or whether it be biomedical, are pretty much the same. Whatever your actual resources are, whether you are a billionaire who has got money to offer, or a programmer, or a biologist who has expertise to offer, or a journalist who has got dissemination to offer, ultimately all of these people should be thinking about these things in the same way. In terms of why people should support the singularity at all, first of all one must define what one means by the term ‘supporting the singularity.’ Would the singularity be a good thing? That’s a big question, and I’m not sure it would be a good thing. I think that a manageable rate of progress, a rate of progress that can be achieved within the traditional hardware that we ourselves have evolved to possess, may end up being the best way to go. There may be a concept of ‘too fast.’

On the other hand, when we look at the motivations that the Singularity Institute has, namely the business of creating Friendly AI before anyone else accidentally creates Unfriendly AI, this is something where speed does matter. Where essentially the purpose is not to cause us to have to move faster through the levels of intelligence, but rather to defend us from this sort of thing happening accidentally. The question of how the singularity will occur comes back to the idea that progress can accelerate. And ultimately what limits the rate of the acceleration of progress are the tedious things, the hardware. I think it’s reasonable to suppose that if we were to develop recursively self-improving computing-based machines, then they would achieve the singularity in next to no time. If we were to rely on biological hardware and on culture to achieve the singularity, then it would still be achieved. The reason why the term transhumanism is often described these days as a transition towards a post-human situation is that people think in terms of our eventual development to a situation where we would regard humans of the 20th century as subhuman, in the same way that we may currently regard Neanderthals as subhuman.

Now, I actually don’t like this way of describing things. I think that it is more appropriate to regard transhumanism the way it was originally defined in the ’50s by Julian Huxley, as transcending, incrementally, our previous humanity. So, remaining human was part of Huxley’s definition. And for the singularity, the same logic applies. If computers had never been invented, then a bunch of things would not be as advanced as they are now, and our rate of advance would be slower. But in some sense that would be okay. Our rate of advance would still be accelerating, just with a smaller exponent. There are some complications to Ray Kurzweil’s arguments with regard to the time frames for when the singularity will develop. Ray certainly bases most of his projections on work of the sort that the Singularity Institute is doing, work that involves intensive and recursively self-improving silicon-based hardware. However, more than that, Ray’s logic, to my mind, at least, relies to a profound extent on the maintenance of public enthusiasm for progress. He does not just talk about computers. He talks about all manner of different types of technology that have happened in the recent past. And I think it’s important to remember that even though you get these accelerating advances, things like Moore’s Law happening, they sometimes peter out. And they peter out when people can’t be bothered anymore.

There’s a reason why we didn’t get to Mars in the 1980s. We didn’t have the national pride component that we had for the moon in the 1960s. And I think at this point that it is very unclear whether that public enthusiasm component will actually come to pass in the time frames we are talking about. The acid test will come when things happen in the laboratory, whether in terms of life extension or in terms of computing, that begin to get people to understand that now is the time to be thinking about these questions. The critical point here is, at that point, it could go either way. That certainly is the point where, if enthusiasm is going to happen, that enthusiasm snowballs, and you go from zero to infinity in no time at all, like what happened with the internet, for example. But it’s also, conversely, the time when it could go the opposite way, when people start to take the thing seriously and say, ‘Well, actually, this wasn’t such a good idea.’ And maybe it will be too late. We can look at nuclear weapons, for example. Arguably, we should have stood back and thought about things a bit more before we invented these bombs. And one can think of other examples. But it might be that at that time we step back and think about things soon enough and just decide we’re not going that way. Maybe the current situation with climate change is the same. Maybe we are actually beginning to get our act together. And, maybe, if we are very lucky, we’ll get our act together soon enough.

The main thing about computing technology is that it doesn’t cost much. I mean, of course, there are certain applications for computers that are extraordinarily CPU intensive where you spend a lot of money. But algorithm development, which is basically what the development of artificial intelligence is about, is something that ultimately goes on in people’s heads, and you can do it on relatively cheap hardware. So, I think it’s relatively likely that such advances as do occur in the development of artificial intelligence may well occur without such a concerted public effort as would be necessary for going to the moon, or inventing nuclear weapons, or sequencing the human genome. It remains to be seen, but it may just not fit into that paradigm. I think the main reason why the Singularity Institute is important is the reason that they say they’re important. Namely, the defense against other people inventing careless AI that wipes us all out. This is something that many people in the mainstream computing and artificial intelligence community still think is so far away that we just don’t need to worry about it yet. But a lot of very smart people have thought about this carefully from a bunch of different angles and have come to the conclusion that actually we can’t be so sure. I think it is important to have a sense of proportion about this, just in the same way that there are a bunch of other things where we think about whether we ought to do them because they might wipe out the human race. We need to think about this, and it’s not too soon.

I am a biologist and I work on the biology of aging. So I feel I can give an informed estimate of how long it is going to take us to achieve what I like to call longevity escape velocity, where we are effectively no longer dying from age-related causes. However, it is a very broad, speculative estimate. I think we have a 50% chance of getting there within 25 or 30 years, subject to good funding in the next ten years to prove the concept. But if we get unlucky and we discover new obstacles we didn’t know about before, it could easily take 100 years. That’s okay with me, because 50% is well worth fighting for. When it comes to the singularity, I’m an advisor on this, from various perspectives, but I’m definitely not a cutting-edge artificial intelligence researcher, as I used to be. So I don’t feel qualified to evaluate the probability of the time frames that the Singularity Institute talks about or that Ray Kurzweil talks about.
