Bostrom, de Grey, Rushkoff answer Edge’s Big Question for 2009

Jan 2, 2009

Edge.org asked 150 of the most visionary minds on the planet - including the IEET’s Nick Bostrom, Aubrey de Grey and Douglas Rushkoff - the question “What will change everything?”

NICK BOSTROM

Philosopher, University of Oxford; Editor, Human Enhancement

SUPERINTELLIGENCE

Intelligence is a big deal. Humanity owes its dominant position on Earth not to any special strength of our muscles, nor any unusual sharpness of our teeth, but to the unique ingenuity of our brains. It is our brains that are responsible for the complex social organization and the accumulation of technical, economic, and scientific advances that, for better and worse, undergird modern civilization.

All our technological inventions, philosophical ideas, and scientific theories have gone through the birth canal of the human intellect. Arguably, human brain power is the chief rate-limiting factor in the development of human civilization.

Unlike the speed of light or the mass of the electron, human brain power is not an eternally fixed constant. Brains can be enhanced. And, in principle, machines can be made to process information as efficiently as — or more efficiently than — biological nervous systems.

There are multiple paths to greater intelligence. By “intelligence” I here refer to the panoply of cognitive capacities, including not just book-smarts but also creativity, social intuition, wisdom, etc.

Let’s look first at how we might enhance our biological brains. There are of course the traditional means: education and training, and development of better methodologies and conceptual frameworks. Also, neurological development can be improved through better infant nutrition, reduced pollution, adequate sleep and exercise, and prevention of diseases that affect the brain. We can use biotech to enhance cognitive capacity, by developing pharmaceuticals that improve memory, concentration, and mental energy; or we could achieve these ends with genetic selection and genetic engineering. We can invent external aids to boost our effective intelligence — notepads, spreadsheets, visualization software.

We can also improve our collective intelligence. We can do so via norms and conventions — such as the norm against using ad hominem arguments in scientific discussions — and by improving epistemic institutions such as the scientific journal, anonymous peer review, and the patent system. We can increase humanity’s joint problem-solving capacity by creating more people or by integrating a greater fraction of the world’s existing population into productive endeavours, and we can develop better tools for communication and collaboration — various internet applications being recent examples.

Each of these ways of enhancing individual and collective human intelligence holds great promise. I think they ought to be vigorously pursued. Perhaps the smartest and wisest thing the human species could do would be to work on making itself smarter and wiser.

In the longer run, however, biological human brains might cease to be the predominant nexus of Earthly intelligence.

Machines will have several advantages: most obviously, faster processing speed — an artificial neuron can operate a million times faster than its biological counterpart. Machine intelligences may also have superior computational architectures and learning algorithms. These “qualitative” advantages, while harder to predict, may be even more important than the advantages in processing power and memory capacity. Furthermore, artificial intellects can be easily copied, and each new copy can — unlike humans — start life fully-fledged and endowed with all the knowledge accumulated by its predecessors. Given these considerations, it is possible that one day we may be able to create “superintelligence”: a general intelligence that vastly outperforms the best human brains in every significant cognitive domain.

The spectrum of approaches to creating artificial (general) intelligence ranges from completely unnatural techniques, such as those used in good old-fashioned AI, to architectures modelled more closely on the human brain. The extreme of biological imitation is whole brain emulation, or “uploading”. This approach would involve creating a very detailed 3D map of an actual brain — showing neurons, synaptic interconnections, and other relevant detail — by scanning slices of it and generating an image using computer software. Using computational models of how the basic elements operate, the whole brain could then be emulated on a sufficiently capacious computer.

The ultimate success of biology-inspired approaches seems more certain, since they can progress by piecemeal reverse-engineering of the one physical system already known to be capable of general intelligence, the brain. However, some unnatural or hybrid approach might well get there sooner.

It is difficult to predict how long it will take to develop human-level artificial general intelligence. The prospect does not seem imminent. But whether it will take a couple of decades, many decades, or centuries, is probably not something that we are currently in a position to know. We should acknowledge this uncertainty by assigning some non-trivial degree of credence to each of these possibilities.

However long it takes to get from here to roughly human-level machine intelligence, the step from there to superintelligence is likely to be much quicker. In one type of scenario, “the singularity hypothesis”, some sufficiently advanced and easily modifiable machine intelligence (a “seed AI”) applies its wits to create a smarter version of itself. This smarter version uses its greater intelligence to improve itself even further. The process is iterative, and each cycle is faster than its predecessor. The result is an intelligence explosion. Within some very short period of time — weeks, hours — radical superintelligence is attained.
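
The arithmetic behind “each cycle is faster than its predecessor” can be made concrete with a toy model (a minimal sketch with assumed parameters, not anything from Bostrom’s essay: the capability gain g, the speed-up factor r, and the initial cycle time t0 are all illustrative). If cycle times shrink geometrically, the total time for arbitrarily many improvement cycles is bounded, which is why the runaway can fit into “weeks, hours”:

```python
# Toy model of an intelligence explosion: each self-improvement cycle
# multiplies capability by g > 1, and the smarter system completes its
# next cycle faster (cycle time shrinks by r < 1). All parameter values
# are illustrative assumptions.

def explosion(g=1.5, r=0.7, t0=1.0, target=1e6):
    """Count cycles and elapsed time until capability reaches `target`."""
    capability, cycle_time, elapsed, cycles = 1.0, t0, 0.0, 0
    while capability < target:
        elapsed += cycle_time   # run one improvement cycle...
        capability *= g         # ...yielding a smarter system
        cycle_time *= r         # ...that iterates faster next time
        cycles += 1
    return cycles, elapsed

print(explosion())  # (35, ~3.33): a million-fold gain in finite time
# Even infinitely many cycles take at most t0 / (1 - r) time units,
# since the sum of t0 * r**k over all k >= 0 is t0 / (1 - r).
```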

Whether abrupt and singular, or more gradual and multi-polar, the transition from human-level intelligence to superintelligence would be of pivotal significance. Superintelligence would be the last invention biological man would ever need to make, since, by definition, it would be much better at inventing than we are. All sorts of theoretically possible technologies could be developed quickly by superintelligence — advanced molecular manufacturing, medical nanotechnology, human enhancement technologies, uploading, weapons of all kinds, lifelike virtual realities, self-replicating space-colonizing robotic probes, and more. It would also be super-effective at creating plans and strategies, working out philosophical problems, persuading and manipulating, and much else besides.

It is an open question whether the consequences would be for the better or the worse. The potential upside is clearly enormous; but the downside includes existential risk. Humanity’s future might one day depend on the initial conditions we create, in particular on whether we successfully design the system (e.g., the seed AI’s goal architecture) in such a way as to make it “human-friendly” — in the best possible interpretation of that term.


AUBREY DE GREY

Gerontologist; Chairman & Chief Science Officer, the Methuselah Foundation; Author, Ending Aging

THE UNMASKING OF TRUE HUMAN NATURE

Since I think I have a fair chance of living long enough to see the defeat of aging, it follows that I expect to live long enough to see many momentous scientific and technological developments. Does one such event stand out? Yes and no.

You don’t have to be a futurophile, these days, to have heard of “the Singularity”. What was once viewed as an oversimplistic extrapolation has now become mainstream: it is almost heterodox in technologically sophisticated circles not to take the view that technological progress will accelerate within the next few decades to a rate that, if not actually infinite, will so far exceed our imagination that it is fruitless to attempt to predict what life will be like thereafter.

Which technologies will dominate this march? Surveying the torrent of literature on this topic, we can with reasonable confidence identify three major areas: software, hardware and wetware. Artificial intelligence researchers will, numerous experts attest, probably build systems that are “recursively self-improving” — that understand their own workings well enough to design improvements to themselves, thereby bootstrapping to a state of ever more unimaginable intellectual performance.

On the hardware side, it is now widely accepted as technically feasible to build structures in which every atom is exactly where we wish it to be. The positioning of each atom will be painstaking, so one might view this as of purely academic interest — if not for the prospect of machines that can build copies of themselves. Such “assemblers” have yet to be completely designed, let alone built, but cellular automata research indicates that the smallest possible assembler is probably quite simple and small. The advent of such devices would rather thoroughly remove the barrier to practicability that arises from the time it takes to place each atom: exponentially accelerating parallelism is not to be sneezed at.
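
The parenthetical about exponentially accelerating parallelism is easy to quantify. A back-of-envelope sketch (the one-hour replication time and mole-scale target below are assumptions for illustration, not engineering estimates) shows why self-copying removes the atom-placement bottleneck:

```python
# Self-replication versus atom-by-atom slowness: assemblers that copy
# themselves double the available build capacity each generation, so
# astronomical counts are reached in logarithmically few steps.
import math

replication_time_hours = 1.0   # assumed time for one assembler to copy itself
target_count = 6.022e23        # roughly a mole of assemblers

doublings = math.ceil(math.log2(target_count))
print(doublings, "doublings, about", doublings * replication_time_hours, "hours")
# 79 doublings, about 79.0 hours: days rather than ages, under these assumptions.
```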

And finally, when it comes to biology, the development of regenerative medicine to a level of comprehensiveness that can give a few extra decades of healthy life to those who are already in middle age will herald a similarly accelerating sequence of refinements — not necessarily accelerating in terms of the rate at which such therapies are improved, but in the rate at which they diminish our risk of succumbing to aging at any age, as I’ve described using the concept of “longevity escape velocity”.
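
A minimal sketch of the arithmetic behind longevity escape velocity (a toy model with assumed numbers, not de Grey’s actuarial analysis): if each calendar year of therapy development adds more than one year of remaining life expectancy, the horizon recedes faster than it is approached and expectancy never runs out.

```python
# Toy model of longevity escape velocity: each elapsed year consumes one
# year of remaining life expectancy, but improving therapies add
# `gain_per_year` back. Above a gain of 1.0, expectancy never hits zero.
# All numbers are illustrative assumptions.

def remaining_expectancy(initial=30.0, gain_per_year=1.2, horizon=100):
    remaining = initial
    for year in range(1, horizon + 1):
        remaining += gain_per_year - 1.0   # net change per calendar year
        if remaining <= 0:
            return f"expectancy exhausted after {year} years"
    return f"{remaining:.0f} expected years still remain after {horizon} years"

print(remaining_expectancy())                   # net +0.2/yr: escape velocity
print(remaining_expectancy(gain_per_year=0.5))  # net -0.5/yr: exhausted in 60 years
```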

I don’t single out one of these areas as dominant. They’re all likely to happen, but all have some way to go before their tipping point, so the timeframe for their emergence is highly speculative. Moreover, each of them will hasten the others: superintelligent computers will advance all technological development, molecular machines will surpass enzymes in their medical versatility, and the defeat of our oldest and most implacable foe (aging) will raise our sights to the point where we will pursue other transformative technologies seriously as a society, rather than leaving them to a few rare visionaries. Thus, any of the three — if they don’t just wipe us all out, which, unlike Martin Rees, I personally think is unlikely — could be “the one”.

Or… none of them. And this is where I return to the Singularity. I’ll get to human nature soon, fear not.

When I discuss longevity escape velocity, I am fond of highlighting the history of aviation. It took centuries for the designs of da Vinci (who was arguably not even the first) to evolve far enough to become actually functional, and many confident and smart engineers were proven wrong in the meantime. But once the decisive breakthrough was made, progress was rapid and smooth. I claim that this exemplifies a very general difference between fundamental breakthroughs (unpredictable) and incremental refinements (remarkably predictable).

But to make my aviation analogy stick, I of course need to explain the dramatic lack of progress in the past 40 years (since Concorde). Where are our flying cars? My answer is clear: we haven’t developed them because we couldn’t be bothered, an obstacle that is not likely to occur when it comes to postponing aging. Progress only accelerates while provided with impetus from human motivation. Whether it’s national pride, personal greed, or humanitarian concern, something — someone — has to be the engine room.

Which brings me, at last, to human nature. The transformative technologies I have mentioned will, in my view, probably all arrive within the next few decades — a timeframe that I personally expect to see. And we will use them, directly or indirectly, to address all the other slings and arrows that humanity is heir to: biotechnology to combat aging will also combat infections, molecular manufacturing to build unprecedentedly powerful machines will also be able to perform geoengineering and prevent hurricanes and earthquakes and global warming, and superintelligent computers will orchestrate these and other technologies to protect us even from cosmic threats such as asteroids — even, in relatively short order, nearby supernovae. (Seriously.) Moreover, we will use these technologies to address any irritations of which we are not yet even aware, but which grow on us as today’s burdens are lifted from our shoulders. Where will it all end?

You may ask why it should end at all — but it will. It is reasonable to conclude, based on the above, that there will come a time when all avenues of technology will, roughly simultaneously, reach the point seen today with aviation: where we are simply not motivated to explore further sophistication in our technology, but prefer to focus on enriching our and each other’s lives using the technology that already exists. Progress will still occur, but fitfully and at a decelerating rather than accelerating rate. Humanity will at that point be in a state of complete satisfaction with its condition: complete identity with its deepest goals. Human nature will at last be revealed.


DOUGLAS RUSHKOFF

Media Analyst; Documentary Writer; Author, Get Back in the Box

THE DISCOVERY OF INTELLIGENT LIFE FROM SOMEWHERE ELSE

We’re talking about changing everything — not just our abilities, relationships, politics, economy, religion, biology, language, mathematics, history and future, but all of these things at once. The only single event I can see shifting pretty much everything at once is our first encounter with intelligent, extra-terrestrial life.

The development of any of our current capabilities — genetics, computing, language, even compassion — feels like an incremental advance in existing abilities. As we’ve seen before, the culmination of one branch of inquiry always just opens the door to a new branch, and never yields the wholesale change of state we anticipated. Nothing we’ve done in the past couple of hundred thousand years has truly changed everything, so I don’t see us doing anything in the future that would change everything, either.

No, I have the feeling that the only way to change everything is for something to be done to us, instead. Just imagining the encounter of humanity with an “other” implies a shift beyond the solipsism that has characterized our civilization since it was born. It augurs a reversal as big as the encounter of an individual with its offspring, or a creature with its creator. Even if it’s the result of something we’ve done, it’s now independent of us and our efforts.

To meet a neighbor, whether outer, inner, cyber- or hyper- spatial, finally turns us into an “us.” To encounter an other, whether a god, a ghost, a biological sibling, an independently evolved life form, or an emergent intelligence of our own creation, changes what it means to be human.

Our computers may never inform us that they are self-aware, extra-terrestrials may never broadcast a signal to our SETI dishes, and interdimensional creatures may never appear to those who aren’t taking psychedelics at the time — but if any of them did, it would change everything.


Read all the answers here.