Good Ancestors... But Who Are Our Descendants?
Jamais Cascio
2007-02-09

The two-day Good Ancestor Principle workshop focused primarily upon teasing out just what it would mean to be a good ancestor, and a bit upon exploring various ways of making sure the Earth inherited by our descendants is better than the Earth we inherited. But a surprisingly large part of the conversation covered a question that is at once unexpected and entirely relevant: just who will our descendants be?

The baseline assumption, not unreasonably, was that our descendants will be people like us, individuals living deep within the "human condition" of pain, love, family, death, and so forth; as a result, the "better ancestors" question inevitably focuses upon the external world of politics, warfare, the global environment, poverty, and the like (essentially, the WorldChanging arena). Some participants suggested a more radical vision, of populations with genetic enhancements including extreme longevity. Sadly, this part of the conversation never managed to get much past the tired "how will the Enhanced abuse the Normals" tropes, so we never really got to the "...and how can we be good ancestors to them?" question, other than to point out that we ourselves may be filling the role of "descendants" if we end up living for centuries.

Instead, we ran right past the "human++" scenario and straight into the Singularity -- and with Vernor Vinge in attendance, this is hardly surprising. (Not that Vinge is dead-certain that the Singularity is on its way; when he speaks next week at the Long Now seminar in San Francisco, he'll be covering what change looks like in a world where a Singularity doesn't happen.) This group of philosophers and writers really takes the Singularity concept seriously, and not for Kurzweilian "let's all get uploaded into Heaven 2.0" reasons. Their recurring question had a strong evolutionary theme: what niche is left for humans if machines become ascendant?

The conversation about the Singularity touched on more than science fiction stories, thanks to the attendance of Ben Goertzel, a cognitive science/computer science specialist who runs Novamente, a company with the express goal of creating the first Artificial General Intelligence (AGI). He has a working theory of how to do it, some early prototypes (which for now exist solely in virtual environments), and a small number of employees in the US and Brazil. He says that with the right funding, his team would be able to produce a working AGI system within ten years. With his current funding, it might take a bit longer.

According to Goertzel, the Singularity would happen fairly shortly after his AGI wakes up.

It was a surreal moment for me. I've been writing about the Singularity and related issues for years, and have spoken to a number of people who were working on related technologies or were major enthusiasts of the concept (the self-described "Singularitarians"). This was the first time I sat down with someone who was both. Goertzel is confident of his vision, and quite clear on the potential outcomes, many of which would be unpleasant for humankind. When I spoke to my wife mid-way through the first day, I semi-jokingly told her that I'd just met the man who was going to destroy the world.

Ben doesn't actually want that to happen, as far as I can tell, and has made a point, from the very beginning of his work, of considering the problem of giving super-intelligent machines a sense of ethics that would keep them from wanting to make choices harmful to humankind.

In 2002, he wrote:

...I would like an AGI to consider human beings as having a great deal of value. I would prefer, for instance, if the Earth did not become densely populated with AGI’s that feel about humans as most humans feel about cows and sheep – let alone as most humans feel about ants or bacteria, or instances of Microsoft Word. To see the potential problem here, consider the possibility of a future AGI whose intelligence is as much greater than ours, as ours is greater than that of a sheep or an ant or even a bacterium. Why should it value us particularly? Perhaps it can create creatures of our measly intelligence and complexity level without hardly any effort at all. In that case, can we really expect it to value us significantly? This is not an easy question.

Beyond my attachment to my own species, there are many general values that I hold, that I would like future AGI’s to hold. For example, I would like future AGI’s to place a significant value on:

  1. Diversity
  2. Life: relatively unintelligent life like trees and protozoa and bunnies, as well as intelligent life like humans and dolphins and other AGI’s.
  3. The generation of new pattern (on “creation” and “creativity” broadly conceived)
  4. The preservation of existing structures and systems
  5. The happiness of other intelligent or living systems (“compassion”)
  6. The happiness and continued existence of humans

(From his essay "Thoughts on AI Morality," in which he quotes both Ray Kurzweil and Jello Biafra.)

The issue of how to give AGIs a sense of empathy towards humans consumed a major part of the Good Ancestor Principle workshop discussion. The participants quickly recognized that what this technology meant was the creation of a parallel line of descendants of humankind. In essence, the question of "how can we be better ancestors for our descendants" is answered in part by "making sure our other descendants are helpful, not harmful."

Ultimately, the notion of being good ancestors by reducing the chances that our descendants will be harmed appeared in nearly every attempt to answer Jonas Salk's challenge. It's a point that's both obvious and subtle. Of course we want to reduce the chances that our descendants will be harmed; the real challenge is figuring out just what we are doing today that runs counter to that desire. We don't always recognize the longer-term harm emerging from a short-term benefit. This goes back to an argument I've made time and again: the real problems we're facing in the 21st century are the long, slow threats. We need our solutions to have a long-term consciousness, too.

That strikes me as an important value for any intelligent being to hold, organic or otherwise.