The Global Brain, Existential Risks, and the Future of AGI


By Ben Goertzel
H Plus Magazine

Posted: Apr 15, 2011

The future of humanity involves a complex combination of technological, psychological, and social factors – and one of the difficulties we face in comprehending and crafting this future is that not many people or organizations are adept at handling all of these aspects.

Dr. Stephen Omohundro is one of the fortunate exceptions to this general pattern, and this is part of what gives his contributions to the futurist domain such a unique and refreshing twist.

Steve has a substantial pedigree and experience in the hard sciences, beginning with degrees in Mathematics and Physics from Stanford and a Ph.D. in Physics from U.C. Berkeley. He was a professor in the computer science department at the University of Illinois at Urbana-Champaign, cofounded the Center for Complex Systems Research, authored the book “Geometric Perturbation Theory in Physics”, designed the programming languages StarLisp and Sather, wrote the 3D graphics system for Mathematica, and built systems which learn to read lips, control robots, and induce grammars.  I’ve had some long and deep discussions with Steve about advanced artificial intelligence, covering both my own approach and his own unique AI designs.

But he has also developed considerable expertise and experience in understanding and advising human minds and systems.  Via his firm Self-Aware Systems, he has worked with clients using a variety of individual and organizational change processes including Rosenberg’s Non-Violent Communication, Gendlin’s Focusing, Travell’s Trigger Point Therapy, Bohm’s Dialogue, Beck’s Life Coaching, and Schwarz’s Internal Family Systems Therapy.

Steve’s papers and talks on the future of AI, society and technology – including The Wisdom of the Global Brain and Basic AI Drives – reflect this dual expertise in technological and human systems.  In this interview I was keen to mine his insights regarding the particular issue of the risks facing the human race as we move forward along the path of accelerating technological development.

Ben:

A host of individuals and organizations — Nick Bostrom, Bill Joy, the Lifeboat Foundation, the Singularity Institute, and the Millennium Project, to name just a few — have recently been raising the issue of the “existential risks” that advanced technologies may pose to the human race.  I know you’ve thought about the topic a fair bit as well, both from the standpoint of your own AI work and more broadly.  Could you share the broad outlines of your thinking in this regard?

Steve:

I don’t like the phrase “existential risk” for several reasons. It presupposes that we are clear about exactly what “existence” we are risking. Today, we have a clear understanding of what it means for an animal to die or a species to go extinct. But as new technologies allow us to change our genomes and our physical structures, it will become much less clear when we have lost something precious. Death and extinction become much more amorphous concepts in the presence of extensive self-modification.

It’s easy to identify our humanity with our individual physical form and our egoic minds. But in reality our physical form is an ecosystem: only 10% of our cells are human. And our minds are also ecosystems composed of interacting subpersonalities. And our humanity is as much in our relationships, interconnections, and culture as it is in our individual minds and bodies. The higher levels of organization are much more amorphous and changeable, and it will be hard to pin down when something precious is lost.

So, I believe the biggest “existential risk” is related to identifying the qualities that are most important to humanity and to ensuring that technological forces enhance those rather than eliminate them. Already today we see many instances where economic forces act to create “soulless” institutions that tend to commodify the human spirit rather than inspire and exalt it.

Some qualities that I see as precious and essentially human include: love, cooperation, humor, music, poetry, joy, sexuality, caring, art, creativity, curiosity, love of learning, story, friendship, family, children, etc. I am hopeful that our powerful new technologies will enhance these qualities. But I also worry that attempts to precisely quantify them may in fact destroy them. For example, the attempts to quantify performance in our schools using standardized testing have tended to inhibit our natural creativity and love of learning.

Perhaps the greatest challenge that will arise from new technologies will be to really understand ourselves and identify our deepest and most precious values.

Ben:

Yes….  After all, “humanity” is a moving target, and today’s humanity is not the same as the humanity of 500 or 5000 years ago, and humanity of 100 or 5000 years from now – assuming it continues to exist – will doubtless be something dramatically different.  But still there’s been a certain continuity throughout all these changes, and part of that doubtless is associated with the “fundamental human values” that you’re talking about.

Still, though, there’s something that nags at me here.  One could argue that none of these precious human qualities are practically definable in any abstract way, but that they only have meaning in the context of the totality of human mind and culture.  So that if we create a fundamentally nonhuman AGI that satisfies some abstracted notion of human “family” or “poetry”, it won’t really satisfy the essence of “family” or “poetry”.  Because the most important meaning of a human value doesn’t lie in some abstract characterization of it, but rather in the relation of that value to the total pattern of humanity.  In this case, the extent to which a fundamentally nonhuman AGI or cyborg or posthuman or whatever would truly demonstrate human values would be sorely limited.  I’m honestly not sure what I think about this train of thought.  I wonder what your reaction is.

Steve:

That’s a very interesting perspective! In fact it meshes well with a perspective I’ve been slowly coming to, which is to think of the totality of humanity and human culture as a kind of “global mind”. As you say, many of our individual values really only have meaning in the context of this greater whole. And perhaps it is this greater whole that we should be seeking to preserve and enhance. Each individual human lives only for a short time but the whole of humanity has a persistence and evolution beyond any individual. Perhaps our goal should be to create AGIs that integrate, preserve, and extend the “global human mind” rather than trying solely to mimic individual human minds and individual human values.

Ben:

Perhaps a good way to work toward this is to teach our nonhuman or posthuman descendants human values by example, and by embedding them in human culture so they absorb human values implicitly, like humans do.  In this case we don’t need to “quantify” or isolate our values to pass them along to these other sorts of minds….

Steve:

That sounds like a good idea. In each generation, the whole of human culture has had to pass through a new set of minds. It is therefore well adapted to being learned. Aspects which are not easily learnable are quickly eliminated. I’m fascinated by the process by which each human child must absorb the existing culture, discover his own values, and then find his own way to contribute. Philosophy and moral codes are attempts to codify and abstract the learnings from this process, but I think they are no substitute for living the experiential journey. AGIs which progress in this way may be much more organically integrated with human society and human nature. One challenging issue, though, is likely to be the mismatch of timescales. AGIs will probably rapidly increase in speed, and keeping their evolution fully integrated with human society may become a challenge.

Ben:

Yes, it’s been amazing to watch that learning process with my own 3 kids, as they grow up.

It’s great to see that you and I seem to have a fair bit of common understanding on these matters.  This reminds me, though, that a lot of people see these things very, very differently.  Which leads me to my next question: What do you think are the biggest misconceptions afoot, where existential risk is concerned?

Steve:

I don’t think the currently fashionable fears like global warming, ecosystem destruction, peak oil, etc. will turn out to be the most important issues. We can already see how emerging technologies could, in principle, deal with many of those problems. Much more challenging are the core issues of identity, which the general public hasn’t really even begun to consider. Current debates about stem cells, abortion, cloning, etc. are tiny precursors of the deeper issues we will need to explore. And we don’t really yet have a system for public discourse or decision making that is up to the task.

Ben:

Certainly a good point about public discourse and decision making systems.  The stupidity of most YouTube comments, and the politicized (in multiple senses) nature of the Wikipedia process, make clear that online discourse and decision-making both need a lot of work.  And that’s not even getting into the truly frightening tendency of the political system to reduce complex issues to oversimplified caricatures.

Given the difficulty we as a society currently have in talking about, or making policies about, things as relatively straightforward as health care reform or marijuana legalization or gun control, it’s hard to see how our society could coherently deal with issues related to, say, human-level AGI or genetic engineering of novel intelligent lifeforms!

For instance, the general public’s thinking about AGI seems heavily conditioned by science-fiction movies like Terminator 2, which clouds consideration of the deep and in some ways difficult issues that you see when you understand the technology a little better.  And we lack the systems needed to easily draw the general public into meaningful dialogues on these matters with the knowledgeable scientists and engineers.

So what’s the solution? Do you have any thoughts on what kind of system might work better?

Steve:

I think Wikipedia has had an enormous positive influence on the level of discourse in various areas. It’s no longer acceptable to plead ignorance of basic facts in a discussion. Other participants will just point to a Wikipedia entry. And the rise of intelligent bloggers with expertise in specific areas is also having an amazing impact. One example I’ve been following closely is the debate and discussion about various approaches to diet and nutrition.

A few years back, T. Colin Campbell’s “The China Study” was promoted as the most comprehensive study of nutrition, health, and diet ever conducted.  The book and the study had a huge influence on people’s thinking about health and diet. A few months ago, 22-year-old English major Denise Minger decided to reanalyze the data in the study and found that they did not support the original conclusions. She wrote about her discoveries on her blog and sparked an enormous discussion all over the health and diet blogosphere that dramatically shifted many people’s opinions. The full story can be heard in her interview.

It would have been impossible for her to have had that kind of impact just a few years ago. The rapidity with which incorrect ideas can be corrected and the ease with which many people can contribute to new understanding is just phenomenal. I expect that systems to formalize and enhance that kind of group thinking and inquiry will be created to make it even more productive.

Ben:

Yes, I see – that’s a powerful example.  The emerging Global Brain is gradually providing us the tools needed to communicate and collectively think about all the changes that are happening around and within us. But it’s not clear if the communication mechanisms are evolving fast enough to keep up with the changes we need to discuss and collectively digest….

On the theme of rapid changes, let me now ask you something a little different — about AGI….  I’m going to outline two somewhat caricaturish views on the topic and then probe your reaction to them!

First of all, one view on the future of AI and the Singularity is that there is an irreducible uncertainty attached to the creation of dramatically greater than human intelligence.  That is, in this view, there probably isn’t really any way to eliminate or drastically mitigate the existential risk involved in creating superhuman AGI. So, in this view, building superhuman AI is essentially plunging into the Great Unknown and swallowing the risk because of the potential reward.

On the other hand, an alternative view is that if we engineer and/or educate our AGI systems correctly, we can drastically mitigate the existential risk associated with superhuman AGI, and create a superhuman AGI that’s highly unlikely to pose an existential risk to humanity.

What are your thoughts on these two perspectives?

Steve:

I think that, at this point, we have tremendous leverage in choosing how we build the first intelligent machines and in choosing the social environment that they operate in. We can choose the goals of those early systems and those choices are likely to have a huge effect on the longer-term outcomes. I believe it is analogous to choosing the constitution for a country. We have seen that the choice of governing rules has an enormous effect on the quality of life and the economic productivity of a population.

Ben:

That’s an interesting analogy.  And an interesting twist on the analogy may be the observation that to have an effectively working socioeconomic system, you need both good governing rules and a culture oriented to interpreting and implementing the rules sensibly.  In some countries (China comes to mind, and the former Soviet Union) the rules as laid out formally are very, very different from what actually happens.  The reason I mention this is: I suspect that in practice, no matter how good the “rules” underlying an AGI system are, if the AGI is embedded in a problematic culture, then there’s a big risk of something going awry.  The quality of any set of rules supplied to guide an AGI is going to be highly dependent on the social context…

Steve:

Yes, I totally agree! The real rules are a combination of the explicit rules written in lawbooks and the implicit rules of the social context, which again highlights how important it is for AGIs to integrate smoothly into that context.

Ben:

One might argue that we should first fix some of the problems of our cultural psychology, before creating an AGI and supplying it with a reasonable ethical mindset and embedding it in our culture.  Because otherwise the “embedding in our culture” part could end up unintentionally turning the AGI to the dark side!!  Or on the other hand, maybe AGI could be initially implemented and deployed in such a way as to help us get over our communal psychological issues…. Any thoughts on this?

Steve:

Agreed!  Perhaps the best outcome would be technologies that first help us solve our communal psychological issues and then as they get smarter evolve with us in an integrated fashion.

Ben:

On the other hand, it’s not obvious to me that we’ll be able to proceed that way, because of the probability – in my view at any rate – that we’re going to need to rely on advanced AGI systems to protect us from other technological risks.

For instance, one approach that’s been suggested, in order to mitigate existential risks, is to create a sort of highly intelligent “AGI Nanny” or “Singularity Steward.”  This would be a roughly human-level AGI system without capability for dramatic self-modification, and with strong surveillance powers, given the task of watching everything that humans do and trying to ensure that nothing extraordinarily dangerous happens.  One could envision this as a quasi-permanent situation, or else as a temporary fix to be put into place while more research is done regarding how to launch a Singularity safely.

Any thoughts on this sort of AGI Nanny scenario?

Steve:

I think it’s clear that we will need a kind of “global immune system” to deal with inadvertent or intentional harm arising from powerful new technologies like biotechnology and nanotechnology. The challenge is to make protective systems powerful enough for safety but not so powerful that they themselves become a problem. I believe that advances in formal verification will enable us to produce systems with provable properties of this type. But I don’t believe this kind of system on its own will be sufficient to deal with the deeper issues of preserving the human spirit.
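To make the formal-verification point a bit more concrete, here is a minimal sketch of the simplest such technique: exhaustively checking that every reachable state of a small system satisfies a safety invariant. Everything in it (the toy agent, the resource cap, the guard) is invented purely for illustration and is not drawn from Steve’s own systems; real tools such as model checkers and proof assistants do this over vastly richer state spaces.

# A minimal sketch of exhaustive state-space checking in Python. We model a
# hypothetical agent that may request or release one unit of a resource, and
# verify that no reachable state ever exceeds a hard cap (a safety property).
from collections import deque

CAP = 5  # hypothetical hard resource cap

def successors(state):
    """All states reachable in one step from 'state' (units allocated)."""
    nxt = []
    if state + 1 <= CAP:   # the guard whose adequacy we are verifying
        nxt.append(state + 1)
    if state > 0:
        nxt.append(state - 1)
    return nxt

def check_invariant(initial, invariant):
    """Breadth-first search of every reachable state; report any violation."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            return False, s            # counterexample found
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return True, None                  # property holds in all reachable states

ok, witness = check_invariant(0, lambda s: s <= CAP)
print("safety property holds in every reachable state" if ok
      else "violated at state %d" % witness)

The guard inside successors() is the design element being verified; delete it and the search immediately reports a counterexample state, which is exactly how such tools surface design mistakes before deployment.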

Ben:

What about the “one AGI versus many” issue?  One proposal that’s been suggested, to mitigate the potential existential risk of human-level or superhuman AGIs, is to create a community of AGIs and have them interact with each other, comprising a society with its own policing mechanisms and social norms and so forth.  The different AGIs would then keep each other in line.  A “social safety net” so to speak.

Steve:

I’m much more drawn to “ecosystem” approaches which involve many systems of different types interacting with one another in such a way that each acts to preserve the values we care about. I think that alternative singleton “dictatorship” approaches could also work but they feel much more fragile to me in that design mistakes might become rapidly irreversible.  One approach to limiting the power of individuals in an ecosystem is to limit the amount of matter and free energy they may use while allowing them freedom within those bounds. A challenge to that kind of constraint is the formation of coalitions of small agents that act together to overthrow the overall structure. But if we build agents that want to cooperate in a defined social structure, then I believe the system can be much more stable. I think we need much more research into the space of possible social organizations and their game theoretic consequences.
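As a toy illustration of this ecosystem approach (every strategy, payoff, and bound below is invented for the sketch, not taken from Steve’s research), one can simulate agents that may hold at most a fixed amount of energy, pay an upkeep cost each round, and interact through an iterated exchange game:

# A toy sketch of an "ecosystem" of resource-bounded agents in Python.
import itertools

ENERGY_CAP = 50   # hypothetical hard bound on the energy any agent may hold
UPKEEP = 4        # energy each agent must spend per round simply to persist

# Prisoner's-dilemma-style payoffs: (my gain, their gain) per (my move, their move).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opp_last):
    """Cooperate at first, then mirror the opponent's previous move."""
    return "C" if opp_last in (None, "C") else "D"

def always_defect(opp_last):
    return "D"

class Agent:
    def __init__(self, name, strategy):
        self.name, self.strategy, self.energy = name, strategy, 20

agents = [Agent("coop-1", tit_for_tat), Agent("coop-2", tit_for_tat),
          Agent("defector", always_defect)]
memory = {}  # memory[(me, them)] = the opponent's last move against me

for _ in range(30):
    for a, b in itertools.combinations(agents, 2):
        ma = a.strategy(memory.get((a.name, b.name)))
        mb = b.strategy(memory.get((b.name, a.name)))
        pa, pb = PAYOFF[(ma, mb)]
        # The ecosystem constraint: no agent may hold more than ENERGY_CAP.
        a.energy = min(ENERGY_CAP, a.energy + pa)
        b.energy = min(ENERGY_CAP, b.energy + pb)
        memory[(a.name, b.name)], memory[(b.name, a.name)] = mb, ma
    for ag in agents:
        ag.energy = max(0, ag.energy - UPKEEP)

for ag in agents:
    print("%s: %d energy after 30 rounds" % (ag.name, ag.energy))

In this run the two reciprocating cooperators settle at a stable energy level while the unconditional defector starves to zero, a crude version of the stability Steve describes; the cap is the mechanism that keeps any single agent from accumulating unbounded resources.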

Ben:

Finally – bringing the dialogue back to the practical and near-term – I wonder what you think society could be doing now to better mitigate existential risks … from AGI or from other sources?

Steve:

Much more study of social systems and their properties, better systems for public discourse and decision making, deeper inquiry into human values, improvements in formal verification of properties in computational systems.

Ben:

That’s certainly sobering to consider, given the minimal amount of societal resources currently allocated to such things, as opposed to, for example, the creation of weapons systems, better laptop screens or chocolaty-er chocolates!

To sum up, it seems one key element of your perspective is the importance of deeper collective (and individual) self-understanding – deeper intuitive and intellectual understanding of the essence of humanity.  What is humanity, that it might be preserved as technology advances and wreaks its transformative impacts?  Another key element is your view that social networks of advanced AGIs are more likely to help humanity grow and preserve its core values than isolated AGI systems.  And then there’s your focus on the wisdom of the global brain.  And clearly there are multiple connections between these elements, for instance a focus on the way ethical, aesthetic, intellectual and other values emerge from social interactions between minds.  It’s a lot to think about … but fortunately none of us has to figure it out on our own!


Ben Goertzel Ph.D. is a fellow of the IEET, and founder and CEO of two computer science firms, Novamente and Biomind, and of the non-profit Artificial General Intelligence Research Institute (agiri.org).


COMMENTS


Ben:

Thank you for the interview and for the attention to existential risks.

Although this particular piece was not focused specifically on education, I was left wondering, “how do we get there [the best outcome] from here [simplified caricatures]?”  Yes, access to au courant knowledge has been facilitated (albeit in an ad hoc, tacking manner) by the emergence of Wikipedia, blogging, and the diligence of fact-checkers who refuse to accept inherited knowledge, as in your example of Ms. Minger.  But we are still at a concrete impasse in which bricks-and-mortar schools and traditional methods continue to make up an educational system that, as Heinz von Foerster wrote, is “geared to generate predictable citizens [and to] amputate the bothersome internal states which generate unpredictability and novelty.”  While Omohundro talked about “limiting the power of individuals in an ecosystem,” the quandary is that, in order to proceed, we actually first need individuals (wise, informed individuals) to exert the power that will alter the educational landscape.

I know, Ben, that last year you contributed an essay to IEET on “The End of Education” in which you postulated that “education wants to be free . . . free of schools and traditional educational methodology.”  Nevertheless it strikes me as Pollyanna-ish to expect the transfer from formal education to a ubiquitous learning model based on interactivity and connectivity to happen seamlessly and without intervention.  Such an effort, here in the U.S., is going to require strong political will and a good deal of sturdy shoulders and backs to push the wheels out of the mire.

As you mentioned in your conversation with Steve, “It’s not clear if our communication mechanisms are evolving fast enough to keep up with the changes . . .”  The communication mechanisms and informal learning channels (macro-process issues) that you have identified are key, but the devil is ever in the details: that means elaborating concrete steps to move from received ideas about fixed-time-and-place education to extracting the potential of IT and interface design to enmesh education into daily life. How can an informed U.S. public best influence its political and institutional leaders to take seriously the shift toward proliferation of intelligence as a matter of education policy?  What is the best stepwise manner to engage in a transformation from classroom learning to ubiquitous learning? How much fine control can an informed public hope to exert upon ever-more intelligent and interactive systems in order for these to dovetail with human aspirations?

I am not widely read enough on this to know if anyone has ventured an alternative blueprint based on transhumanist/Cosmist principles that would identify the way to begin jettisoning the many sclerotic elements from our test-battery-and-homework approach; embracing an interactive model that emphasizes mobile tools and online libraries; as well as addressing the significant technical, normative and ethical changes ahead.

Obviously such a project asks a lot and cannot be solved by a lone individual.  But the transition, if it is to bring along the greatest number of citizens and not leave in its wake the bulk of those currently obliged to our K-8 system, will require proposals that are highly detailed while being mindful of, as you pointed out, social context and the “problems in our cultural psychology.”

Pardon my editorializing, but this, to my mind, has everything to do with understanding existential risk.  When you asked “What is humanity that it might be preserved as technology advances . . .” it strikes me that both the preservation and the advance are going to be in trouble without citizen participation.  And the only way to ensure that is to reform education upfront and not wait for reforms to be implemented willy-nilly.





“But in reality our physical form is an ecosystem, only 10% of our cells are human.”
What does this mean? Does it mean the other 90% we share with other species, is it a reference to being hosts for bacteria, or…? Could you please clarify?




