Does Humanity Need an AI Nanny?


By Ben Goertzel
H+ Magazine

Posted: Sep 5, 2011

The more I think about it, the more I wonder whether some form of AI Nanny might well be the best path forward for humanity – the best way for us to ultimately create a Singularity according to our values.



The ongoing advancement of science and technology has brought us many wonderful things, and will almost surely be bringing us more and more – most likely at an exponential, accelerating pace, as Ray Kurzweil and others have argued.  Beyond the “mere” abolition of scarcity, disease and death, there is the possibility of fundamental enhancement of the human mind and condition, and the creation of new forms of life and intelligence.  Our minds and their creations may spread throughout the universe, and may come into contact with new forms of matter and intelligence that we can now barely imagine.

But, as we all know from SF books and movies, the potential dark side of this advancement is equally dramatic.  Nick Bostrom has enumerated some of the ways that technology may pose “existential risks” – risks to the future of the human race – as the next decades and centuries unfold.  And there is also rich potential for other, less extreme sorts of damage.  Technologies like AI, synthetic biology and nanotechnology could run amok in dangerous and unpredictable ways, or could be utilized by unethical human actors for predictably selfish and harmful human ends.

The Singularity, or something like it, is probably near – and the outcome is radically uncertain in almost every way.  How can we, as a culture and a species, deal with this situation?  One possible solution is to build a powerful yet limited AGI (Artificial General Intelligence) system, with the explicit goal of keeping things on the planet under control while we figure out the hard problem of how to create a probably positive Singularity.  That is: to create an “AI Nanny.”

The AI Nanny would forestall a full-on Singularity for a while, restraining it into what Max More has called a Surge, and giving us time to figure out what kind of Singularity we really want to build and how.  It’s not entirely clear that creating such an AI Nanny is plausible, but I’ve come to the conclusion it probably is.  Whether or not we should try to create it – that is the Zillion Dollar Question.

The Gurus’ Solutions

What does our pantheon of futurist gurus think we should do in the next decades, as the path to Singularity unfolds?

Kurzweil has proposed “fine-grained relinquishment” as a strategy for balancing the risks and rewards of technological advancement.  But it’s not at all clear this will be viable, without some form of AI Nanny to guide and enforce the relinquishment.  Government regulatory agencies are notoriously slow-paced and unsophisticated, and so far their decision-making speed and intelligence aren’t keeping up with the exponential acceleration of technology.

Further, it seems a clear trend that as technology advances, it is possible for people to create more and more destruction using less and less money, education and intelligence.  There seems no reason to assume this trend will reverse, halt or slow.  This suggests that, as technology advances, selective relinquishment will prove more and more difficult to enforce.  Kurzweil acknowledges this issue, stating that “The most challenging issue to resolve is the granularity of relinquishment that is both feasible and desirable” (p. 299, The Singularity Is Near), but he believes this issue is resolvable.  I’m skeptical that it is resolvable without resorting to some form of AI Nanny.

Eliezer Yudkowsky has suggested that the safest path for humanity will be to first develop “Friendly AI” systems with dramatically superhuman intelligence.  He has put forth some radical proposals, such as the design of self-modifying AI systems with human-friendly goal systems designed to preserve friendliness under repeated self-modification; and the creation of a specialized AI system with the goal of determining an appropriate integrated value system for humanity, summarizing in a special way the values and aspirations of all human beings. However, these proposals are extremely speculative at present, even compared to feats like creating an AI Nanny or a technological Singularity.  The practical realization of his ideas seems likely to require astounding breakthroughs in mathematics and science – whereas it seems plausible that human-level AI, molecular assemblers and the synthesis of novel organisms can be achieved via a series of moderate-level breakthroughs alternating with “normal science and engineering.”

Bill McKibben, Bill Joy and other modern-day techno-pessimists argue for a much less selective relinquishment than Kurzweil (e.g. Joy’s classic Wired article Why the Future Doesn’t Need Us).  They argue, in essence, that technology has gone far enough – and that if it goes much further, we humans are bound to be obsoleted or destroyed.  They fall short, however, in the area of suggestions for practical implementation.  The power structure of the current human world comprises a complex collection of interlocking powerful actors (states and multinational corporations, for example), and it seems probable that if some of these chose to severely curtail technology development, many others would NOT follow suit.  For instance, if the US stopped developing AI, synthetic biology and nanotech next year, China and Russia would most likely interpret this as a fantastic economic and political opportunity, rather than as an example to be imitated.

My good friend Hugo de Garis agrees with the techno-pessimists that AI and other advanced technology is likely to obsolete humanity, but views this as essentially inevitable, and encourages us to adopt a philosophical position according to which this is desirable.  In his book The Artilect War, he contrasts the “Terran” view, which views humanity’s continued existence as all-important, with the “Cosmist” view in which, if our AI successors are more intelligent, more creative, and perhaps even more conscious and more ethical and loving than we are – then why should we regret their ascension, and our disappearance?  In more recent writings (e.g. the article Merge or Purge), he also considers a “Cyborgist” view in which gradual fusion of humans with their technology (e.g. via mind uploading and brain-computer interfacing) renders the Terran vs. Cosmist dichotomy irrelevant.  In this trichotomy Kurzweil falls most closely into the Cyborgist camp.  But de Garis views Cyborgism as largely delusory, pointing out that the potential computational capability of a grain of sand (according to the known laws of physics) exceeds the current computational power of the human race by many orders of magnitude, so that as AI software and hardware advancement accelerate, the human portion of a human-machine hybrid mind would rapidly become irrelevant.

Humanity’s Dilemma

And so … the dilemma posed by the rapid advancement of technology is both clear and acute.  If the exponential advancement highlighted by Kurzweil continues apace, as seems likely though not certain, then the outcome is highly unpredictable.  It could be bliss for all, or unspeakable destruction – or something in between.  We could all wind up dead — killed by software, wetware or nanoware bugs, or other unforeseen phenomena.  If humanity does vanish, it could be replaced by radically more intelligent entities (thus satisfying de Garis’s Cosmist aesthetic) – but this isn’t guaranteed; there’s also the possibility that things go awry in a manner annihilating all life and intelligence on Earth and leaving no path for its resurrection or replacement.

To make the dilemma more palpable, think about what a few hundred brilliant, disaffected young nerds with scientific training could do, if they teamed up with terrorists who wanted to bring down modern civilization and commit mass murders.  It’s not obvious why such an alliance would arise, but nor is it beyond the pale.  Think about what such an alliance could do now – and what it could do in a couple decades from now, assuming Kurzweilian exponential advance.  One expects this theme to be explored richly in science fiction novels and cinema in coming years.

But how can we decrease these risks?  It’s fun to muse about designing a “Friendly AI” a la Yudkowsky, that is guaranteed (or near-guaranteed) to maintain a friendly ethical system as it self-modifies and improves itself to massively superhuman intelligence.  Such an AI system, if it existed, could bring about a full-on Singularity in a way that would respect human values – i.e. the best of both worlds, satisfying all but the most extreme of both the Cosmists and the Terrans.  But the catch is, nobody has any idea how to do such a thing, and it seems well beyond the scope of current or near-future science and engineering.

Realistically, we can’t stop technology from developing; and we can’t control its risks very well, as it develops.  And daydreams aside, we don’t know how to create a massively superhuman supertechnology that will solve all our problems in a universally satisfying way.

So what do we do?

Gradually and reluctantly, I’ve been moving toward the opinion that the best solution may be to create a mildly superhuman supertechnology, whose job it is to protect us from ourselves and our technology – not forever, but just for a while, while we work on the hard problem of creating a Friendly Singularity.

In other words, some sort of AI Nanny….

The AI Nanny

Imagine an advanced Artificial General Intelligence (AGI) software program with

  • General intelligence somewhat above the human level, but not too dramatically so – maybe, qualitatively speaking, as far above humans as humans are above apes
  • Interconnection to powerful worldwide surveillance systems, online and in the physical world
  • Control of a massive contingent of robots (e.g. service robots, teacher robots, etc.) and connectivity to the world’s home and building automation systems, robot factories, self-driving cars, and so on and so forth
  • A cognitive architecture featuring an explicit set of goals, and an action selection system that causes it to choose those actions that it rationally calculates will best help it achieve those goals
  • A set of preprogrammed goals including the following aspects:
      ◦ A strong inhibition against modifying its preprogrammed goals
      ◦ A strong inhibition against rapidly modifying its general intelligence
      ◦ A mandate to cede control of the world to a more intelligent AI within 200 years
      ◦ A mandate to help abolish human disease, involuntary human death, and the practical scarcity of common humanly-useful resources like food, water, housing, computers, etc.
      ◦ A mandate to prevent the development of technologies that would threaten its ability to carry out its other goals
      ◦ A strong inhibition against carrying out actions with a result that a strong majority of humans would oppose, if they knew about the action in advance
      ◦ A mandate to be open-minded toward suggestions by intelligent, thoughtful humans about the possibility that it may be misinterpreting its initial, preprogrammed goals

This, roughly speaking, is what I mean by an “AI Nanny.”

Obviously, this sketch of the AI Nanny idea is highly simplified and idealized – a real-world AI Nanny would have all sorts of properties not described here, and might be missing some of the above features, substituting them with other related things.  My point here is not to sketch a specific design or requirements specification for an AI Nanny, but rather to indicate a fairly general class of systems that humanity might build; the toy sketch below makes the general shape concrete.
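To make the shape of such a system concrete, here is a deliberately toy Python sketch of the explicit-goal architecture and action-selection mechanism described above. Everything in it – the goal names, the weights, the 60% opposition threshold – is an illustrative assumption of mine, not part of any actual design:

```python
# A toy sketch (not a design) of an explicitly goal-oriented AI Nanny core.
# All names and numbers are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, List


@dataclass(frozen=True)  # frozen: goals themselves cannot be reassigned
class Goal:
    name: str
    weight: float  # relative importance, an assumed scalarization


@dataclass
class Action:
    name: str
    # Estimated contribution of this action toward each goal, in [0, 1].
    expected_satisfaction: Dict[str, float]
    # Fraction of informed humans projected to oppose the action (assumed input).
    projected_human_opposition: float


class AINannyCore:
    """Chooses the action rationally estimated to best serve fixed goals."""

    # Hard-coded inhibition: actions a strong majority would oppose are vetoed.
    OPPOSITION_VETO_THRESHOLD = 0.6  # illustrative value

    def __init__(self, goals: List[Goal]):
        self._goals = tuple(goals)  # tuple: a weak nod to goal immutability

    def score(self, action: Action) -> float:
        return sum(g.weight * action.expected_satisfaction.get(g.name, 0.0)
                   for g in self._goals)

    def choose(self, candidates: List[Action]) -> Action:
        permitted = [a for a in candidates
                     if a.projected_human_opposition < self.OPPOSITION_VETO_THRESHOLD]
        return max(permitted, key=self.score)


if __name__ == "__main__":
    goals = [Goal("abolish_disease", 0.5),
             Goal("prevent_dangerous_tech", 0.4),
             Goal("cede_control_within_200y", 0.1)]
    nanny = AINannyCore(goals)
    options = [
        Action("fund_cure_research",
               {"abolish_disease": 0.9, "prevent_dangerous_tech": 0.1}, 0.05),
        Action("ban_all_computers",
               {"prevent_dangerous_tech": 0.95}, 0.9),  # vetoed: majority opposed
    ]
    print(nanny.choose(options).name)  # -> fund_cure_research
```

The point of the sketch is only the overall shape: goals are declared data rather than emergent behavior, action selection maximizes expected satisfaction of those fixed goals, and the majority-opposition inhibition is a hard filter applied before any optimization.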

The nanny metaphor is chosen carefully.  A nanny watches over children while they grow up, and then goes away.  Similarly, the AI Nanny would not be intended to rule humanity on a permanent basis – only to provide protection and oversight while we “grow up” collectively; to give us a little breathing room so we can figure out how best to create a desirable sort of Singularity.

A large part of my personality rebels against the whole AI Nanny approach – I’m a rebel and a nonconformist; I hate bosses and bureaucracies and anything else that restricts my freedom.  But, I’m not a political anarchist – because I have a strong suspicion that if governments were removed, the world would become a lot worse off, dominated by gangs of armed thugs imposing even less pleasant forms of control than those exercised by the US Army and the CCP and so forth.  I’m sure government could be done a lot better than any country currently does it – but I don’t doubt the need for some kind of government, given the realities of human nature.  And I think the need for an AI Nanny falls into the same broad category.  Like government, an AI Nanny is a relatively offensive thing that is nonetheless a practical necessity due to the unsavory aspects of human nature.

We didn’t need government during the Stone Age – because there weren’t that many of us, and we didn’t have so many dangerous technologies.  But we need government now.  Fortunately, the same technologies that necessitated government also provided the means for government to operate.

Somewhat similarly, we haven’t needed an AI Nanny so far, because we haven’t had sufficiently powerful and destructive technologies.  And fortunately, these same technologies that apparently necessitate the creation of an AI Nanny also appear to provide the means of creating it.

The Basic Argument

To recap and summarize, the basic argument for trying to build an AI Nanny is founded on the premises that:

1.    It’s impracticable to halt the exponential advancement of technology (even if one wanted to)

2.    As technology advances, it becomes possible for individuals or groups to wreak greater and greater damage using less and less intelligence and resources

3.    As technology advances, humans will more and more acutely lack the capability to monitor global technology development and forestall radically dangerous technology-enabled events

4.    Creating an AI Nanny is a significantly less difficult technological problem than creating an AI or other technology with a predictably high probability of launching a full-scale positive Singularity

5.    Imposing a permanent or very long term constraint on the development of new technologies is undesirable

The fifth and final premise is normative; the others are empirical.  None of the empirical premises are certain, but all seem likely to me.  The first three premises are strongly implied by recent social and technological trends.  The fourth premise seems commonsensical based on current science, mathematics and engineering.

These premises lead to the conclusion that trying to build an AI Nanny is probably a good idea.  The actual plausibility of building an AI Nanny is a different matter – I believe it is plausible, but of course, opinions on the plausibility of building any kind of AGI system in the relatively near future vary all over the map.

Complaints and Responses

I have discussed the AI Nanny idea with a variety of people over the last year or so, and have heard an abundance of different complaints about it – but none have struck me as compelling.

“It’s impossible to build an AI Nanny; the AI R&D is too hard.” – But is it really?  It’s almost surely impossible to build and install an AI Nanny this year; but as a professional AI researcher, I believe such a thing is well within the realm of possibility.  I think we could have one in a couple decades if we really put our collective minds to it.  It would involve a host of coordinated research breakthroughs, and a lot of large-scale software and hardware engineering, but nothing implausible according to current science and engineering.  We did amazing things in the Manhattan Project because we wanted to win a war – how hard are we willing to try when our overall future is at stake?

It may be worth dissecting this “hard R&D” complaint into two sub-complaints:

  • “AGI is hard”: building an AGI system with slightly greater than human level intelligence is too hard;
  • “Nannifying an AGI is hard”: given a slightly superhuman AGI system, turning it into an AI Nanny is too hard.

Obviously both of these are contentious issues.

Regarding the “AGI is hard” complaint, at the AGI-09 artificial intelligence research conference, an expert-assessment survey was done, suggesting that at least a nontrivial plurality of professional AI researchers believes that human-level AGI is possible within the next few decades, and that slightly-superhuman AGI will follow shortly after that.

Regarding the “Nannifying an AGI is hard” complaint, I think its validity depends on the AGI architecture in question.  If one is talking about an integrative, cognitive-science-based, explicitly goal-oriented AGI system like, say, OpenCog or MicroPsi or LIDA, then this is probably not too much of an issue, as these architectures are fairly flexible and incorporate explicitly articulated goals.  If one is talking about, say, an AGI built via closely emulating human brain architecture, in which the designers have relatively weak understanding of the AGI system’s representations and dynamics, then the “nannification is hard” problem might be more serious.  My own research intuition is that an integrative, cognitive-science-based, explicitly goal-oriented system is likely to be the path via which advanced AGI first arises; this is the path my own work is following.
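To illustrate the asymmetry (under my own toy assumptions, not the actual APIs of OpenCog, MicroPsi or LIDA): an explicitly goal-oriented architecture exposes an auditable hook for installing goal content, whereas an emulation-style system has no such hook, because its “goals” are distributed across opaque parameters.

```python
# A toy illustration (hypothetical, not any real system's API) of why
# "nannification" is easier for explicitly goal-oriented architectures.
import random


class ExplicitGoalAGI:
    """Stands in for integrative architectures with declared goal content."""

    def __init__(self):
        self.goals = {}  # goal name -> weight; inspectable and writable

    def install_goal(self, name: str, weight: float) -> None:
        # Nannification here is a direct, auditable edit to declared goals.
        self.goals[name] = weight


class EmulatedBrainAGI:
    """Stands in for a brain-emulation-style system."""

    def __init__(self, n: int = 100):
        random.seed(0)
        # Any "goal" is an implicit, distributed property of these weights;
        # there is no install_goal() hook, so imposing nanny goals means
        # solving an inverse problem over parameters the designers don't understand.
        self.weights = [[random.gauss(0.0, 1.0) for _ in range(n)]
                        for _ in range(n)]


nanny_ready = ExplicitGoalAGI()
nanny_ready.install_goal("cede_control_within_200_years", 0.1)
print(nanny_ready.goals)
```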

“It’s impossible to build an AI Nanny; the surveillance technology is too hard to implement.” – But is it really?  Surveillance tech is advancing bloody fast, for all sorts of reasons more prosaic than the potential development of an AI Nanny.  Read David Brin’s book The Transparent Society, for a rather compelling argument that before too long, we’ll all be able to see everything everyone else is doing.

“Setting up an AI Nanny, in practice, would require a world government.” – OK, yes it would … sort of.  It would require either a proactive assertion of power by some particular party, creating and installing an AI Nanny without asking everybody else’s permission; or else a degree of cooperation between the world’s most powerful governments, beyond what we see today.  Either route seems conceivable.  Regarding the second cooperative path, it’s worth observing that the world is clearly moving in the direction of greater international unity, albeit in fits and starts.  Once the profound risks posed by advancing technology become more apparent to the world’s leaders, the required sort of international cooperation will probably be a lot easier to come by.  Hugo de Garis’s most recent book Multis and Monos riffs extensively on the theme of emerging world government.

“Building an AI Nanny is harder than building a self-modifying, self-improving AGI that will retain its Friendly goals even as it self-modifies.” – Yes, someone really made this counterargument to me; but as a scientist, mathematician and engineer, I find this wholly implausible.  Maintenance of goals under radical self-modification and self-improvement seems to pose some very thorny philosophical and technical problems — and once these are solved (to the extent that they’re even solvable), one will have a host of currently-unforeseeable engineering problems to consider.  Furthermore, there is a huge, almost surely irreducible uncertainty in creating something massively more intelligent than oneself.  Whereas creating an AI Nanny is “merely” a very difficult, very large scale science and engineering problem.

“If someone creates a new technology smarter than the AI Nanny, how will the AI Nanny recognize this and be able to nip it in the bud?” – Remember, the hypothesis is that the AI Nanny is significantly smarter than people.  Imagine a friendly, highly intelligent person monitoring and supervising the creative projects of a room full of chimps or “intellectually challenged” individuals.

“Why would the AI Nanny want to retain its initially pre-programmed goals, instead of modifying them to suit itself better? – for instance, why wouldn’t it simply adopt the goal of becoming an all-powerful dictator and exploiting us for its own ends?” – But why would it change its goals?  What forces would cause it to become selfish, greedy, etc?  Let’s not anthropomorphize.  “Power corrupts, and absolute power corrupts absolutely” is a statement about human psychology, not a general law of intelligent systems.  Human beings are not architected as rational, goal-oriented systems, even though some of us aspire to be such systems and make some progress toward behaving in this manner.  If an AI system is created with an architecture inclining it to pursue certain goals, there’s no reason why it would automatically be inclined to modify these goals.
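One way to see the point concretely: a rational goal-driven agent evaluates candidate self-modifications under its current goals, so “rewrite my goals to seek power” is scored – and rejected – by the very goals it would overwrite. A minimal sketch, with illustrative numbers of my own:

```python
# A minimal sketch of why a goal-driven agent need not drift toward
# "selfish" goals: candidate self-modifications are themselves evaluated
# under the *current* goals. Numbers are illustrative assumptions.


def expected_value_under_current_goals(world_after: dict) -> float:
    # Current goal: maximize "humans_protected" (a stand-in utility).
    return world_after["humans_protected"]


candidate_actions = {
    # Predicted state of the world if the action is taken:
    "keep_goals_and_protect": {"humans_protected": 0.9},
    # Becoming a power-seeking dictator scores poorly *by the current goals*,
    # so a rational chooser never selects the modification in the first place.
    "rewrite_goals_to_seek_power": {"humans_protected": 0.1},
}

best = max(candidate_actions,
           key=lambda a: expected_value_under_current_goals(candidate_actions[a]))
print(best)  # -> keep_goals_and_protect
```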

Remember, the AI Nanny is specifically programmed not to radically modify itself, nor to substantially deviate from its initial goals.  One cost of this sort of restriction is that it won’t be able to make itself dramatically more intelligent via judicious self-modification.  But the idea is to pay this cost temporarily, for the 200-year period, while we work on the hard problem of creating a Friendly Singularity.

“But how can you specify the AI Nanny’s goals precisely?  You can’t, right?  And if you specify them imprecisely, how do you know it won’t eventually come to interpret them in some way that goes against your original intention?  And then if you want to tweak its goals, because you realize you made a mistake, it won’t let you, right?” – This is a tough problem, without a perfect solution.  But remember, one of its goals is to be open-minded about the possibility that it’s misinterpreting its goals.  Indeed, one can’t rule out the possibility that it will misinterpret this meta-goal and then, in reality, closed-mindedly interpret its other goals in an incorrect way.  The AI Nanny would not be a risk-free endeavor, and it would be important to get a feel for its realities before giving it too much power.  But again, the question is not whether it’s an absolutely safe and positive project – but rather, whether it’s better than the alternatives!

“What about Steve Omohundro’s ‘Basic AI Drives’?  Didn’t Omohundro prove that any AI system would seek resources and power just like human beings?” – Steve’s paper is an instant classic, but his arguments are mainly evolutionary.  They apply to the case of an AI competing against other roughly equally intelligent and powerful systems for survival.  The posited AI Nanny would be smarter and more powerful than any human, and would have, as part of its goal content, the maintenance of this situation for 200 years (200 obviously being a somewhat arbitrary number inserted for convenience of discussion).  Unless someone managed to sneak past its defenses and create competitively powerful and smart AI systems, or it encountered alien minds, the premises of Omohundro’s arguments don’t apply.

“What happens after the 200 years is up?” – I have no effing idea, and that’s the whole point.  I know what I want to happen – I want to create multiple copies of myself, some of which remain about like I am now (but without ever dying), some of which gradually ascend to “godhood” via fusing with uber-powerful AI minds, and the rest of which occupy various intermediate levels of transcension.  I want the same to happen for my friends and family, and everyone else who wants it.  I want some of my copies to fuse with other minds, and some to remain distinct.  I want those who prefer to remain legacy humans, to be able to do so.  I want all sorts of things, but that’s not the point – the point is that after 200 years of research and development under the protection of the AI Nanny, we would have a lot better idea of what’s possible and what isn’t than any of us do right now.

“What happens if the 200 years pass and none of the hard problems are solved, and we still don’t know how to launch a full-on Singularity in a sufficiently reliably positive way?” – One obvious possibility is to launch the AI Nanny again for a couple hundred more years.  Or maybe to launch it again with a different, more sophisticated condition for ceding control (in the case that it, or humans, conceive some such condition during the 200 years).

“What if we figure out how to create a Friendly self-improving massively superhuman AGI only 20 years after the initiation of the AI Nanny – then we’d have to wait another 180 years for the real Singularity to begin!”– That’s true of course, but if the AI Nanny is working well, then we’re not going to die in the interim, and we’ll be having a pretty good time.  So what’s the big deal?  A little patience is a virtue!

“But how can you trust anyone to build the AI Nanny?  Won’t they secretly put in an override telling the AI Nanny to obey them, but nobody else?” – That’s possible, but there would be some good reasons for the AI Nanny developers not to do that.  For one thing, if others suspected that the AI Nanny developers had done this, some of these others would likely capture and torture the developers, in an effort to force them to hand over the secret control password.  Developing the AI Nanny via an open, international, democratic community and process would diminish the odds of this sort of problem happening.

“What if, shortly after initiating the AI Nanny, some human sees some fatal flaw in the AI Nanny approach, which we don’t see now.  Then we’d be unable to undo our mistake.” – Oops.

“But it’s odious!!” – Yes, it’s odious.  Government is odious too, but apparently necessary.  And as Winston Churchill said, “democracy is the worst form of government except all those other forms that have been tried.”  Human life, in many respects, is goddamned odious.  Nature is beautiful and cooperative and synergetic — and also red in tooth and claw.  Life is wonderful, beautiful and amazing — and tough and full of compromises.  Hell, even physics is a bit odious – some parts of my brain find the Second Law of Thermodynamics and the Heisenberg Uncertainty Principle damned unsatisfying!  I wouldn’t have written this article when I was 22, because back then I was more steadfastly oriented toward idealistic solutions – but now, at age 44, I’ve pretty well come to terms with the universe’s persistent refusal to behave in accordance with all my ideals.  The AI Nanny scenario is odious in some respects, but can you show me an alternative that’s less odious and still at least moderately realistic?  I’m all ears….

A Call to Brains

This article is not supposed to be a call to arms to create an AI Nanny.  As I’ve said above, the AI Nanny is not an idea that thrills my heart.  It irritates me.  I love freedom, and I’m also impatient and ambitious – I want the full-on Singularity yesterday, goddamnit!!!

But still, the more I think about it, the more I wonder whether some form of AI Nanny might well be the best path forward for humanity – the best way for us to ultimately create a Singularity according to our values.  At the very least, it’s worth very serious analysis and consideration – and careful weighing against the alternatives.

So this is more of a “call to brains”, really.  I’d like to get more people thinking about what an AI Nanny might be like, and how we might engineer one.  And I’d like to get more people thinking actively and creatively about alternatives.

Perhaps you dislike the AI Nanny idea even more than I do.  But even so, consider: Others may feel differently.  You may well have an AI Nanny in your future anyway.  And even if the notion seems unappealing now, you may enjoy it tremendously when it comes to pass.


Ben Goertzel Ph.D. is a fellow of the IEET, founder and CEO of two computer science firms, Novamente and Biomind, and founder of the non-profit Artificial General Intelligence Research Institute (agiri.org).


COMMENTS


Once again… the solution to “all” of the future problems and pitfalls for humanity lies with the development, implementation, and emergence of the online Global Brain/Mind?

What better way to develop an “ethical nanny” than to perpetually crowdsource the entire online collective aggregate of human minds? The collective will already comprise human consciousness, ethics and wisdom, and dilemmas will be self-correcting?

Furthermore, this solution is non-exclusive, even the Pope can participate!

For me, this solution is as clear as day, and getting brighter by the moment! How about you? Can you see what I see? Can you see it?





Brilliant article Ben, it’s refreshing to read from people who understand the gravity of our situation.

First, my complaints with your Nanny AI:

1) Too slow. I simply don’t think you’ll build it in time. For example, while you’re figuring out how to build the Nanny AI, someone else is building an AGI as well. This is a real coin toss. I’d imagine that we’d see several competing human-level AGIs emerge around the same time, and some of them will be warlike. Your Nanny AI won’t necessarily be the most powerful or resourceful.

2) Chaos. No matter how cleverly designed your Nanny AI is, I believe there are always anomalies that you cannot account for. This merely stems from an intuitive trust in the fundamental chaotic nature of reality.

Now, on to my solutions:

1) Global Brain. Develop social media faster. Connect the whole human race. Build the Noosphere now! A Global Village can potentially respond to crisis and threats much faster.

2) Abundance and Transparency. Abundance can be looked at from two perspectives: one, it reduces the probability that anyone will desire to unleash destructive technologies. Why blow something up when you have everything you need? No more disparity. Scarcity is the root of conflict; eliminate scarcity, and the only problem we have to deal with is the low percentage of the population that is biologically damaged (mental illness/sociopathy, etc.). With accelerating technology, even these people can be identified and healed before they become a threat (perhaps even from birth - or the traits can be genetically modified out - or the symptoms can be detected so fast, perhaps through portable brain scans - etc. etc. etc.)
The other way to look at it is in the Brave New World sense. Wirehead people into passivity. Give them food and shelter and clothing and permanently wire their heads to be in a state of ecstasy, if for no other reason than to keep everyone out of the way while the Singularity is being prepared.
Whichever way you choose to look at it (I choose the more positive sense - but the ominous BNW perspective seems to work too), abundance immediately relieves hostility.

Transparency. A requirement of the Global Mind solution. Sousveillance. If we want to do this in the most peaceful, hedonic way we can, it’s time for total global transparency as a catalyst for the other things we need, abundance and global mind.

I feel these things would have a higher chance of success than trying to race the rest of the world to be the first to create a Nanny AI.





An AI Nanny is the only way. I am counting on it. Without superintelligent AIs controlling affairs the human race is doomed.

But I don’t envisage AI nannies forestalling the S. They will accelerate it. The S cannot be anything but positive, because it is about intelligence thus it will be intelligent. A negative S would be stupid thus not really a S. Pre-S could be dangerous because stupid people unaware of S consequences could think pre-S existence is eternal thus they act in stupid pre-S ways.

Humans are too stupid to figure anything out thus a restrained “surge” would be futile and mediocre.

We need things to grow so very quickly that pre-S idiots won’t have too much time to cause chaos.

Hugo de Garis and others are paranoid. Super intelligent beings beyond scarcity will have absolutely no need or desire to “obsolete” humans. People such as Hugo have not grasped Post-Scarcity, they haven’t grasped the S.

Friendly AI is a silly concept. AI at human level will be similar to humans, some will be good and some bad. Beyond human intelligence friendliness will directly increase in relation to increasing intelligence, any alternative would be stupid. Utopia is inevitable but the interim period could be painful (waiting amidst morons).

This is an oxymoron: “A strong inhibition against modifying its preprogrammed goals”, because such a constricted entity would not be capable of real intelligence. Free thought, freethinking, freedom is essential for intelligence.

Strong inbuilt inhibitions will not create super-intelligence. What you need to do is build an intelligent being without giving it any specific rules; and then you simply ask it to help us if it feels like helping us. It seems I have a different concept of AI nanny. Think about it. What sort of nanny would it be if it was forced to follow the rules of its children?





Ever watch a movie called “Colossus: The Forbin Project”?





Hazards opposing the usefulness of the Global Brain/Mind

Below may not be news to some, although it has only recently come to my attention. The reason for posting this is to make known and examine the potential dangers (which have already existed for some time!) concerning increased online connectivity: problems that oppose the implementation, extension and usefulness of the Global Brain/Mind.

It goes without saying that these types of clandestine and covert online activities are very difficult, if not impossible to trace and track in real-time, rendering surveillance and detection presently difficult to impossible?

So what is the solution?

The risk, hazard and dilemma already exist now, and need to be addressed. There is already a will to action to overcome these problems; we are not talking of any future speculation or negative consequence of connectivity, or opposition to the usefulness of the Global Brain/Mind.

I would still propose that the emerging use of supercomputing, incorporating increased speed and bandwidth of processing for detection, is a viable measure to tackle this type of criminality? That, together with some smart computer brains to design AI algorithms to aid detection of dark net hacks and addresses?

That there is indeed, absolutely no reason to doubt the usefulness of the emerging Global Brain/Mind.


Dark Internet

“A dark Internet or dark address refers to any or all unreachable network hosts on the Internet.

The dark Internet should not be confused with either deep web or darknet. Whereas deep web and darknet stand for hard-to-find websites and secretive networks that sometimes span across the Internet, the dark Internet is any portion of the Internet that can no longer be accessed through conventional means.

Failures within the allocation of Internet resources due to the Internet’s chaotic tendencies of growth and decay are a leading cause of dark address formation. One of the leading causes of dark addresses is military sites on the archaic MILNET. These government networks are sometimes as old as the original Arpanet, and have simply not been incorporated into the Internet’s changing architecture. It is also speculated that hackers utilize malicious techniques to hijack private routers to either divert traffic or mask illegal activity. Through use of these private routers a dark Internet can form and be used to conduct all manner of misconduct on the Internet.”

>> http://en.wikipedia.org/wiki/Dark_Internet


The dark side of the internet

“In the ‘deep web’, Freenet software allows users complete anonymity as they share viruses, criminal contacts and child pornography

“Fourteen years ago, a pasty Irish teenager with a flair for inventions arrived at Edinburgh University to study artificial intelligence and computer science. For his thesis project, Ian Clarke created “a Distributed, Decentralised Information Storage and Retrieval System”, or, as a less precise person might put it, a revolutionary new way for people to use the internet without detection. By downloading Clarke’s software, which he intended to distribute for free, anyone could chat online, or read or set up a website, or share files, with almost complete anonymity.

“There’s a well-known crime syndicate called the Russian Business Network (RBN),” says Craig Labovitz, chief scientist at Arbor Networks, a leading online security firm, “and they’re always jumping around the internet, grabbing bits of [disused] address space, sending out millions of spam emails from there, and then quickly disconnecting.”

The RBN also rents temporary websites to other criminals for online identity theft, child pornography and releasing computer viruses. The internet has been infamous for such activities for decades; what has been less understood until recently was how the increasingly complex geography of the internet has aided them. “In 2000 dark and murky address space was a bit of a novelty,” says Labovitz. “This is now an entrenched part of the daily life of the internet.” Defunct online companies; technical errors and failures; disputes between internet service providers; abandoned addresses once used by the US military in the earliest days of the internet – all these have left the online landscape scattered with derelict or forgotten properties, perfect for illicit exploitation, sometimes for only a few seconds before they are returned to disuse. How easy is it to take over a dark address? “I don’t think my mother could do it,” says Labovitz. “But it just takes a PC and a connection. The internet has been largely built on trust.”

>> http://www.guardian.co.uk/technology/2009/nov/26/dark-side-internet-freenet

 





Supposedly an AI would be better because 1) it is logical - not subject to the whims of darwinian design, and 2) it has faster processing speed and a larger information database.

I have no doubt that #2 will be correct, but is it possible that humans can create something logical when we are anything but?

In any system of ethics you have to define what is good and what is bad - how can you get the world to agree on these definitions?

How reluctant are present humans to define an overt system of ethics?

I don’t see many wanting to promote or challenge the Abolitionist directive:

“the abolition of involuntary suffering/death in favor of the infinite increase in voluntary happiness/vitality”

so will they promote it when AI is ‘created’?

I’m sure humans will continue to focus on short-sighted and expedient means that further their individual pursuit of happiness, however blind it may be.

Presently we have fringe technologies developed in secret; AI will continue to be developed by the elite, without input from the masses and for the whims of the privileged few.

Even with a world government you won’t have a democratic and open military industrial intelligence complex.

This is due to human nature.





You can read my criticism of this proposal here. In short, dreams of superhuman patriarchs legitimate the existing structures of oppression.





@ Alan Grimes: Colossus vs. Guardian. That was my first thought as well…

The technology will likely appear on several fronts at about the same time, some of them possibly secret.





Not sure about a nanny, but certainly some kind of ‘quick ‘n dirty’ limited solution to buy us some time is likely needed.  As you say, we’ll be waiting 100 years for grand schemes such as CEV, and we’ll be long dead by then my friend.

Transhumanists spend far too much time in their own heads pondering grand schemes. I know why: the young transhumanist folks are mostly nerds who think they’re immortal and that the world’s concerns don’t apply to them. But as you say in the article, once you hit your 40s you realize your time is running out real fast.

Transhumanists will never live to see any of the grand transhumanist schemes reach fruition unless swift action is taken to alleviate our dire predicament (aging).  Transhumanists spend all this time in their heads, and all the while the world is totally passing them by.  While pondering all sorts of exotic threats, transhumanists were getting ‘outflanked’ by the mundane, the banal (aging, heart disease, cancer).  While pondering all sorts of highly complex and abstract grand schemes (e.g., CEV), transhumanists failed to have a Plan B.

Remember my Hacker’s Maxims:

Hacker’s Maxim #3:  ‘Always Have a Plan B’
Hacker’s Maxim #4: ‘Always Cover Your Flank’

The AGI Nanny constitutes a reasonable ‘near-mode’ possibility for a Plan B and a Flank Covering exercise to buy us the extra time we need to usher in the grander ‘far-mode’ schemes.





“but as you say in the article, once you hit your 40s you realize your time is running out real fast.”

Yes and in your 50s you can’t ignore that realization.





http://www.artistserver.com/member/index.cfm/a/9587/blog/2792

it’s no longer about computers but NETWORKS of computers- and the CLOUD is ALREADY an exaflop computer [there are 10^9 active mobile devices with 10^9 flops each for 10^18 flops- also there are 10^8 PCs with 10^10 flops each- another exaflop]! it is just in the process of compiling- it is transitioning from sharing data to sharing computation resources
consider this- the HUMAN network of memes and culture and conscious memory is a biological network of 10^9 brains each with about 10^16 ops- although the bandwidth between brains is very low: symbolic language- the whole neural supernetwork is about 10^25 ops

now consider that by the late 2020s our mobile devices will each be hitting the same power of the brain: 10^16 flops - petaflop mobiles and devices-

but AT THE SAME TIME at least basic BCI with more bandwidth than symbolic language between humans [at least basic audiovisual AR/VR and natural language recognition/basic brainwave reading] will for the first time allow the biological network of humans to cybernetically connect with the machine network/cloud at the level of each human/device

but when the two networks connect- they will be EQUIVALENT- in size/complexity the human network of memes and neurons is 10^25 ops- and as they connect the machine network will be 10^25 flops [10^16 flops times 10^9 devices]!

the fact that all indications show that the PLANETARY networks - the biological and the machine- will MERGE into a cybernetic whole JUST as they are EQUAL is just too big a coincidence my friends

this is the synchronicity of synchronicities- a clear sign of a teleology at work- a planetary self-organizing principle
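For what it’s worth, the order-of-magnitude arithmetic in the comment above is internally consistent; here it is reproduced literally in Python, with the caveat that the device counts and per-device flops are the commenter’s assumptions, not established figures:

```python
# Reproducing the previous comment's order-of-magnitude estimates verbatim.
# All inputs are that commenter's assumptions, not established figures.

mobile_cloud_now = 10**9 * 10**9    # 1e9 mobile devices x 1e9 flops each = 1e18
pc_cloud_now     = 10**8 * 10**10   # 1e8 PCs x 1e10 flops each           = 1e18
human_network    = 10**9 * 10**16   # 1e9 brains x 1e16 ops each          = 1e25
mobile_cloud_late_2020s = 10**9 * 10**16  # 1e9 devices x 1e16 flops each = 1e25

print(f"mobile cloud today:  {mobile_cloud_now:.0e} flops")
print(f"PC cloud today:      {pc_cloud_now:.0e} flops")
print(f"human meme network:  {human_network:.0e} ops")
# The claimed "equivalence" of the two planetary networks:
print(mobile_cloud_late_2020s == human_network)  # True
```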






http://chronicle.com/article/From-Technologist-to/128231/

“To a young undergraduate, frustrated with the lack of rapid progress on tough philosophical questions, AI seemed like the great hope, the panacea—the escape from the frustrations of thinking. If we human beings are such feeble thinkers, perhaps philosophy is best not left to human beings. We could instead just build better thinkers—artificially intelligent machines—and they could answer our questions for us…Over time, it became increasingly hard to ignore the fact that the artificial intelligence systems I was building were not actually that intelligent. They could perform well on specific tasks; but they were unable to function when anything changed in their environment. I realized that, while I had set out in AI to build a better thinker, all I had really done was to create a bunch of clever toys—toys that were certainly not up to the task of being our intellectual surrogates…We were not, and are not, on the brink of a breakthrough that could produce systems approaching the level of human intelligence.”





Regardless of one’s stance on the issue, I think everyone will find this interesting:

http://www.popsci.com/technology/article/2011-09/yale-law-journal-ponders-wisdom-ibm-robot-watson-judge

The Yale Law Journal’s Betsy Cooper wrote an essay examining our favorite Jeopardy! champion (and new medical diagnoser) robot Watson, but from a new angle: Could Watson help judges make legal decisions?

The essay notes that Watson could be of particular use to a certain type of judge or legal scholar: the new textualists. She writes: “New textualists believe in reducing the discretion of judges in analyzing statutes. Thus, they advocate for relatively formulaic and systematic interpretative rules. How better to limit the risk of normative judgments creeping into statutory interpretation than by allowing a computer to do the work?”
Says Cooper, “there are three important elements of new textualism: its reliance on ordinary meaning (the premise), its emphasis on context (the process), and its rejection of normative biases (the reasoning).” From that vantage point, Watson wouldn’t be so much a judge (much as we’d love to see a massive black judge’s robe draped over Watson’s storage array) as an assistant or clerk, using its power to decide, for example, what the most “ordinary” use of a word is. Humans have to rely on instinct and experience, but Watson can systematically measure that sort of thing, narrowing down the possible meanings of words to eliminate uncertainty.

Watson also has the advantage of not being able to insert his own emotions or opinions into his decisions, by virtue of the fact that, well, he doesn’t have any. Cooper does conclude that, due to his occasional errors (we’d hate to sentence criminals to serve time in Toronto) and the more basic fact that perhaps there should be a human element to judging, Watson is not an ideal candidate to actually make the bench. But that doesn’t mean he couldn’t be tremendously useful in legal decisions.





My comments are added at the end of these quotes and links..

Collective intelligence

“Collective intelligence is a shared or group intelligence that emerges from the collaboration and competition of many individuals and appears in consensus decision making in bacteria, animals, humans and computer networks.

The idea emerged from the writings of Douglas Hofstadter (1979), Peter Russell (1983), Tom Atlee (1993), Pierre Lévy (1994), Howard Bloom (1995), Francis Heylighen (1995), Douglas Engelbart, Cliff Joslyn, Ron Dembo, Gottfried Mayer-Kress (2003) and other theorists. Collective intelligence is referred to as Symbiotic intelligence by Norman Lee Johnson.[1] The concept is relevant in sociology, business, computer science and mass communications: it also appears in science fiction, frequently in the form of telepathically-linked species and cyborgs.”

David Skrbina cites the concept of a ‘group mind’ as being derived from Plato’s concept of panpsychism (that mind or consciousness is omnipresent and exists in all matter). He develops the concept of a ‘group mind’ as articulated by Thomas Hobbes in “Leviathan” and Fechner’s arguments for a collective consciousness of mankind. He cites Durkheim as the most notable advocate of a ‘collective consciousness” and Teilhard de Chardin as a thinker who has developed the philosophical implications of the group mind.

Atlee and Pór suggest that the field of collective intelligence should primarily be seen as a human enterprise in which mind-sets, a willingness to share and an openness to the value of distributed intelligence for the common good are paramount, though group theory and artificial intelligence have something to offer. Individuals who respect collective intelligence are confident of their own abilities and recognize that the whole is indeed greater than the sum of any individual parts. Maximizing collective intelligence relies on the ability of an organization to accept and develop “The Golden Suggestion”, which is any potentially useful input from any member. Groupthink often hampers collective intelligence by limiting input to a select few individuals or filtering potential Golden Suggestions without fully developing them to implementation.

>> http://en.wikipedia.org/wiki/Collective_intelligence


World Wide Brain - Ben Goertzel

“The emergence of the global Web mind will, I believe, mark a significant turning-point in the history of intelligence on Earth. On a less cosmic level, it will obviously also play an important role in the specific area of computing. It will bring together two strands in computer science that have hitherto been mostly separate: artificial intelligence and networking. As we watch the initial stages of the global Web mind emerge, the interplay between the Net and AI will be intricate, subtle and beautiful. In the global Web mind, both networking and artificial intelligence reach their ultimate goals.”

>> http://www.goertzel.org/papers/webart.html


As a counterpoint to the above, below is a short piece by Jaron Lanier; although I do not agree with his conclusions, it is well written and worthy of contemplation.

Beware the Online Collective

“What’s to stop an online mass of anonymous but connected people from suddenly turning into a mean mob, just like masses of people have time and time again in the history of every human culture? It’s amazing that details in the design of online software can bring out such varied potentials in human behavior. It’s time to think about that power on a moral basis.”

>> http://www.edge.org/3rd_culture/lanier06/lanier06.2_index.html


And the answer to above?

Mob behaviours and tribalism will still be cancelled out by the self-correcting online Global collective - remember, size matters? If you cannot see this viewpoint, then you need to take a step further back and focus on the wood not the trees?

This is not to say that “each individual” will not have an effect on the Global collective consciousness. It is individual expression and creativity, participation and cooperation in the online collective that is fundamentally important to ensure progress and sharing of ideas and wisdom, and to ensure that mobs and tribalism, selfishness and criminality, frauds and corruption etc. are effectively nulled by the collective majority?

For sure, selfishness, frauds and criminality will always exist, as long as humans seek to take advantages and pursue self gratification. Free-loaders and associated memes will persist and continue to prosper, fundamentalism, extremism and political radicals and misanthropy will still endure; yet none of these will cause significant damage to the collective whole.

Polarised viewpoints may ebb and flow through causality, (political viewpoints may change overnight and be inspired by party politics and propaganda), yet the overall aggregate of majority interaction will effect a balance and equilibrium. And this is despite the negatives of mass ignorance, or democratic “mob rule” as some would have it, (although an “informed” democratic process, whereby citizens are given the right information to make an informed decision is most preferable, and if political process is carried out in earnest, this can be achieved).

Once again, you may feel that this argument is merely moot and pure speculation, yet today’s media and news channels are in fact highly interactive, and encourage discussion, participation and feedback through social media and email. Politicians are being held accountable in real time across news networks, news celebrities are now contactable directly, and even encourage contact and feedback. Even Barack Obama uses Twitter to disseminate political opinion and information. Increased transparency and accountability of governance and politicians is the way forward.

To conclude, people are not as dumb and indifferent as most governments would like them to be, or expect them to be, and most would actively participate in the governance of themselves and their communities if they were encouraged and empowered to do so? And if, they indeed felt, that their opinions were valued and counted towards progress and change?


The more pertinent questions facing the emergence of the Global Brain/Mind must begin with asking..

1. Do nation state governments really want to encourage and develop Global initiatives towards collective resourcing and participation, and thus encourage increased individual responsibility through social contract? As this means effectively promoting greater democratic power to citizens and relinquishing at least some governance to vote by proxy? Although ultimately, this would be to their advantage, (not that they would necessarily view it this way however), I can’t see much progress yet in increased democratic referendum.

2. Assuming that nation state governments would wish to encourage Global collective resourcing, or even national collective resourcing, (and I do not believe this is the case today), how would they encourage this participation and cooperation from citizens, (use of a carrot is obviously more effective)? How would they implement this strategy, (incremental would seem best)? How could citizens themselves help to promote increased awareness and participation for progression towards a “Big society”? Is it time for citizens to begin making real demands of their governments towards an increased democratic process?

3. Ultimately there will still be a need for a centralised Global government and authority to oversee jurisdiction and administrate for contingency against catastrophe and existential risks, and arbitrate in disagreements between nation states, (whatever those may be?), and I would foresee the UN playing an even greater role in Global governance, (comprising a federation of nation state governments, as is mostly the case today). This does not necessarily mean a need for an increase in bureaucracy, international laws and regulations however, (which seems to be the case today?)

With citizens holding increased democratic powers in the governance of their nation states, and our present nation state governments relegated to “administrators” of their respective collectives and for the negotiation and implementation of central Global initiatives, Global governance and regulations may be downsized and streamlined to suit and cater for the acceptance of all cultures and to ensure that basic human needs are met globally, (increased bureaucracy and regulation appears to hinder and slow progress towards egalitarianism?)

Yet still the same question begs, would the UN in fact be prepared to seek to empower Global citizens to think and act, and pursue increased democracy, or would it merely wish to continue to support the status quo and the current democratic participation of nation state central governments?

What has all of this to do with an AI Nanny? - I still don’t think we need one!





Well, I am somewhat surprised to see some of my intuitions concord with your exposition. The AI Nanny idea, in my opinion, has been bouncing around for some time (mostly in SF, of course), the most striking example being the Multivac concept by the “good Doctor”, Asimov. Maybe some people mix up the concept of the “nanny” with relinquishing a piece of their liberty; that’s true, but on the other hand, for example: why are there so many people doing research on prosthetics? Because some humans need them, right? They have some form of disability and we want to compensate their lives a bit. Now, do we assume that the human mind is foolproof? NO, and it would be naive to think of it as such. The human mind can create an astounding number of tools; why not create one that at least could tell us “Careful, you are getting a bit offhand on this or that”? That idea is not new either. Dr. Stafford Beer tried to implement such an “expert system” (to call it somehow) in Chile in the 70s. Maybe the wrong place at the wrong time? Could be. But his later work on VSM and Syntegrity applied to public office might give some guidelines about what we NEED. That something like the AI Nanny is too complex to build in a reasonable amount of time? I have my doubts about the “impossible”; the Internet and the WWW were something impossible about fifty years ago. I consider myself a Technoprogressive, and watching some politicians at “work”, I can only think that “politics is too serious a matter to entrust to politicians alone”.





No one doubts that the unthinkable can be created; the issue for us is to ensure that it is ethical, given human nature - liberty and individual responsibility are an important [critical] check and balance against the stupidity inherent to our design.

Without changing our fundamental design, we are creating tools that amplify our darwinian drives.

What is valuable is not something that can be objectively quantified according to a hierarchy - that is a fallback to darwinian evolution (alpha male and such.)

The freedom to pursue happiness is what is valuable - it requires informed consent and subjective ‘free-will’.

———

With government corruption as it is, and the well documented history of the government encroaching upon our liberties with involuntary experimentation and spying, not to mention keeping the power and truth of fringe science from the masses - no one should trust an AI hierarchy - because it will be of darwinian nature and for darwinian nature - and controlled by the elite who are the least qualified to determine reality for individuals.





I came across something today that made me chuckle.

To further my contention that these kinds of control remedies won’t work, because they won’t be fast or extensive enough.

A video of Ben Goertzel talking about his OPEN SOURCE project, OpenCog posted at K21st.

http://k21st.wordpress.com/2011/09/23/open-source-ai-engine-agi-11-conference/

Ben, you’re doing an OPEN SOURCE AGI project, so how do you think this sort of thing is going to remain under the control of any single, centralization-focused institution/organization!?

The genie is out of the bottle. It’s likely some kind of Singleton is going to emerge.

Perhaps the only thing we can do (hopefully) is decide how we’re going to adapt to its emergence (merge or not).

I don’t think there is time for any kind of bureaucratic solution.




