Whole Brain Emulation as a Building Block for AGI - Promises and Pitfalls?
Gareth John
2016-02-20 00:00:00

So, if I understand it correctly, whole brain emulation is one approach to building a platform for (hopefully) Friendly AGI. It is widely discussed in artificial intelligence research publications and works on the premise that computer-based intelligence such as an upload could think much faster than a biological human even if it were no more intelligent. [1] If we assume that cognitive functionalism holds true as a theory — the idea that our brains are a kind of computer — there are two very promising approaches worth pursuing.



One strategy is the rules-based approach as promoted, for example, by Ben Goertzel. According to Wikipedia, Ben actively promotes the OpenCog project that he co-founded, which aims to build an open-source artificial general intelligence engine. He is focused on creating benevolent superhuman artificial general intelligence and applying AI to areas like financial prediction, bioinformatics, robotics and gaming. [2] He’s written a shedload of books, technical papers and journalistic articles, and, having watched his many videos posted on YouTube, I can’t help but like the guy. Unfortunately, much of his output is a little over my head, but nonetheless I’ll try to sum up my understanding of what he’s attempting to do, so that readers can put me right if needed.



The rules-based approach seems to take the cognitive-science route of hard-coding: figuring out the algorithms of intelligence and the ways in which they are intricately intertwined. Goertzel defines intelligence as the ability to detect patterns in the world and in the agent itself. His hypothesis is that one can create a ‘baby-like’ artificial intelligence first, and then raise and train this agent in a simulated or virtual world such as Second Life to produce a more powerful intelligence. To quote:



Knowledge is represented in a network whose nodes and links carry probabilistic truth values as well as ‘attention values,’ with the attention values resembling the weights in a neural network. Several algorithms operate on this network, the central one being a combination of a probabilistic inference engine and a custom version of evolutionary programming. [Goertzel] claimed that this combination is able to avoid the combinatorial explosions that both these algorithms suffer from when exposed to large problems. [3]



Now, if I’m correct, this approach is to reverse-engineer the learning structure of a brain and then let the agent learn and experience things on its own. As a result it will actually develop its own brain structure, which will be similar to a human brain’s. I’ll return to this idea in a bit.
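To make the quoted description a little more concrete, here is a toy sketch in Python of a knowledge network whose nodes carry a probabilistic truth value and an ‘attention value’. To be clear, this is my own illustration and not OpenCog’s actual API: the names (Atom, TruthValue, boost_attention) and the attention-spreading rule are assumptions chosen purely for readability.

```python
# Toy sketch (not OpenCog's real API): atoms in a knowledge network carry a
# probabilistic truth value plus an attention value that behaves a little
# like a weight in a neural network.

from dataclasses import dataclass, field

@dataclass
class TruthValue:
    strength: float = 0.5     # probability-like estimate that the atom holds
    confidence: float = 0.0   # how much evidence backs that estimate

@dataclass
class Atom:
    name: str
    tv: TruthValue = field(default_factory=TruthValue)
    attention: float = 0.0                      # importance weight used to focus inference
    links: list = field(default_factory=list)   # outgoing links to other atoms

def boost_attention(atom: Atom, amount: float = 0.1) -> None:
    """Crudely spread attention: reward an atom and, more weakly, its neighbours."""
    atom.attention += amount
    for neighbour in atom.links:
        neighbour.attention += amount * 0.5

# Two concepts joined by a link; inference is currently "paying attention" to one.
cat = Atom("cat", TruthValue(0.9, 0.8))
animal = Atom("animal", TruthValue(0.95, 0.9))
cat.links.append(animal)
boost_attention(cat)
print(f"cat attention: {cat.attention:.2f}, animal attention: {animal.attention:.2f}")
```

In a real system the interesting part is the inference and evolutionary machinery that operates over this network, which the sketch deliberately leaves out.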



The second approach seems to be focused on reverse-engineering the human brain itself. Neuroscientists say there's no reason to believe that we can't model this structure ourselves. Here, whole brain emulation implies the re-creation of all the brain's properties in an alternative substrate, namely a computer system: a one-to-one model in which all the relevant properties of the original system are reproduced.
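For a rough sense of what ‘re-creating the brain’s properties in software’ means at the very lowest level, here is a toy leaky integrate-and-fire neuron in Python. This is only a minimal sketch of simulating neural dynamics on a computer, not anyone’s actual emulation code; the function name and the parameter values are my own illustrative choices, and real emulation proposals assume vastly more biological detail (compartments, synapses, neuromodulators, and so on).

```python
# Toy leaky integrate-and-fire neuron: a hugely simplified stand-in for
# simulating neural dynamics in software. Parameters are illustrative,
# not physiologically calibrated.

def simulate_lif(input_current, dt=1e-4, tau=0.02,
                 v_rest=-0.070, v_thresh=-0.050, v_reset=-0.070):
    """Integrate membrane voltage over time and record spike times (in seconds)."""
    v = v_rest
    spike_times = []
    for step, drive in enumerate(input_current):
        dv = (-(v - v_rest) + drive) / tau   # leak back toward rest, pushed by input
        v += dv * dt
        if v >= v_thresh:                    # threshold crossed: spike, then reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant drive held for 100 ms produces a regular train of spikes.
print(simulate_lif([0.03] * 1000))
```

The gulf between a toy like this and a one-to-one emulation of an actual brain is, of course, exactly where most of the open scientific questions live.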



Many eminent computer scientists and neuroscientists have predicted that specially programmed computers will be capable of thought and even attain consciousness, including Koch and Tononi [4], Douglas Hofstadter [5], Marvin Minsky [6], Randall A. Koene [7], and Rodolfo Llinas [8]. Vardis over on the Fractal Future forum suggests that, ‘if we could scan human brains in sufficient detail, we could transfer a human personality to an AGI. But that's optional.’ [9] Shulman and Salamon argue that if whole brain emulation came before artificial general intelligence, this might help humanity to navigate the arrival of AGI with better odds of a positive outcome. The mechanism for this would be that safety-conscious human researchers could be ‘uploaded’ onto computing hardware and, as uploads, solve problems relating to AGI safety before the arrival of AGI. They summarise their position thus:



‘“Business as usual” is not a stable state. We will progress from business as usual to one of four states that are stable: stable totalitarianism, controlled intelligence explosion, uncontrolled intelligence explosion, or human extinction. (E.g., “Go extinct? Stay in that corner.”) The question, then, is whether the unstable state of whole brain emulation makes us more or less likely to get to the [stable] corners that we [favor]. Is our shot better or worse with whole brain emulation?’ [10]



My questions, then, are:



  1. Could the former method (the rules-based approach) backfire? If its aim is to figure out the algorithms of intelligence and the ways that they're intricately intertwined in order to train the ‘baby-like’ AI (albeit in virtual environments), whose brain(s) do they use to map the learning process? We all learn in different ways, so how can we be sure that the brain(s) used for emulation do not have flaws in this very process that could lead to disaster?

  2. My question concerning the latter method seems to me to be based upon an even more complex and potentially hazardous supposition. Bearing in mind the ‘if I’m right’ caveat affixed to my thoughts, if whole brain emulation requires reverse-engineering a brain or brains, then the uncertainty as to where this would lead would surely be even greater. The big question for me remains: whose brain or brains do we use? There is no one without flaws in their mental reasoning, decision-making and learning processes, and the idea that an AGI could somehow ‘fix’ these problems relies on its ability to see the flaws as problems in the first place. Having bipolar disorder myself, I'm the last person you would want holding the fate of the world in my hands. Hell, I can barely use my iPhone, and my mood swings would, I feel, make for a very dangerous Singularity.



So, are these dumb questions? If either approach to reverse-engineering the brain in order to build the platform for AGI wins out, what exactly will the AGI ‘be’, at least to begin with? A ‘blank slate’ that learns as it goes along, in which case we have to question the assumption that its learning process is unhampered by the human foibles of those who coded it? Or does it rely on someone’s brain to seed the AGI, and if so, whose?



Or is it (as I suspect) somewhat more complicated than that?



Notes.



  1. Goertzel, Ben (December 2007). http://www.goertzel.org/AI_Journal_Singularity_Draft.pdf

  2. https://en.wikipedia.org/wiki/Ben_Goertzel

  3. https://en.wikipedia.org/wiki/Ben_Goertzel

  4. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=4531463

  5. http://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity

  6. Marvin Minsky, Conscious Machines, in 'Machinery of Consciousness', Proceedings, National Research Council of Canada, 75th Anniversary Symposium on Science in Society, June 1991.

  7. http://arxiv.org/abs/1504.06320

  8. Llinas, R (2001). I of the Vortex: From Neurons to Self. Cambridge: MIT Press. pp. 261–262. ISBN 0-262-62163-0.

  9. http://forum.fractalfuture.net/t/whole-brain-emulation-and-creating-agi-whos-brain/737/5

  10. Anna Salamon; Luke Muehlhauser (2012). "Singularity Summit 2011 Workshop Report" (PDF). Machine Intelligence Research Institute. Retrieved 28 June 2014.
