Homesteading a Society of Mind
Jamais Cascio
2011-03-01

The researchers' initial version of the program gave the robot a way of watching and analyzing its own body, so that it could more readily adapt to new conditions (such as losing a limb). The next version, however, was much more ambitious:

Now, instead of having robots modeling their own bodies, Lipson and Juan Zagal have developed ones that essentially reflect on their own thoughts. They achieve such thinking about thinking, or metacognition, by placing two minds in one bot. [...] By reflecting on the first controller's actions, the second one could make changes to adapt to failures... In this way the robot could adapt after just four to ten physical experiments instead of the thousands it would take using traditional evolutionary robotic techniques.
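
The two-controller arrangement is simple enough to caricature in a few lines of code. The sketch below is my own loose illustration, not Lipson and Zagal's implementation: a primary controller acts using a model of the robot's own body, and a meta-controller mutates the primary's parameters whenever its predictions drift from what the body actually does. The proportional controller, the "damage" factor, and the random-mutation update are all invented for the example.

    import random

    class PrimaryController:
        """Maps a sensed state to a motor command, and predicts the result
        using its internal model of the robot's own body."""

        def __init__(self, gain=1.0):
            self.gain = gain

        def act(self, state):
            return -self.gain * state  # simple proportional correction

        def predict(self, state, action):
            return state + action      # what an undamaged body would do

    class MetaController:
        """The second mind: watches the first controller's predictions
        against reality and adapts the first's parameters on failure."""

        def __init__(self, primary, tolerance=0.05):
            self.primary = primary
            self.tolerance = tolerance

        def reflect(self, predicted, actual):
            if abs(predicted - actual) > self.tolerance:
                # The self-model no longer fits (say, a damaged limb), so
                # change the controller rather than rerun thousands of
                # evolutionary trials against the world.
                self.primary.gain += random.uniform(-0.2, 0.2)

    primary = PrimaryController()
    meta = MetaController(primary)
    state, damage = 1.0, 0.5  # an actuator delivering half its commanded effort

    for step in range(10):    # a handful of physical experiments
        action = primary.act(state)
        predicted = primary.predict(state, action)
        actual = state + damage * action
        meta.reflect(predicted, actual)
        state = actual
        print(f"step {step}: state={state:+.3f} gain={primary.gain:.3f}")
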


They refer to this system of having one controller analyze another as "metacognition," but what immediately came to mind for me was Marvin Minsky's description of a "Society of Mind" -- the idea that the conscious mind is an emergent process resulting from multiple independent sub-cognitive processes working in parallel.

This piece at MIT gives a better overview of the Society of Mind argument than the Wikipedia stub, including this quote from a Minsky essay on the concept:

The mind is a community of "agents." Each has limited powers and can communicate only with certain others. The powers of mind emerge from their interactions for none of the Agents, by itself, has significant intelligence. [...] In our picture of the mind we will imagine many "sub-persons", or "internal agents", interacting with one another. Solving the simplest problem -- seeing a picture -- or remembering the experience of seeing it -- might involve a dozen or more -- perhaps very many more -- of these agents playing different roles. Some of them bear useful knowledge, some of them bear strategies for dealing with other agents, some of them carry warnings or encouragements about how the work of others is proceeding. And some of them are concerned with discipline, prohibiting or "censoring" others from thinking forbidden thoughts.
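
To make that picture concrete, here is a toy rendering in the same spirit as the sketch above -- again my own illustrative assumptions, not Minsky's formalism. A handful of narrow agents post facts to a shared workspace when their triggers appear, and a "discipline" agent censors a forbidden conclusion; the recognizer's verdict emerges from the interaction, with no single agent doing anything intelligent by itself.

    class Agent:
        """One narrow rule: when all of its trigger facts are present,
        post a conclusion. No agent is intelligent by itself."""

        def __init__(self, name, triggers, conclusion):
            self.name = name
            self.triggers = set(triggers)
            self.conclusion = conclusion

        def fired(self, facts):
            # Don't repost a conclusion that is present or was censored.
            return (self.conclusion in facts
                    or ("censored", self.conclusion) in facts)

        def step(self, facts):
            if self.triggers <= facts and not self.fired(facts):
                facts.add(self.conclusion)
                return True
            return False

    class Censor(Agent):
        """A discipline agent: suppresses a forbidden conclusion instead
        of producing one of its own."""

        def step(self, facts):
            if self.conclusion in facts:
                facts.discard(self.conclusion)
                facts.add(("censored", self.conclusion))
                return True
            return False

    def run(agents, facts, max_rounds=20):
        for _ in range(max_rounds):
            if not any(agent.step(facts) for agent in agents):
                break  # quiescent: no agent has anything left to do
        return facts

    # "Seeing a picture" as the joint work of several limited agents.
    agents = [
        Agent("edge-finder", {"image"}, "edges"),
        Agent("fur-detector", {"edges"}, "fur"),
        Agent("shape-matcher", {"edges"}, "cat-shape"),
        Agent("daydreamer", {"edges"}, "forbidden-thought"),
        Censor("discipline", set(), "forbidden-thought"),
        Agent("recognizer", {"fur", "cat-shape"}, "saw a cat"),
    ]
    print(run(agents, {"image"}))
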


Clearly, a two-"agent" robot mind isn't quite a real "society of mind" -- it's more like a "neighborly acquaintance of mind." Nonetheless, it shows an obvious direction for further research, as well as offering interesting support for Minsky's idea.

It also echoes something I wrote in 2003, for the Transhuman Space: Toxic Memes game book. In discussing why AI "infomorphs" weren't significantly smarter than humans, I offered up this:

Despite their different material base, human minds and AI minds are remarkably similar in form. Both display consciousness as an emergent amalgam of subconscious processes. For humans, this was first suggested well over a century ago, most famously in the work of Marvin Minsky and Daniel Dennett, and proven by the notorious Jiap Singh "consciousness plasticity" experiments of the 2030s. [...] In the same way, nearly all present-day AI infomorphs use an emergent-mind structure made up of thousands of subminds, each focused on different tasks. There is no single "consciousness" system; thought, awareness, and even sapience emerge from the complex interactions of these subprocesses. Increased intellect... is the result of increasingly complex subsystems.


We're still a ways away from declaring this a successful predictive hit, but it's amusing nonetheless.