Institute for Ethics and Emerging Technologies




Homesteading a Society of Mind


By Jamais Cascio
Open the Future

Posted: Mar 1, 2011

Scientific American reports on research at Cornell’s Computational Synthesis Laboratory intended to give robot minds a degree of “self-awareness.” Is this a signpost on the road to machine consciousness?

The researchers’ initial version of the program gave the robot a way of watching and analyzing its own body, so that it could more readily adapt to new conditions (such as losing a limb). The next version, however, was much more ambitious:

Now, instead of having robots modeling their own bodies, Lipson and Juan Zagal have developed ones that essentially reflect on their own thoughts. They achieve such thinking about thinking, or metacognition, by placing two minds in one bot. [...] By reflecting on the first controller’s actions, the second one could make changes to adapt to failures… In this way the robot could adapt after just four to ten physical experiments instead of the thousands it would take using traditional evolutionary robotic techniques.
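To make the setup concrete, here is a minimal Python sketch of the idea. It is not the Cornell code: the names PrimaryController, MetaController, and run_trial are my own illustrative inventions, and the "damage" is just a number. The point it tries to show is the one in the quote: the second controller never acts on the world, it only watches the first controller's errors and edits it, so a handful of trials is enough to compensate.

```python
# Illustrative sketch only (not the Cornell/Lipson code). All names here
# (PrimaryController, MetaController, run_trial) are hypothetical.

class PrimaryController:
    """First 'mind': maps a sensed state to a motor command via one gain."""
    def __init__(self, gain=1.0):
        self.gain = gain

    def act(self, state):
        return self.gain * state


class MetaController:
    """Second 'mind': never touches the world directly; it only observes
    the primary controller's outcomes and adjusts the primary controller."""
    def __init__(self, primary, step=0.2):
        self.primary = primary
        self.step = step

    def reflect(self, target, outcome):
        error = target - outcome
        # The only thing the second controller ever changes is the first one.
        self.primary.gain += self.step * error


def run_trial(primary, damage=0.5):
    """One 'physical experiment'; damage stands in for, say, a lost limb
    that halves the effect of the primary controller's command."""
    state = 1.0
    return damage * primary.act(state)


primary = PrimaryController()
meta = MetaController(primary)
target = 1.0

for trial in range(10):  # a handful of trials, not thousands
    outcome = run_trial(primary)
    meta.reflect(target, outcome)
    print(f"trial {trial}: outcome={outcome:.3f}, gain={primary.gain:.3f}")
```

After ten such adjustments the gain settles near 2.0 and the halved output is back near the target. Nothing here is evolved or re-learned from scratch; the correction happens entirely through one controller reflecting on the other.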

They refer to this system of having one controller analyze another as “metacognition,” but what immediately came to mind for me was Marvin Minsky’s description of a “Society of Mind”—the idea that the conscious mind is an emergent process resulting from multiple independent sub-cognitive processes working in parallel.

This piece at MIT gives a better overview of the Society of Mind argument than the Wikipedia stub, including this quote from a Minsky essay on the concept:

The mind is a community of “agents.” Each has limited powers and can communicate only with certain others. The powers of mind emerge from their interactions for none of the Agents, by itself, has significant intelligence. [...] In our picture of the mind we will imagine many “sub-persons”, or “internal agents”, interacting with one another. Solving the simplest problem-seeing a picture-or remembering the experience of seeing it-might involve a dozen or more-perhaps very many more-of these agents playing different roles. Some of them bear useful knowledge, some of them bear strategies for dealing with other agents, some of them carry warnings or encouragements about how the work of others is proceeding. And some of them are concerned with discipline, prohibiting or “censoring” others from thinking forbidden thoughts.
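To make that picture a bit more concrete, here is a toy Python sketch of my own; it is not Minsky’s architecture, and the agent names (edge_agent, brightness_agent, censor_agent) are invented. Each agent sees only a sliver of the input and posts a simple note, one agent plays the “censor,” and “seeing the picture” is just whatever the surviving notes jointly say.

```python
# Toy illustration of the "society of mind" idea; not Minsky's architecture.
# Agent names (edge_agent, brightness_agent, censor_agent) are invented.

def edge_agent(image):
    """Knows only how to spot sharp changes between neighboring pixels."""
    jumps = any(abs(a - b) > 0.5 for a, b in zip(image, image[1:]))
    return "edges" if jumps else None

def brightness_agent(image):
    """Knows only the average brightness."""
    return "bright" if sum(image) / len(image) > 0.5 else None

def censor_agent(notes):
    """Discipline agent: suppresses notes it considers forbidden."""
    return [note for note in notes if note != "forbidden"]

def society(image):
    """No single agent 'sees the picture'; the description emerges from
    whatever notes survive on the shared blackboard."""
    blackboard = []
    for agent in (edge_agent, brightness_agent):
        note = agent(image)
        if note is not None:
            blackboard.append(note)
    return censor_agent(blackboard)

print(society([0.1, 0.9, 0.8, 0.6]))  # -> ['edges', 'bright']
```

No agent in the sketch is intelligent on its own; the description of the image exists only in the aggregate, which is the part of Minsky’s claim the two-controller robot gestures at.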

Clearly, a two-“agent” robot mind isn’t quite a real “society of mind”—it’s more like a “neighborly acquaintance of mind.” Nonetheless, it points to an obvious direction for further research and offers interesting support for Minsky’s idea.

It also echoes something I wrote in 2003, for the Transhuman Space: Toxic Memes game book. In discussing why AI “infomorphs” weren’t significantly smarter than humans, I offered up this:

Despite their different material base, human minds and AI minds are remarkably similar in form. Both display consciousness as an emergent amalgam of subconscious processes. For humans, this was first suggested well over a century ago, most famously in the work of Marvin Minsky and Daniel Dennett, and proven by the notorious Jiap Singh “consciousness plasticity” experiments of the 2030s. [...] In the same way, nearly all present-day AI infomorphs use an emergent-mind structure made up of thousands of subminds, each focused on different tasks. There is no single “consciousness” system; thought, awareness, and even sapience emerge from the complex interactions of these subprocesses. Increased intellect… is the result of increasingly complex subsystems.

We’re still a ways away from declaring this a successful predictive hit, but it’s amusing nonetheless.


Jamais Cascio is a Senior Fellow of the IEET and a professional futurist. He writes the popular blog Open the Future.


COMMENTS


The moral, loving observer self behind the rational, self-indulgent ego. Sounds a lot like something that many people think doesn’t exist.





A double minded man is unstable in all of his ways.




