

Group Intelligence, Enhancement, and Extended Minds


Phil Torres
Ethical Technology

Posted: Jan 19, 2011

Virtually all talk of cognitive enhancement focuses exclusively on the enhancement of individual intelligence. But what about enhancing group intelligence?


It is increasingly common for individuals to work in collaborative groups rather than alone. One finds this trend salient in a growing number of domains, from business and management to government and media to science and academia. (Even in the humanities, an increasing proportion of research papers are being co-authored.)

But what do we know about the nature of such collaborative work? What is the relation between the capacities of individuals and the capacities of the group? And how might cognitive enhancement technologies amplify the abilities of groups to solve the problems they’re confronted with?

In a fascinating paper published in Science entitled “Evidence for a Collective Intelligence Factor in the Performance of Human Groups” (2010), Dr. Anita Williams Woolley and her colleagues (yes, the research was collaborative!) make two important discoveries:

1. They find that there is such a thing as collective intelligence. What is collective intelligence? Well, it’s the analogue of general intelligence, or IQ, except it exists at the level of the group rather than the individual. What’s useful about IQ - our best measure of an individual’s general intellectual ability - is that it can be used to predict how an individual will perform in a number of different cognitive domains. (That’s why it’s called “general” intelligence!) Using the same basic approach for quantifying individual general intelligence, Woolley et al. find that groups exhibit a similar property that is both measurable and predictive of how a group will perform on a range of cognitive tasks. (A rough sketch of this factor-extraction idea appears after the second point below.)

2. The researchers find some quite intriguing - and counterintuitive - correlations between properties at the level of the individual and the level of the group. For example, one might “pre-theoretically” think that group intelligence is a function of the average intelligence of that group’s members. And one might “pre-theoretically” think that a group with a single exceptional individual would have a higher group IQ than one with, say, three above average but non-exceptional members.

However, Woolley and her colleagues find only a statistically weak correlation between the intelligence of groups and these two member-level properties. In other words, it’s not possible to accurately predict how well a group will perform on a range of cognitive tasks simply by averaging the IQs of its members, or by noting a single exceptional individual within the group. These features aren’t linked - or at least not robustly - to group IQ, despite what intuition might suggest.
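The “same basic approach” is essentially factor analysis: score a set of groups on a battery of tasks and extract a single dominant factor from the score matrix. Below is a minimal Python sketch of that idea, using simulated scores and the first principal component as a crude stand-in for the collective-intelligence factor; the data and variable names are illustrative assumptions, not Woolley et al.’s actual dataset or analysis pipeline.

```python
import numpy as np

# Hypothetical data: 40 groups scored on 5 different cognitive tasks.
# (Illustrative numbers only; not from Woolley et al.'s dataset.)
rng = np.random.default_rng(0)
latent_c = rng.normal(size=(40, 1))            # latent "collective intelligence"
scores = 0.7 * latent_c + 0.5 * rng.normal(size=(40, 5))  # each task partly reflects the latent factor

# Standardize each task, then take the first principal component
# as a rough stand-in for the general (here, collective) factor.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
loadings = eigvecs[:, -1]                      # dominant factor's loadings (eigh sorts ascending)
explained = eigvals[-1] / eigvals.sum()

group_c = z @ loadings                         # one "c" score per group
print(f"variance explained by the first factor: {explained:.0%}")
print("correlation of c with each task:",
      np.round([np.corrcoef(group_c, z[:, j])[0, 1] for j in range(5)], 2))
```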

The Key to Smarter Groups

What, then, determines how smart a group of collaborating individuals is? The researchers find three individual-level features that correlate in a statistically significant way to collective intelligence.

First, the greater the social sensitivity of group members, the smarter the group. Second, the more turn-taking within the group, the better the group performs. And third, the more women in the group, the higher the group IQ. For any reader who works on projects in groups, this is good information to know!

(By the way, what might gender have to do with group IQ? Well, the researchers surmise that groups with more women are smarter because women tend to be more socially sensitive than men. Thus, the gender factor is real but indirect - that is, it’s mediated by the property of social sensitivity.)
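To make the reported pattern concrete, here is a small, purely illustrative simulation (again, not the paper’s data or analysis): a toy collective-intelligence score driven directly by social sensitivity and turn-taking, with the proportion of women affecting the score only indirectly through social sensitivity, as the mediation story above suggests.

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups = 40

# Hypothetical group-level features (illustrative only, not Woolley et al.'s data).
prop_women = rng.uniform(size=n_groups)                                   # proportion of women per group
social_sensitivity = 0.6 * prop_women + 0.4 * rng.normal(size=n_groups)   # partly driven by the gender mix
turn_taking = rng.normal(size=n_groups)                                   # evenness of conversational turn-taking

# Toy collective-intelligence score: depends directly on social sensitivity
# and turn-taking; the gender effect enters only through social sensitivity.
c_score = 1.0 * social_sensitivity + 0.4 * turn_taking + 0.3 * rng.normal(size=n_groups)

for name, feature in [("social sensitivity", social_sensitivity),
                      ("turn-taking", turn_taking),
                      ("proportion of women", prop_women)]:
    r = np.corrcoef(feature, c_score)[0, 1]
    print(f"corr(c, {name}) = {r:+.2f}")
```

In this toy setup all three correlations come out positive, but the correlation with the proportion of women would shrink toward zero once social sensitivity is controlled for - which is the sense in which the gender factor is “indirect.”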

What I find most interesting about the paper, though, at least from the perspective of transhumanism, is the following remark at the end: “More importantly, it would seem to be much easier to raise the intelligence of a group than an individual. Could a group’s collective intelligence be increased by, for example, better electronic collaboration tools?”

This is interesting because virtually all talk of “cognitive enhancement” in the literature today focuses exclusively on the enhancement of individual intelligence. But what about group intelligence? Maybe there are technologies or strategies for cognitive enhancement that are applicable at the group rather than the individual level? The authors themselves suggest, in the quote above, that increasing the information-sharing abilities of group members using “electronic collaboration tools” might enhance the intelligence of the group itself (without necessarily increasing the intelligence of individual group members).

Artificial Enhancement of Groups

But there’s also the more speculative possibility, not mentioned by Woolley et al, of enhancing the social sensitivity of group members. What would happen if group members took, for instance, a pharmaceutical of some sort that enabled them to be more socially sensitive towards each other? What if some sophisticated technology were available that augmented the individual’s ability to better listen to the ideas of others - to let others have time to speak and to be intellectually open to opposing views?

This would involve enhancement at the individual level, of course, but it would be explicitly aimed at enhancing the intelligence of groups. Furthermore, such enhancement would not really be cognitive in nature, since it would target the psychological and emotional capacities of individuals, such as one’s capacity for social sensitivity, rather than one’s general intelligence.

Indeed, given Woolley’s findings, it appears that increasing the raw intelligence of individual group members cannot guarantee a smarter group. A group of cognitively enhanced individuals with extremely high IQs (because of their enhancement) thus might fail to outperform a group of “normals” if those “normals” prove to be more socially sensitive than their enhanced rivals.

I take this to be a nontrivial claim because: (a) “radical” cognitive enhancements will almost certainly become widely available in the near future, and (b) as noted earlier, there is a pervasive trend towards increasingly collaborative work in nearly all domains of human activity. Maybe the best way to improve the enterprises of science, government, business, and so on, would thus be to focus on enhancing group intelligence - a goal that may or may not have much to do with enhancing individual intelligence.

(Incidentally, there seems to be a point at which the above generalizations no longer hold. If, for example, a group consisted of three “normals” and one “posthuman” whose cognitive capacities exceeded our own in the way our own capacities exceed those of, say, a mouse, then this group might indeed outperform other groups with greater social sensitivity. And this group would outperform others simply by virtue of the profound intellectual abilities of a single exceptional member, namely the ultraintelligent posthuman.)

Group Mind vs. Extended Mind

One final thought on this issue: I find it extremely interesting - as well as perplexing - to think about how Woolley’s group intelligence research might relate to the “extended mind thesis.” What is this thesis? Basically (to make a long story very short), the central component of the extended mind thesis is called the Parity Principle. It states that “if, as we confront some task, a part of the world functions as a process which, were it to go on in the head, we would have no hesitation in accepting as part of the cognitive process, then that part of the world is (for that time) part of the cognitive process.”

Thus, according to the Parity Principle, inanimate objects like a pad of paper, a calculator, a computer, Wikipedia, an iPhone, and so on, can all, under just the right conditions, constitute a literal component of one’s cognitive system - of one’s mind. But, one might wonder, what’s stopping one from considering other brains to be literal components of one’s extended mind too?

There are, indeed, many people who come to rely on the knowledge of others - spouses, friends, etc. - in exactly the same way that they rely on Wikipedia, their iPhones, and their own internal brain structures like the hippocampus. It follows, therefore, that another mind can indeed become a feature of one’s own cognitive system (on the condition that the Parity Principle is true - obviously, one might reject this principle).

But how does this phenomenon relate to the idea that groups themselves can possess a kind of general intelligence? Imagine, for example, a group consisting of three people. Over time, each member of the group comes to rely on the knowledge held by the other two members. The minds of each member are thus gradually extended beyond the arbitrary boundaries of “skin and skull” to include the other individuals in the group. What they end up with, then, are three extended minds, each of which subsumes the entire group.

In addition to these minds, though, there is the cognitive property that Woolley et al. term “collective intelligence.” Such intelligence “emerges” from the interaction of the group’s members, and consequently it too subsumes all three members. Thus: What exactly is the relation between an extended mind that includes a whole group of individuals and the collective intelligence of the whole group itself?* And furthermore, if groups themselves can have cognitive properties like individuals, is there any possibility of extending the “mind” of a group? What would this entail and how might it work?

Obviously, these are research questions involving two cutting-edge research programs (one focusing on group intelligence and one on extended minds). Although I don’t have anything insightful to say in response to these questions right now, articulating them is the first step in any new research project! (What do readers think?)

* Of course, it’s also possible that either one of these ideas - the group IQ hypothesis or the extended mind thesis - is wrong, which would make the conundrum described here a mere “pseudo-problem.” Some problems need only be dissolved, rather than solved.




COMMENTS


I have been pondering the general problem of group intelligence and cognitive enhancement for a little while, and I am confronted with a concern that I would normally brush off as a "gut" reaction to futurism.

I think there is a great deal of evidence that collaborative work and consumption, as powered by telecom and the internet, are at the very least the main reason behind the exponential increase of our knowledge over the last hundred years. Certainly much of that had to do with the ability to share knowledge in much shorter periods of time, if not with actual collaboration between parties who would never have had the ability to meet 100 years ago, much less work together.

So for continuing our advancement and the well-being of our species, cognitive enhancement is a natural and likely inevitable end. In a great many ways, cognitive enhancement would be the single greatest achievement in human history.

My concern is with how it would develop. While it would certainly be most helpful to have the ability to communicate with one another faster and further without the help of external electronic devices, and instead to have a mind more capable of calculation, cognition and communication, eventually we will want to share experiences. While reading a report does allow one to get all the “science” of an experiment, it prevents one from experiencing it. As we continue to unlock the mind, and the ways we can interface with it, I believe it will eventually get to the point where we are able to single out specific memories and access them with 100% reliability. This could be cool, but here are the two things that have been concerning me about the issue of being able to access one’s memories at will.

1: Our minds are made in such a way that trauma and negative experience are slowly buried or forgotten. Our minds do seem designed with self-preservation measures to try to protect our psyche. Now, with a memory that is always accurate and always accessible, what will that do to our minds? My concern is with what our limitations add to our selves. I am unsure of what the world would be like if I didn’t forget things. There are some things we choose to forget, or ignore, or believe despite the evidence. Our emotions do seem somewhat disconnected from our experiences, especially as time goes on. Stockholm Syndrome is a wonderful example: despite the worst possible conditions, a loyalty and an affection grow between a captor and their captive. Are these mechanisms of neural chemistry, or are they not caused by our physical neural infrastructure? While I don’t think anyone is a supporter of Stockholm Syndrome, on subjects like parental affection this may be problematic. This bothers me, and I would ask for help, and thoughts about what that would mean for our enhanced existence.

The second issue that has been haunting me is one of privacy. In a world where I can transmit my memories to others, certainly there would be a vast array of benefits. Imprisoning the innocent would be almost unheard of, science would advance at an unprecedented rate, collaborative culture and consumption would increase, and I believe violence as a species would fall. This all comes, though, at the expense of privacy, and I mean total and complete privacy. What, if anything, could be called more private than one’s memories or experiences? Your first kiss? Your first love? Your first heartbreak? Secrets, shames and prides? Much of our world is based on our private secrets, from the mundane (excuses about tardiness, or polite lies to appease our peers) to the extreme (people who change their life or their name and try to escape their past, or anyone whose work or life depends on information remaining secret). With the ability to share memories, or worse, to forcibly access others’ memories, this wonderful world that enhancement will help us build may be utterly devoid of privacy. A world where nothing is sacred except knowledge, and where you may no longer own your own life. Simply, everyone’s life, everyone’s knowledge and everyone’s experiences may become public domain. This may not be a bad thing, for mankind to become some kind of pseudo-hive mind, or at least a pluralistic neural network, but are we capable and daring enough to forgo one of our most revered rights?

I would really appreciate people’s feedback on this; I am trying very hard to reconcile it with my strong desire to never die and to live in a computer.

Thanks
Gynn (NYC)





Great article and comment points to ponder.

Perhaps the exacting recollection of negative experiences and the elimination of mind privacy would be the least of our worries in a future collective existence. Will the human mind (albeit vastly enhanced in the future) have the emotional and intellectual stability to thrive outside a biological template?





Turn-taking within the group is not an individual feature.

I proved the increase in group intelligence long ago, but it appears that social factors - e.g., bosses not being willing to give up their power to dominate the group - prevented widespread acceptance:

Stodolsky, D. (1987). Dialogue management program for the Apple II computer. Behavior Research Methods, Instruments, & Computers, 19, 483-484.

http://dss.secureid.org/stories/storyReader$21





For me it’s not absolutely certain that a hive mind would necessarily always act as just the melting pot of all the individual minds within it. If an ultraintelligent mind found that there were advantages to not completely coalescing all the minds within it, we might find that individuality - at least to some extent - might actually survive in some form within an intelligence that we would otherwise consider a hive mind.





Atoms don’t seem to lose their ‘individuality’ when forming molecules… might it not work the same for individual people and the shared intelligence(s) they might choose?




