All Kinds of Potential Minds
Anne Corwin
2006-12-04 00:00:00

While recognizing that each existing person is quite unique even in the midst of an assumed-normative range of neurological and biological characteristics, it does seem to be true that most people hold some personal notion regarding the nature of a "normal" mind, and that most people have difficulty recognizing the potential validity of minds that differ sufficiently from their own. Some of this is probably rooted in the lack of a coherent definition of "mind" in the first place (or at least, of a widely-accepted, concise, parametric definition) -- that is, a person might avoid classifying a given phenomenon they encounter as a "mind" not due to any kind of prejudice or lack of imagination, but because they simply don't know what sorts of phenomenological data point to a mind "being there" at all.

The question, from the perspective of someone encountering a phenomenon that could potentially fall into the category of "mind", commonly ends up being a rather utilitarian one: if a mind exists in one's vicinity, different obligations are owed to that mind than would be owed to an inanimate object (though it should be recognized that there could very well be minds in the "potential mind-space" that would actually prefer to be treated as one might treat a toaster -- we simply don't know!). A mind-possessing entity will, conceivably, have some set of native demands; at the very least, it will require resources -- material, energy, informational, what have you.

So, when the "known" mind (say, a present-day human person) encounters an unfamiliar phenomenon, the question of whether this phenomenon has a mind, or is a mind, becomes pertinent, particularly when the human is faced with issues of resource scarcity (perceived or actual) or information need (since the identification of a particular thing as a mind could end up being tremendously important if, say, the human is in the midst of attempting to solve a problem that the local set of available minds has found too perplexing to make much headway on).

Humans, at present, do not have much in the way of sophisticated means of (a) detecting minds, or (b) establishing the validity of an identified mind. Humans tend to rely not only on behavioral signals, but on typical behavioral signals -- many of which can be culture-typical.

For instance, an American who makes little eye contact might be considered "shifty" or untrustworthy -- or possibly autistic, and many nonautistic humans have difficulty acknowledging the validity of autistic cognition. However, a person who makes little eye contact but who is born into a culture where direct eye contact is considered rude or presumptuous would not stand out, even if his or her instinctual predilection against making eye contact is rooted in neurological factors.

If the variation in behavior is limited in this example to that of eye contact (or lack thereof), it seems quite clear that while a mind might be recognized in both cultures as existing, that mind will only be considered fully valid in the culture where direct eye contact is not the social norm.

When you add in other behavioral variables -- responsiveness (or lack thereof) to verbalizations from other persons in the vicinity, presence or absence of typically-communicative speech, apparent ability to carry out what are considered "basic" life skill functions -- you end up with a very complex set of external factors that are commonly used in the assessment of a person's inner life, even when they may have little to no correlation with how the person actually perceives and experiences reality. It amazes me that people who have no problem believing that they'd be able to recognize a manifestation of artificial intelligence could have trouble accepting the idea of a person who does not speak or look anyone in the eye, but who is also fully capable of experiencing a complex inner life.

If AI were to emerge "inside a computer", would this AI only be recognized as valid if it created an avatar that looked human, and that provided culture-typical human responses (perhaps in the context of a two-way conversation, in which the avatar on the screen interacted with the programmer as one might with someone on a video-conferencing system)? Or, is it critical (as I believe it is) for humans to avoid narrowing the scope of "potential mind-space", perhaps to the exclusion of a myriad of valid minds, some of which might communicate in ways that would not be recognized as typical?

One issue I find quite intriguing is the question of whether the existence of a mind necessarily means that that mind is capable of communication with other minds -- and if so, how can we work on sorting deliberate communication from random (or perhaps, more properly, pseudorandom) phenomena?
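To make the "deliberate versus pseudorandom" question a bit more concrete, here is a toy sketch of my own (not anything proposed in the essay): one crude, well-known proxy for structure in a signal is compressibility, since patterned data compresses well while random noise does not. The sample "signals" below are invented for illustration, and of course low compressibility would not, by itself, demonstrate the presence of a mind -- only of statistical regularity.

```python
# Toy illustration: compressibility as a crude proxy for structure in a signal.
# This sketch assumes nothing beyond the Python standard library.
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size; lower means more structure."""
    return len(zlib.compress(data)) / len(data)

random.seed(0)

# A pseudorandom "signal": incompressible, ratio near 1.0.
noise = bytes(random.randrange(256) for _ in range(4096))

# A highly patterned "signal": compresses dramatically, ratio far below 1.0.
patterned = (b"hello-world " * 342)[:4096]

print(f"noise ratio:     {compression_ratio(noise):.2f}")
print(f"patterned ratio: {compression_ratio(patterned):.2f}")
```

The limits of this heuristic mirror the essay's point: a genuinely communicative signal encoded in an unfamiliar way could easily look like noise to such a test, just as an atypical mind can look like "no mind" to behavior-based assessment.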

A less behaviorally-based means of determining the presence and nature of minds is imperative -- or at least, the spectrum of what is to be considered purposeful and/or meaningful behavior must be widened considerably. This doesn't mean assuming that everything an entity does is purposeful or indicative of a mind (or a valid mind), but rather, seeking out and acknowledging "proof of concept" examples of cases in which existent minds do not exhibit the behavioral manifestations typically expected to correspond with a mind of a particular complexity.