“Here’s the posthuman rub: We are expanding our control into a vast number of realms that we previously had no choice but to submit to, stoically or otherwise.” - Erik Davis,
Take The Red Pill
Interesting times lie ahead for all of us as new technologies (from revolutionary pharmaceuticals to future neuro-modifying nanobots) allow people to alter their internal operating systems at finer and finer levels. We’re already facing the first stirrings of the kinds of deep existential questions that go along with this increased level of control, as a result of the proliferation of everything from antidepressants to drugs with the potential to alter learning processes in Down’s syndrome.
Neurology is in the news a lot these days, and no wonder: our brains are us and we’ve got all sorts of vested interests in understanding ourselves, our neighbors, and any children that might happen to come along. Sometime over the past few years (or decades, perhaps, depending on what technologies you’re looking at), the brain has somehow managed to drift from being the seat of immutable personal destiny to being something far more, and far more quickly, changeable. But changeable in response to what? And to what extent do a person’s “initial conditions” truly affect the choices they end up making, and the metagoals and supergoals held as primary?
Each of us must now deal, on practically a daily basis, with new questions of what it means to be who we are (and how self-modification might figure into that process). In addition, people who desire to become parents, or who are already parents, are facing new challenges in producing and raising members of the next generation.
For instance, there’s a father blogging on Wired right now, in a series entitled, Hacking My Child’s Brain. While I don’t have any ethical problems with sensory integration therapy provided with the intention of helping someone learn to gain better awareness of how the sensory landscape affects them (and how to better cope with overwhelming stimuli), I do think it’s important to focus on how any such therapy might make the child’s life intrinsically better as opposed to how it might “normalize” their responses and reactions to the world and other people.
The father writing this blog says some very reasonable things (he points out how schools sometimes wrongly accuse students with sensory issues of deliberately acting up), but at the same time, he seems to be making some rather odd assumptions regarding how his son experiences the world (e.g., talking about “demanding” eye contact, and equating typical emotional expressions with the actual feeling of a particular emotion, which is not always an accurate pairing—especially when dealing with someone who is atypically-wired to begin with).
Things are likely to get even more convoluted, ethics-wise, once more dramatic means of brain-alteration emerge through tools such as nanotechnology. Chris Phoenix, over at Responsible Nanotechnology, recently wrote an article entitled Exploring Nano-Ethics. While I agreed with the premise and most of the conclusions in this article (i.e., that people shouldn’t need to prove they are “diseased” in some respect in order to have the right to modify themselves in some way), there was one statement that set off my “unexamined bias” flags (emphasis mine):
Incautious or excessive amplification of human traits may lead to situations not dissimilar from drunkenness, mania, or even autism.
The fundamental problem here is, as I see it, in seeking to define autism as a deviation from something that should exist by default, rather than a legitimate configuration in its own right. Autism is not a transitory “state”, it’s more of what I’d analogize to an operating system. Every operating system has its adherents—some people prefer Windows, some prefer Linux, but that doesn’t mean that either operating system is a defective or exaggerated version of the other.
I’m not being a radical relativist here; I am just pointing out that there is such a thing as a set of mutually exclusive yet equally valid configurations. This, of course, is not the same thing as claiming ALL configurations are valid—I don’t want Alzheimer’s or cancer and I don’t know anyone else who does, either, but I know of plenty of people who are fine with being autistic and don’t feel slighted or sad for having been born that way.
Some people, including some of the commenters who responded to me on the Responsible Nanotechnology article, seemed fine with the idea that autistic thought processes and cognition could be functional and even beneficial in some contexts. However, both in that discussion and in others I’ve been involved in at times, there’s a kind of sentiment that autistic cognition is best thought of as a tool for accomplishing certain specialized tasks, rather than a framework people can or should exist within all the time.
While I understand how a person could come to this way of thinking, based on the current cultural climate, I find the argument that autism is somehow bad if it’s “unintentional” to be insulting—just as, say, a black person might find it insulting to hear that, well, black people are fine, but nobody deserves to be born black because that will restrict their freedom somehow from the get-go (and I know that neurology and skin color aren’t really analogous, but attitudes surrounding both certainly are).
It’s a subtle kind of prejudice, and one that many people are probably unaware of having until the dominant culture changes to the degree that it is revealed—try listening to a few old radio programs or reading a few old magazines from the 1950s, and see how women and minorities are talked about. It’s likely much of it would sound very racist and sexist by today’s standards, but it didn’t to most people at the time. I’m certainly not suggesting that we need to try to “preserve” minority groups through the kinds of policies that, say, white-separatist groups would favor—but rather, that whenever an assumption exists that a given group is inferior for whatever reason, this assumption needs very close examination. And people need to be allowed to perform these examinations, and to question fashionable assumptions, without being accused of overzealous and gratuitous “political correctness”.
For example, there seems to be a very pervasive implicit assumption that neurotypicality represents a state of maximum “choice”—when in fact, this is an illusion. The suggestion that autism should never be “unintentional” is, simultaneously, a statement to the effect of, “Only nonautistic people are qualified to determine whether or not being autistic is okay”. Which is, of course, a very patronizing statement, not to mention one quite revelatory of the illusion-of-choice that members of any majority tend to exhibit. When there are a lot of people like you around, it’s a lot easier to see more of your behaviors and preferences as willful and intentional, and to see the behaviors of people less like you as evidence of constraint or pathology.
A hypothetical case in point: You’re doing something (e.g., watching a soccer match) because you want to, but the autistic person is doing something else (e.g., lining up blocks or drawing detailed and realistic pictures) due to “obsession” or a “savant skill”.
I’m not trying to go off on a tangent on “free will” versus determinism (though I consider myself a compatibilist, for the record) but rather, pointing out that before viewing a configuration different from yours as inherently constraining, it is important to look at and try to recognize your own constraints, and not let them become invisible to you just because the dominant culture accommodates them so readily.
Some might respond to my arguments here with the statement that, “if we CAN make someone more ‘normal’, we should, because though there are some positive aspects to certain kinds of differences, differences can also make a person suffer, and it’s not fair to make people suffer just so we’ll have more diversity in the world.”
But what this assumption misses is the fact that each and every person has challenges in life, and that it is really only a false sense of security and a desire for certainty that causes a person to shudder less at the idea of typical challenges (finding a prom date, resisting peer pressure to binge-drink at a college party) than at atypical ones (learning to manage hypersensitive hearing, coping with not producing or readily interpreting standard body language).
I imagine a transhuman/posthuman culture of tremendous diversity and tremendous accommodation of, and recognition of, different ways of being. Certainly, it is best for people to be able to self-determine to the greatest degree possible—and I agree that caution is in order when embarking on experimental journeys of self-modification and cognitive engineering—but I also think that we need to be very wary of bias, and avoid attempting to define a single “best possible baseline state” from which to start these journeys.