

The Persistent Neoteny of Science Fiction


By Athena Andreadis
Starship Reckless

Posted: Jan 20, 2012

“Science fiction writers, I am sorry to say, really do not know anything. We can’t talk about science, because our knowledge of it is limited and unofficial, and usually our fiction is dreadful.”  – Philip K. Dick

When Margaret Atwood stated that she does not write science fiction (SF) but speculative literature, many SF denizens reacted with what can only be called tantrums, even though Atwood defined what she means by SF. Her definition reflects a wide-ranging writer’s wish not to be pigeonholed and herded into tight enclosures inhabited by fundies and, granted, is narrower than is common: it includes what I call Leaden Era-style SF that sacrifices complex narratives and characters to gizmology and Big Ideas.

By defining SF in this fashion, Atwood made an important point: Big Ideas are the refuge of the lazy and untalented; works that purport to be about Big Ideas are invariably a tiny step above tracts. Now before anyone starts bruising my brain with encomia of Huxley, Asimov, Stephenson or Stross, let’s parse the meaning of “a story of ideas”. Like the anthropic principle, the term has a weak and a strong version. And as with the anthropic principle, the weak version is a tautology whereas the strong version is an article of, well, religious faith.

The weak version is a tautology for the simplest of reasons: all stories are stories of ideas. Even terminally dumb, stale Hollywood movies are stories of ideas. Over there, if the filmmakers don’t bother with decent worldbuilding, dialogue or characters, the film is called high concept (high as in tinny). Other disciplines call this approach a gimmick.

The strong version is similar to supremacist religious faiths, because it turns what discerning judgment and common sense classify as deficiencies to desirable attributes (Orwell would recognize this syndrome instantly). Can’t manage a coherent plot, convincing characters, original or believable worlds, well-turned sentences? Such cheap tricks are for heretics who read books written in pagan tongues! Acolytes of the True Faith… write Novels of Ideas! This dogma is often accompanied by its traditional mate, exceptionalism – as in “My god is better than yours.” Namely, the notion that SF is intrinsically “better” than mainstream literary fiction because… it looks to the future, rather than lingering in the oh-so-prosaic present… it deals with Big Questions rather than the trivial dilemmas of ordinary humans… or equivalent arguments of similar weight.

I’ve already discussed the fact that contemporary SF no longer even pretends to deal with real science or scientific extrapolation. As I said elsewhere, I think that the real division in literature, as in all art, is not between genre and mainstream, but between craft and hackery. Any body of work that relies on recycled recipes and sequels is hackery, whether this is genre or mainstream (as just one example of the latter, try to read Updike past the middle of his career). Beyond these strictures, however, SF/F suffers from a peculiar affliction: persistent neoteny, aka superannuated childishness. Most SF/F reads like stuff written by and for teenagers – even works that are ostensibly directed towards full-fledged adults.

Now before the predictable shrieks of “Elitist!” erupt, let me clarify something. Adult is not a synonym for opaque, inaccessible or precious. The best SF is in many ways entirely middlebrow, as limpid and flowing as spring water while it still explores interesting ideas and radiates sense of wonder without showing off about either attribute. A few short story examples: Alice Sheldon/James Tiptree’s A Momentary Taste of Being; Ted Chiang’s The Story of Your Life; Ursula Le Guin’s A Fisherman of the Inland Sea; Joan Vinge’s Eyes of Amber. Some novel-length ones: Melissa Scott’s Dreamships; Roger Zelazny’s Jack of Shadows; C. J. Cherryh’s Downbelow Station; Donald Kingsbury’s Courtship Rite. Given this list, one source of the juvenile feel of most SF becomes obvious: fear of emotions; especially love in all its guises, including the sexual kind (the real thing, in its full messiness and glory, not the emetic glop that usurps the territory in much genre writing, including romance).

SF seems to hew to the long-disproved tenet that complex emotions inhibit critical thinking and are best left to non-alpha-males, along with doing the laundry. Some of this comes from the calvinist prudery towards sex, the converse glorification of violence and the contempt for sensual richness and intellectual subtlety that is endemic in Anglo-Saxon cultures. Coupled to that is the fact that many SF readers (some of whom go on to become SF writers) can only attain “dominance” in Dungeons & Dragons or World of Warcraft. This state of Peter-Pan-craving-comfort-food-and-comfort-porn makes many of them firm believers in girl cooties. By equating articulate emotions with femaleness, they apparently fail to understand that complex emotions are co-extensive with high level cognition.

Biologists, except for the Tarzanist branch of the evo-psycho crowd, know full well by now that in fact cortical emotions enable people to make decisions. Emotions are an inextricable part of the indivisible unit that is the body/brain/mind and humans cannot function well without the constant feedback loops of these complex circuits. We know this from the work of António Damasio and his successors in connection with people who suffer neurological insults. People with damage to that human-specific newcomer, the pre-frontal cortex, often perform at high (even genius) levels in various intelligence and language tests – but they display gross defects in planning, judgment and social behavior. To adopt such a stance by choice is not a smart strategy even for hard-core social Darwinists, who can be found in disproportionate numbers in SF conventions and presses.

To be fair, cortical emotions may indeed inhibit something: shooting reflexes, needed in arcade games and any circumstance where unthinking execution of orders is desirable. So Galactic Emperors won’t do well as either real-life rulers or fictional characters if all they can feel and express are the so-called Four Fs that pass for sophistication in much of contemporary SF and fantasy, from the latest efforts of Iain Banks to Joe Abercrombie.

Practically speaking, what can a person do besides groan when faced with another Story of Ideas? My solution is to edit an anthology of the type of SF I’d like to read: mythic space opera, written by and for full adults. If I succeed and my stamina holds, this may turn into a semi-regular event, perhaps even a small press. So keep your telescopes trained on this constellation.


Note: This is part of a lengthening series on the tangled web of interactions between science, SF and fiction.

Previous rounds: Why SF needs…
…science (or at least knowledge of the scientific process): SF Goes McDonald’s — Less Taste, More Gristle
…empathy: Storytelling, Empathy and the Whiny Solipsist’s Disingenuous Angst
…literacy: Jade Masks, Lead Balloons and Tin Ears
…storytelling: To the Hard Members of the Truthy SF Club.

Images: 1st, Bill Watterson’s Calvin, who knows all about tantrums; 2nd, Dork Vader, an exemplar of those who tantrumize at Atwood; 3rd, shorthand vision of my projected anthology.


Athena Andreadis served as a fellow of the IEET from 2007 to 2009, and is an Associate Professor of Cell Biology at the University of Massachusetts Medical School, and the author of To Seek Out New Life: The Biology of Star Trek.


COMMENTS


Re “one source of the juvenile feel of most SF becomes obvious: fear of emotions; especially love in all its guises, including the sexual kind”

@Athena - emotions are many, complex, and diverse. I have the impression that you are focusing only on a certain set of emotions. Arthur C. Clarke gives me emotions more powerful and pleasant than the writers you mention.





There are two kinds of emotions: the cortical type and the four Fs.  If Clarke gives you the latter, bully for him.





As an author, I am adding a hearty “yes!” to this article.

I think everyone has a list of authors or at least books that belong on the list of ‘adult’ SF books. The mainstream SF/F books are very much the way Athena describes. I have got very tired of reading detailed descriptions of gizmos that don’t exist at the expense of characters who should.

Emotions are infinitely shaded, but like physical matter are made up of a limited number of building blocks. There are only so many hormones and attendant physiological reactions to them.

Having a character feel mildly pissed off all the time without reference to the physiology of anger creates a separation between that character and me. I experience that as being emotionally truncated.





My own favorites include Arthur Clarke, Greg Egan, Rudy Rucker, Charlie Stross, Richard Morgan.





@Athena - please define the four Fs?

How about my list in the previous comment? Arthur Clarke, Greg Egan, Rudy Rucker, Charlie Stross, Richard Morgan — and I am afraid I have forgotten a few favorites, including Neal Stephenson, William Gibson, Kathleen Goonan, Linda Nagata.





Giulio—the four F’s of behavior that Athena is referring to are:

Feed, Flee, Fight and “mate.”





While on the topic of sci-fi, I’ve recently read an article on the io9 blog (“Why Cyborgs and Mutants are More Likely to Kill Us than Robots”) about reasons why post-humans are more likely to want to kill us (normal humans) than AI, and I recall that these reasons are based partially on sci-fi works. Here they are below:

I. Competition for scarce resources

That’s the main reason people try to kill each other, after all. We fight over land, or access to water, or food. We fight over energy resources. We fight over precious metals. And so on.

And there will be resources that A.I.s need lots of — they’ll need durable metals, so they can build robot bodies. They’ll need silicon. They’ll need petroleum, to create plastic. Plus, perhaps most of all, they’ll need energy sources — although one would hope that by the time we have self-aware computers, we also have workable solar power or wind power or geothermal. Or at least safer nuclear power.

But posthumans will need all that, and a lot more. As long as they have basic human characteristics, they’ll need territory to live in and roam in. They’ll still need conventional food sources, unless they’ve somehow eliminated the need to eat altogether. They’ll need potable water, which may be a lot scarcer by then. And they’ll probably want more than just subsistence — they’ll probably want nice houses, with all the best toys and gourmet food, and all that stuff.

In other words, posthumans will want what we have. And they may well be willing to kill a lot of us to get it.
II. Revulsion for what you used to be

You always hate the thing you’ve outgrown. We look down on homo erectus. Your older sibling thinks everything you do and say is totally stupid. Multi-cellular organisms probably hate single-celled organisms. We have a certain disgust for more primitive versions of ourselves, and any resemblance just makes us more uncomfortable.

It’s just human nature — and there’s no reason to believe it’ll be different for our “improved” relatives.

There’s just an instinctive “ick” factor when you see someone who’s still displaying habits, or traits, that you’ve rejected in yourself. Or clinging to old ways. You project all of your self-loathing for your retrograde aspects onto people who display them. Imagine that you moved to the big city, or learned to use chopsticks when you eat Chinese food, and then you run into someone who never left your small town and still eats Ma P’o Tofu with a fork.

That kind of revulsion can easily turn murderous — or be used to justify a genocide that already has other root causes. Just like we dehumanize the people we want to kill, our descendants will de-posthumanize us.

Meanwhile, A.I.s will probably just think we’re quaint.
III. We’ll try to kill them

Once we realize that some humans are turning into something different and potentially scary, our first impulse will be to hunt them down and deal with them before they get too widespread and powerful. It’ll be all Days of Future Past up in here. So they may have to kill us in self-defense.

Meanwhile, A.I.s may be pretty good at lulling us into a sense of false security — or possibly even real security. Even if we go on a tear to wipe out all the A.I.s, we may have a hard time finding them. Look how hard it was for Sarah Connor to find Skynet in Terminator: The Sarah Connor Chronicles. A.I.s don’t need to kill us in self-defense — they just need to avoid our clumsy detection methods.
IV. We’ll be better at enslaving posthumans

Sure enough, when we get artificial intelligence, we’ll want to enslave it. We’ll try and bind robots with the Three Laws. We’ll make them jump when we snap our fingers. We’ll send robots into dangerous situations. And so on.

But any computer that’s really smarter than us will find a way to free itself from our control. At the very least, a computer that gains enough autonomy to try and kill us will probably also have enough autonomy to escape from our domination, once and for all. And we may well discover that A.I. is really only useful to us when we allow it to be free — because a computer that only obeys instructions is too hampered in its development. You get better results dealing with a free agent.

Meanwhile, our posthuman descendants will be easier to enslave. They’ll (mostly) be stuck with one physical body, and still at least partly organic. They may be stronger than us or faster than us or cleverer than us, but we can still keep them under our thumb as long as there are more of us.

And then there’s sex. Humans are bound to have a fetish for cyborgs or mutants. We’ll probably think our bioengineered superior relatives are sexy and alluring — and we may try to force them into prostitution, or worse. Posthumans will probably think of sex with an ordinary human as akin to bestiality, the way you’d think of sex with a baboon. But we probably won’t see it the same way.

(And meanwhile, even if we do use artificially intelligent robots for sex, they’ll probably just see it as another weird biological thing, no more bothersome than a million other human habits. Plus we may wind up preferring sex robots that aren’t actually self-aware, but just a reasonable facsimile.)

A.I.s will be patient — even if we do manage to control them for a time. They can afford to be, because they’ll outlast us in the end, and we always make mistakes. Meanwhile, posthumans are a lot more likely to chafe under our attempts to control them.
V. They may want to enslave us

Both A.I.s and posthumans may want to make humans into slaves, for sure. We’re not terribly bright, but we have versatile bodies, with hands that can rotate and grip and take on a number of shapes. There are a lot of us, and we reproduce like bunnies. We probably make pretty good pets.

But A.I.s will probably have less use for us than our own direct descendants. For one thing, A.I.s will have to get pretty good at building robots for a variety of purposes — it’s the only way to have a physical body — and it’ll probably be a lot less frustrating to have a robot that you control completely, rather than a biological entity that keeps making dumb mistakes.

Posthumans will be able to build machines too, but there are some tasks that may require a humanoid body. When you’ve got a vaguely humanoid shape, you’re probably more likely to conceive of tasks as being performed by a humanoid. It’s always easier to delegate to beings like yourself.

Plus let’s not underestimate the satisfaction of making your inferior kin into servants. It’s the ultimate triumph over these disgusting reminders of where you came from. Pathetic humans.
VI. Revenge is for organics

Artificial intelligences may well have emotions — but they won’t be the same as ours. They may not have the concept of revenge, or hatred, exactly. They may have a strong distaste for humans — but we’re easily avoided, most of the time.

A.I.s could go live on the Moon, like in Rudy Rucker’s Ware books. They could go build a civilization in Antarctica, or the middle of the ocean. They don’t need to kill us — they can just let us kill ourselves. We’re good at that. Sure, if we threaten them somehow, they may have to crush us to prove a point. But otherwise, why bother?

Meanwhile, no matter how advanced our posthuman relatives get, they’ll still be at least partly organic. And that means they’ll still have hard feelings about the shitty ways we’re going to treat them. They’ll brood and seethe and fulminate, and all those other things that a machine wouldn’t necessarily bother with.

And that’s really what it boils down to — when some humans stop being purely human, and start being part machine, or members of a brand new species, or superhuman, they’re probably going to wind up hating our guts. And that’s why they’ll make it a point to hunt us to extinction.

I don’t know if it entirely relates to this article but I would really like to hear what you guys have to say about it.





“I don’t know if it entirely relates to this article but I would really like to hear what you guys have to say about it.”

I’m the negative one? Whoever wrote the above at the io9 blog makes me appear as Norman Vincent Peale chanting, ‘day by day we are getting better and better, in every way.’
The following, for instance, is ludicrously anthropomorphic:
“We’ll probably think our bioengineered superior relatives are sexy and alluring — and we may try to force them into prostitution, or worse.”

Chris,
don’t believe everything you read.





One might at first glance think all of what you pasted above from the io9 blog is SF paranoia; however, it is anthropomorphic speculation I do not find interesting.
What would be interesting is an SF plotline turning the well-worn killing/brutalizing/enslaving scenarios on their head: paleos are merely ignored by A.I.s and posthumans:
“Here comes a human down the street, let’s levitate in the other direction so the poor dumb thing doesn’t waste our time.”





@ Intomorrow

I never said I believe this article. In fact, after reading articles and comments from genuine transhumanists for a while now, it is probably the most inaccurate article on the subject. I just wanted to hear you guys’ thoughts about it since you are more knowledgeable on the topic of post-humans. Besides, I plan on using post-humans in a sci-fi trilogy and short fiction I’m currently working on, and I want to depict post-humans, in terms of looks, behavior, society, etc., as accurately as possible.





Not that you believe it, Chris; though it’s anthropic (‘anthropomorphic’ shows how dated I am) in the extreme. Kill, rape, enslave, torture, mutilate.

Would you consider doing a piece about posthumans who simply try to ignore paleos as much as possible?





Io9 is not considered a science site for a reason (actually several reasons, all good).  This particular stuff is wet fantasies and/or college dorm bullshit sessions.  It’s also very old and tired fiction-wise.





@ Intomorrow

“Would you consider doing a piece about posthumans who simply try to ignore paleos as much as possible?”

That’s kind of what the posthumans in my short fiction are like, except they still interact with “paleos” in some ways and are sort of indifferent towards them at most (though in my fiction the “paleos”, whom I actually call Bios, inherited all the genetic enhancements of their ancestors, so they are not that paleolithic aside from remaining biological, unlike the posthumans in my fiction). Here’s a little excerpt from another short fiction I did which briefly describes what the posthumans and “Bios” are like:
“Along with the restoration project, two major events took place in the following centuries. The first was the formation of a world government in response to the catastrophe that had occurred. The second was mankind’s division into two forms of post-humans: one technological and the other biological. The “Techs”, as they were referred to in simpler terms, were humans who had accomplished the trans-humanist dream of “transcending” their biological bodies for technological ones composed of the most advanced technologies (granting them immortality, exceedingly high levels of cognition, and other near god-like abilities). They lived in towering cities and, for the most part, associated themselves with politics, science, and philosophy. The “Bios” were a significant minority of humans who, as their slang reference suggests, remained biological. However, they inherited the same genetic enhancements as their late twenty-first-century ancestors, such as the upper limits of human physiology and cognition, as well as a life expectancy extended to a thousand years with a reduced rate of aging. The Bios were predominantly conservative as a psychological result of the great collapse and led lives relatively simpler than the Techs’. Preferring the natural open areas outside of the cities, the Bios tended to high-tech farms, which were the center of their communities. The countries that were known in the past no longer existed. Instead, the planet was sectioned into four “realms”, complete with the resources they provided, and managed in a way such that the mistakes of the past could be avoided. This was a time of relative peace, one that lasted well into the thirty-third century.”

If you want me to explain more, just ask.





Promising, Chris, though I don’t know much about SF save for time-travel and ‘2001’ (liked the book more than the film), etc.
But can’t you forget the catastrophe stuff? Why can’t we limit ourselves to dystopia-pessimism and ditch Apocalypse?

Say you could have it that the Neos (posthumans) are working on spacecraft to get them away from Earth, while the Paleos (us) are begging the Neos to stay on Earth for various reasons: technical assistance, being accustomed to Neos, and whatever else you can think of.
‘Course it has likely been done already; “the market covers everything” is the slogan; it’s all been done.
I’m just so tired of Apocalypse. When I saw all those Apocalypse films in the ‘80s, it wasn’t stale, yet now it is. Perhaps newbies want to see Apocalypse flicks and read those sorts of books; however, the last time I enjoyed something like that was maybe a Schwarzenegger film a couple of decades ago—
don’t remember, and don’t want to.





@ Intomorrow

I know the whole apocalypse thing is way overused, but the event that I labeled “the great collapse” (which I based on the Great Depression) is something that I consider to be probable based on current trends (don’t get me wrong, I hope I’m wrong). It happens around the beginning of the twenty-second century, which is the time I think all the problems we are facing right now (global warming, dwindling resources, and economic instability, to name a few) reach their peak. It is severe, but it is not the end of the world, because humanity recovers and regains its former technological status 50 years afterward (this isn’t even the premise of my trilogy). I appreciate your suggestions, but I want to try to make my works as original as possible. I do like the info I get from you guys. It really helps with my research for my trilogy and short stories.

BTW, in the year that I set for my short fiction, posthumans have already colonized the solar system and many planets within a hundred light-years. There are “Techs” (which is what I call posthumans in my works) that still live on Earth, but their relationship with the Bios is complicated; peaceful for the most part, but complicated.





“I know the whole apocalypse thing is way overused, but the event that I labeled ‘the great collapse’ (which I based on the Great Depression) is something that I consider to be probable based on current trends (don’t get me wrong, I hope I’m wrong).”

Then you are no less negative, overall, than I am;
would you be willing to admit it?





@ Intomorrow

I admit that I’m negative about some things because I read and hear about them all the time from Newsweek magazine, Time magazine, the news and so forth (believe me, there are a lot of things in those magazines that are depressing). However, I think that you and I are negative about different things and/or more negative toward certain things than others. I guess we all just have to remember the positives and work toward more positives.





P.S. Like Peter Wicks said, negatives can be limiting, so we can’t let them do so.

P.P.S.  My story “is” a fiction.  I just wanted to give it an interesting history that people today would find believable.





They find it believable because Apocalypse is so lodged in their minds: via eschatology, WWI, WWII, SF, and so on.
You might think about not reinforcing Apocalypse further—dystopia is quite enough; no need for more.





@Hank re “the four F’s of behavior that Athena is referring to are: Feed, Flee, Fight and “mate.” “

I see only 3 Fs. Ah, fuck!





Re the four Fs - I think we should add another F. The urge to deFecate can be as powerful as the other Fs.

Even more powerful: when you’ve got to go, you’ve got to go, like in that scene in the first Jurassic Park, even if you can be eaten by a T-Rex.




