The Transhuman World
David Eubanks
2014-11-16

Ubiquitous Interconnected Smart Machines

[…] High Frequency Traders aggressively trade in the direction of price changes […and…] may compete for liquidity and amplify price volatility.

(Kirilenko, Kyle, Samadi, and Tuzun 2011)

Newton’s theory of gravity was initially ridiculed for its “action at a distance” mysticism, particularly by those who were beginning to see the universe as a mechanical analog, built from atoms that kept causes close to effects (Kearney 1971). It was the clockwork philosophy of Galileo and many others that led to much of the technology we now take for granted, ultimately co-opting Newton’s ideas. And so a world of machines emerged: big clacking iron things that led to microscopic silent ones, and these have now begun to whisper among themselves.

Whatever a transhumanist is, xe will live in a world that resembles the dark ages in one respect: mystical action-at-a-distance occurs due to unknown motivations (Eubanks 2013). Xe will have access to unbounded information, but won’t know why xer toaster oven turns itself on at random intervals. The explanations from the experts will be caught in the clockwork trap of tracing the logical dominoes that fall in order: Well, your oven was subject to a zero-day exploit that corrupted the BIOS. There must be something wrong with your firewall. Such explanations leave unanswered the questions of the motivation of the person behind the attack and its point of origin, and, like a medieval lightning strike, it may as well be thought of as divine punishment. Let’s look at an example.

You’re on a freeway in a self-driving car (a true automobile, in other words) from a service. Because it’s your cakeday, you’ve purchased privileged access to the second lane. The rightmost (American-style) lane is the “commodity” road: everyone who pays the base entry rate is guaranteed access, but it’s crowded and slow. The second lane has limited traffic density, and the cars bid up the price for access depending on demand.

This happens invisibly and nearly instantly, and your access is based on the upper price limit you’ve set for the trip. The third lane is too expensive for you, and you cast an envious glance at the cars speeding by on your left. As you do, the car you’re traveling in slows to a crawl and then merges back into the commodity lane. You notice that all the other lane-two traffic seems to be doing the same, leaving a ribbon of uninhabited highway between lanes one and three.

Your heads-up display shows the cost of lane-two travel is now astronomical, despite no one being on the road. You pull up local news feeds for #I485sucks and see a stream of complaints already emerging. One of these leads you to #flashcrash, and you find yourself immersed in the generic problems of autonomous user agents engaged in high-frequency bidding. Some algorithm may have gone screwy, or a transit company may be trying to drive up the cost in order to cash in a derivatives bet, or it may just be a random emergent property of the game-theoretic dynamics. After an hour of reading, still inching along in the commodity lane, you don’t know what the cause is, where it came from, or whether there was a motivation behind it. You never will. A purely mystical explanation is as good as any.
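The momentum feedback at the heart of such an event is easy to sketch. The toy model below (every parameter is hypothetical) captures the Kirilenko et al. observation quoted above: agents that bid in the direction of the last price change either damp a shock or amplify it into a runaway spike, depending only on how aggressively they chase it.

```python
# Toy flash-crash dynamic (hypothetical parameters): agents collectively bid
# some multiple of the last price change, in the same direction.

def simulate(multiplier, steps=15, shock=1.0, start=100.0):
    """Return the price after `steps` rounds of momentum bidding.

    `multiplier` is the aggregate aggressiveness of the agents: each round
    they push the price by `multiplier` times the previous round's move.
    """
    price, change = start, shock
    for _ in range(steps):
        change = multiplier * change   # agents chase the last move
        price += change
    return price

# Below the critical value 1.0, momentum bids damp the shock out...
calm = simulate(multiplier=0.5)    # stays within a point of the start
# ...above it, the identical rule amplifies the shock without bound:
spike = simulate(multiplier=2.0)   # "astronomical" pricing, empty lane
print(round(calm, 2), round(spike, 2))
```

Nothing in the amplifying run reflects real demand, which is why the lane can be empty while the posted price explodes; the tipping point is a property of the agents' interaction, not of any single algorithm.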

The predicted saturation of smart machines is being called “The Internet of Things” (Wasik 2013), and the utopian vision is that this distributed sensing and reporting network will improve our lives by adding the magical ingredient of “big data mining.” It also has the potential to delocalize cause and effect for everything we entrust to technology, and to create shadow economies with complexities of resource and competition that no one will understand.

Continuous Classification

(It) is as if some cynical genius had designed a huge complex penal colony in the sunshine, eliminating the need for guard towers and barbed wire by merely beaming a gigantic electronic message at the inmates, day and night. You are in heaven! Be Happy!

(MacDonald 1964)

We can think of survival (of an individual, an idea, a society, etc.) as the problem of transmitting information through a noisy channel (von Neumann 1966). Reversing this, we can make an evolutionary argument that the aspects of our universe that survive are those particularly good at transmitting information through time and space, converging on the error-correction mechanisms we find in nature. For example, stable atoms have a permanence that transmits their properties through time, so building survival-machines out of atoms (as opposed to, say, electrical fields) is a good start. In biology we see a combination of naturally stable constructions and active error correction.

The latter is the more interesting for our purposes here, and I will divide active error correction into two types: Am-I-Me and Are-You-Me classification tests. An example of the first is cell apoptosis: a self-destruction triggered by failure to pass a self-check of genetic fidelity, a biological kin to the humble “parity bit.” An example of Are-You-Me is an organism’s immune system, which has the informational challenge of distinguishing self from other. We can easily imagine that the world of the transhuman contains local and delocalized mechanisms for Am-I-Me and Are-You-Me for the benefit of the larger society (really for the continued survival of bio-mechanical super-organisms).
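The parity-bit analogy can be made literal. In the sketch below (the apoptosis framing is of course a metaphor; the parity check itself is the standard technique), a record carries one redundant bit computed from its own contents and destroys itself when a fresh computation disagrees:

```python
# Am-I-Me in one bit: a record stores the parity of its own data and
# self-destructs (apoptosis) when a recomputation no longer matches.

def parity(bits):
    """Return 1 if an odd number of bits are set, else 0."""
    return sum(bits) % 2

def make_cell(bits):
    """Create a record that remembers the parity of its contents."""
    return {"bits": list(bits), "parity": parity(bits)}

def am_i_me(cell):
    """The self-test: does stored parity still match the data?"""
    return parity(cell["bits"]) == cell["parity"]

cell = make_cell([1, 0, 1, 1])
healthy = am_i_me(cell)      # True: the record passes its self-check

cell["bits"][2] ^= 1         # a single-bit "mutation"
corrupted = am_i_me(cell)    # False: failed check triggers destruction
print(healthy, corrupted)
```

Like its biological counterpart, the check is imperfect: a parity bit catches any odd number of flipped bits but is blind to an even number, so fidelity is probabilistic, not guaranteed.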

An example is an externalized immune system that works at the system level, like an extension of the World Health Organization. A person who has a dangerous infectious disease fails the society’s Are-You-Me test. Infection could be detected with on-person sensors that constantly stream heart rate, temperature, O2 level, and other biometrics to a central classifier. More intrusive mechanisms could actually watch proteins in the bloodstream. This enables seamless quarantine: tagging suspect individuals, restricting their movement, and prescribing interventions. Because it’s all information, we can extend this biological example to idea-space, where a classifier constantly updates your Are-You-Me status (from 0 to 1, say) and takes action if you look like an ideological threat.
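A toy version of that central classifier might look like the following; every feature name, weight, and cutoff here is invented for illustration, not drawn from any real system. Streamed vitals map to a 0-to-1 “otherness” score, and crossing a cutoff triggers the quarantine tag:

```python
# Toy Are-You-Me score (all features, weights, and cutoffs are invented):
# deviations from nominal vitals push a logistic score from 0 (self)
# toward 1 (other); crossing the cutoff triggers a quarantine tag.
import math

QUARANTINE_CUTOFF = 0.5

def are_you_me_score(reading):
    """Map one biometric reading to a 0..1 suspicion score."""
    z = (3.0 * (reading["temp_c"] - 37.5)       # fever
         + 0.08 * (reading["heart_rate"] - 75)  # elevated heart rate
         + 0.5 * (94 - reading["spo2"]))        # low blood oxygen
    return 1.0 / (1.0 + math.exp(-z))

def classify(reading):
    score = are_you_me_score(reading)
    return "quarantine" if score > QUARANTINE_CUTOFF else "pass"

healthy = {"temp_c": 36.8, "heart_rate": 62, "spo2": 98}
febrile = {"temp_c": 39.5, "heart_rate": 110, "spo2": 91}
print(classify(healthy), classify(febrile))
```

Extending the same scheme to idea-space only requires swapping the biometric features for behavioral ones; the thresholding machinery is identical.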

Classifiers can work at a global or national scale, but there are many examples at the level of consumer products and services: devices that won’t accept replacement parts from competitors (Munarriz 2014) and new products that intentionally degrade older models (Rampell 2013). We use the term ‘ecology’ for services from companies like Microsoft, Google, and Apple with good reason.

Classifiers make errors that are either false positives (an auto-immune response) or false negatives (allowing a real threat to go undetected). The overall accuracy will depend on the intelligence behind the classifier and on how frequently the threat appears. Detecting and interdicting jaywalking is an easy problem. Preventing an undefined terrorist act is a hard problem. Hence the rationale that the classifier can never have enough information. We already see this in the vacuum-cleaner approach governments take to data collection on ordinary citizens. The point is that our transhumanist can expect that xer life is transparent to Are-You-Me systems and that even an attempt to hide information is seen as a threat.
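The difference between the easy and the hard problem is largely base rate, and the arithmetic is worth doing once. With hypothetical round numbers: a classifier that is 99% sensitive with a 1% false-positive rate, hunting a one-in-a-million threat, flags millions of innocents for every real hit.

```python
# Base-rate arithmetic (all numbers are hypothetical round figures):
# even an accurate classifier hunting a rare threat is mostly wrong.

population = 300_000_000
true_threats = 300                  # one in a million
sensitivity = 0.99                  # P(flagged | actual threat)
false_positive_rate = 0.01          # P(flagged | innocent)

flagged_threats = sensitivity * true_threats
flagged_innocents = false_positive_rate * (population - true_threats)

# Precision: the chance a flagged person is actually a threat.
precision = flagged_threats / (flagged_threats + flagged_innocents)

print(f"innocents flagged: {flagged_innocents:,.0f}")
print(f"P(threat | flagged): {precision:.6f}")   # about one in ten thousand
```

Driving the false-positive rate down is what motivates collecting ever more features per person: the “never enough information” rationale in action.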

More insidious is the institutionalization of Am-I-Me systems in hardware, software, or wetware. Lenin said “Give me the child for eight years and it will be a Bolshevik forever.” Education and intentional interfaces can create a generation that fears the isolation of not constantly streaming every thought and act to the Great Classifier for instant approval (Halpern 2014).

We already see examples of how such classifiers quickly become institutionalized and breed “helper systems.” Consider the US tax code. Even for individual taxpayers (i.e. not corporations), the rules are hugely complex. Commercial services that help construct rule-compliant filings guide the taxpayer and lift a burden from the IRS, which then has fewer errant returns to fix. Extrapolating to a time when all individual actions are observed, recorded, and parsed for rules violations, it will be essential to have real-time “helper” applications that restrain individual behavior to within limits acceptable to the Classifier.

Imagine driving on the public roads nowadays and having each of your actions compared instantly to the traffic laws, with a ticket incurred for every violation: exceeding the speed limit, failing to come to a complete stop, following too closely, not having wipers and lights on in the rain, and so on. On-board warnings that prevent infractions before they happen, or even intervene directly (to make that complete stop, for example), would be essential. Now extend that to the social realm, including what you say, what you shop for, where you go, whom you associate with, and what you read. Gmail Labs tried building a social helper called “Mail Goggles” (Perlow 2008) that would force users to take a “sobriety” test before sending email in the wee hours. That feature seems to be gone now, but another helper function reminds you to attach a file if it looks like you meant to but forgot.

Smart helpers that steer us away from the wrong side of the Classifier create an Am-I-Me self-evaluation that makes Are-You-Me tests more efficient by eliminating false positives and reducing enforcement costs. Thinking of using Tor for anonymous browsing? Your personal Helpr Kricket suggests that while not illegal, it will increase your risk of being misclassified as a national security threat (Poulsen 2014). Going to joke about a bomb on Twitter? Helpr Kricket can suggest safer forms of entertainment.
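A Helpr Kricket pre-check could be as crude as the sketch below (the product, the patterns, and the risk scores are all invented for this essay): screen each proposed action against the Classifier’s known sensitivities before it leaves the device, so the warning arrives before the infraction exists.

```python
# Hypothetical Helpr-style pre-check: intercept an action before it happens
# and warn when it matches patterns the Classifier is known to punish.
# Patterns and risk scores are invented for illustration.

RISKY_PATTERNS = {
    "bomb": 0.9,   # jokes included: the Classifier can't tell
    "tor": 0.4,    # legal, but raises the misclassification risk
}

def precheck(action_text, threshold=0.3):
    """Return (allowed, warnings) for a proposed action."""
    text = action_text.lower()
    warnings = [(word, risk) for word, risk in RISKY_PATTERNS.items()
                if word in text and risk >= threshold]
    return (not warnings, warnings)

ok_joke, warns = precheck("going to joke about a bomb on Twitter")
ok_plain, _ = precheck("going to post a cat picture")
print(ok_joke, warns, ok_plain)
```

The key design point is that the check runs client-side, before any observable act: enforcement becomes self-censorship, which is exactly what makes the downstream Are-You-Me tests cheap.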

The historical success of religions in instilling Am-I-Me self-regulation even without high technology suggests a robust future for Helpr Kricket (Kickstarter, anyone?). Pervasive observation and instant reinforcement of norms intended for organizational survival could easily drive a wedge between what a transhumanist would want to say and do, and what xe is compelled to do. But there is a ready solution for this cognitive dissonance: the many pretend worlds that exist to gratify.

Virtualism

Ah, who can satisfy our needs?

Neither angels nor humans.

And the animals intuit already

that we aren’t at home in a virtual world.

Perhaps somewhere a tree on a hill remains for us

to really see.

(original German in Rilke 1989, p. 151)

My rather free translation of Rilke’s 1923 text is an attempt to update the sentiment for the transhuman. There isn’t enough reality to go around, and so as a species we create volumes of cheap knock-offs, including the whole entertainment industry. Any act of imagination sets up a virtual substitute to reality: perhaps one we aspire to or one we fear. Actual reality (insofar as we agree on what that is) is limited in time and space, which leads to the customs of exclusive ownership and privilege. If the sections above are accurate depictions of the transhuman’s world, xe will find xerself in a perplexing and tightly monitored world—humans will finally have created those capricious gods we always dreamed of.

There is little use in appealing for help to the powers in charge (Rilke tells us every angel is terrifying) or to other humans in the same situation. But the freedom of alternate realities is there to satisfy the need. Don’t own a sailboat, but would like to spend the weekend out with the family tacking across the bay? Just strap on, plug in, and hike out. Boats are pricey but bits are cheap. You can act out satisfying substitutes in any number of “reel” worlds. The transhuman may be an accidental solipsist. Once Google’s and IBM’s mechanical children pass their Board-Certified Turing Tests, xe won’t even need real friends, just reel ones calibrated to xer individual characteristics.

This represents a paradise for the transhuman who can leave behind romantic attachments to the old world of dirt and sweat and competition. Rilke’s prophecy that the ties to reality are too deep to be so easily interpreted away may only apply to a minority: those who can bear to unplug and admire a tree with the original human install package, without the itch to have the experience ‘liked’ by others.

As long as the transhuman is an independent biological organism, however, xe still has to compete for the resources needed to physically maintain life. No amount of virtual blood will replace the need for the real stuff. The stark difference between the complexity of real needs and the relative ease of attaining reel ones suggests a gradual replacement of the former with the latter. Powdered food (Just add Watr!) can be made more palatable by “marking up” reality with the virtual sights, sounds, and smells of a gourmet meal. This bears most heavily on reproductive rights and behaviors, since having and rearing a child is very resource-demanding, and not just in CPU cycles. The largest class distinction between transhumans, then, may be the difference between having real offspring or reel ones.

A More Optimistic Possibility

In the sections above I have sketched a rather gloomy outlook for transhumans, one that in sum arrives at a dystopia like the one I explored in “A Promise of a Kiss” (Eubanks 2012), but each of the problems has an inverse. Ubiquitous smart machines could transform the physical world in miraculous ways. Anyone who remembers trying to read a road map in the car with the exit looming understands well the heavenly powers of the GPS. Likewise, system-wide Are-You-Me classifiers can keep us safe from disease and violence. And flipping virtualism, we can see the possibility that a transhuman might experience more, not less, of reality with augmented physical senses: seeing more of the spectrum, feeling magnetism, and so on.

The spectrum between a cybernetic Dark Age and a paradise is indexed by the political and economic philosophy that adjudicates individual versus collective good. There’s no way to know how that will turn out; we’re in the process of building it now. Or it might be more accurate to say that it’s happening to us now, given governments’ apparent lack of awareness and leadership, and their implicit or explicit laissez-faire attitude toward the monumental changes society is on the cusp of.

The example of biological evolution suggests that dependency networks that evolve solely out of resource dependencies (with no overarching teleology) produce unfathomably complex systems with predators, parasites, and population booms and busts, and the occasional random destruction of almost everything (e.g. oxygenation of the atmosphere).

We might venture to guess that a future transhuman world that avoids the worst of the outcomes described in the earlier sections depends on enlightened and intentional guidance that begins with our generation.

References

Eubanks, David. “A Promise of a Kiss.” http://lifeartificial.com/Promise.pdf, 2012.

Eubanks, David. “How the Singularity Makes us Dumber.” IEET.org, http://ieet.org/index.php/IEET/more/eubanks20130529, 2013.

Halpern, Sue. “The Creepy New Wave of the Internet.” The New York Review of Books, http://www.nybooks.com/articles/archives/2014/nov/20/creepy-new-wave-internet/, 2014.

Kearney, Hugh. Science and Change, 1500–1700. New York: McGraw-Hill, 1971.

Kirilenko, Andrei, Albert S. Kyle, Mehrdad Samadi, and Tugkan Tuzun. “The Flash Crash: The Impact of High Frequency Trading on an Electronic Market.” Manuscript, University of Maryland, 2011.

MacDonald, John D. The Quick Red Fox. Fawcett Publications, 1964.

Munarriz, Rick. “Keurig 2.0 Is Leaving a Bitter Taste in a Lot of Mouths.” DailyFinance.com, http://www.dailyfinance.com/2014/10/09/keurig-2.0-bitter-taste-coffee-drinkers-competitors, 9 Oct. 2014.

Perlow, Jon. “New in Labs: Stop sending mail you later regret.” Official Gmail Blog, http://gmailblog.blogspot.com/2008/10/new-in-labs-stop-sending-mail-you-later.html, 2008.

Poulsen, Kevin. “Visit the Wrong Website, and the FBI Could End Up in Your Computer.” Wired, http://www.wired.com/2014/08/operation_torpedo/, 2014.

Price, Michael. “I’m Terrified of My New TV: Why I’m Scared to Turn This Thing on — and You’d Be, Too.” Salon.com, 30 Oct. 2014.

Rampell, Catherine. “Cracking the Apple Trap.” The New York Times, http://www.nytimes.com/2013/11/03/magazine/why-apple-wants-to-bust-your-iphone.html, 2 Nov. 2013.

Rilke, Rainer Maria. “Duino Elegies.” In The Selected Poetry of Rainer Maria Rilke. Random House, 1989.

von Neumann, John. The Theory of Self-Reproducing Automata. Arthur Burks (Ed.), University of Illinois Press, 1966.

Wasik, Bill. “In the Programmable World, All Our Objects Will Act as One.” Wired.com, http://www.wired.com/2013/05/internet-of-things-2/all/, 12 May 2013.