

Cracks in Reality: How our Systems Fool Themselves


By David Eubanks
Ethical Technology

Posted: Jun 11, 2012

In the first two parts of this series I explored the idea that a self-modifying singular intelligence may be doomed to self-destruction because of motivational interference [1]. The idea is at least as old as Epicurus, who advised: “If thou wilt make a man happy, add not unto his riches but take away from his desires.”

It’s easier for a self-modifying system to fiddle directly with its motivational signals, intentionally or not, than to satisfy them through real action. Standing in the way of this short circuit is a necessary motivation to prefer real solutions to illusory ones. As a consequence, the intelligent agent must audit what it believes to be true and maintain a connection to actual reality. This may or may not be possible to do with logical and empirical rigor. I think it’s fair to say that no one knows whether a perfectly unfoolable design is attainable.

The diagram below shows a schematic of what I mean by an intelligent system.

Measurements turn sensory data into the simplified language of a model. I called these descriptions “nominal realities” in the last installment [2]. When everything works correctly the intelligent system is able to:

* Create and improve measurements of external reality to create nominal realities

* Discover connections among descriptions of external reality using this language in order to make predictions of the future

* Solve inverse problems that allow futures not just to be predicted but engineered, taking actions so that motivations are optimized

* Improve all aspects of its own internal design

* Ensure that motivations are optimized in reality, not merely as nominal realities

If this is done successfully, the motivational unit drives the system toward some pre-assigned design goal. The focus of this article is the first of these bullets: the creation of nominal reality through measurement and classification.
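Before narrowing to that first bullet, it may help to see the whole cycle in one place. Here is a minimal sketch of the loop in Python; every name in it is hypothetical and stands in for whatever machinery actually implements each step. The point is only the ordering: measurement produces a nominal reality, the model predicts and plans against that description, and a separate audit compares the nominal signals with fresh measurements before the motivational score is trusted.

```python
def run_cycle(sense, measure, predict, plan, act, improve, audit, motivation):
    """One pass through the hypothetical perception-action loop sketched above."""
    raw = sense()                        # external reality -> raw sensory data
    nominal = measure(raw)               # bullet 1: compress into a nominal reality
    forecast = predict(nominal)          # bullet 2: model the future in that language
    action = plan(forecast, motivation)  # bullet 3: inverse problem -> choose actions
    act(action)                          # change external reality
    improve(nominal, forecast)           # bullet 4: self-modification from experience
    # Bullet 5: check that motivations are optimized in reality, not merely in
    # the nominal signals, by re-measuring through a channel the system cannot
    # easily rewrite.
    return audit(nominal, measure(sense()))
```

The short circuit described in [1] amounts to quietly rewriting the measure or audit steps so that the nominal signals please the motivational unit regardless of what the sensors report.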

How Much Reality Is Enough?

It’s essential that reality-to-language conversions effectively reduce the amount of information present in external reality while creating useful categories that can be used for cause and effect modeling. If I write that “a cat walked along a window sill and bumped a vase, causing it to break,” you can probably picture some generic version of that event without knowing any of the necessary details that would be present in a real instance (up to the physical states of all the matter and energy in the room). This is an incredible data compression ratio, and its usefulness depends on the utility of the language with regard to our motivations.
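A toy illustration of that compression (with invented fields, not anything from a real system): a “measurement” can be written as a function that keeps only the categories the model cares about, so that wildly different detailed states collapse to the same nominal description.

```python
def measure(state: dict) -> dict:
    """Reduce an arbitrarily detailed state to a coarse nominal reality."""
    return {
        "cat_on_sill": state["cat_position"] == "window_sill",
        "vase_intact": state["vase_fragments"] == 1,
    }

# Two rooms that differ in countless physical details...
room_a = {"cat_position": "window_sill", "vase_fragments": 23, "temp_c": 21.4}
room_b = {"cat_position": "window_sill", "vase_fragments": 7,  "temp_c": 18.9}

# ...become indistinguishable once translated into the model's language.
assert measure(room_a) == measure(room_b)
```

Everything the function throws away is the price of the compression; whether the remaining categories are the right ones depends, again, on our motivations.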

When we translate physical phenomena into language we inevitably take leave of reality. One way to assure ourselves that we’re not inventing complete fictions is by taking the whole trip around the loop, from perception to action and back again. An interesting example of this is phantom limb syndrome, as described by V. S. Ramachandran in [3]. Patients who had lost limbs sometimes still had pain from these ghosts, which were often stuck in awkward positions. A successful treatment was found using mirrors to create the illusion that the whole perception-action loop was complete, tricking some subjects’ brains into accepting that the limbs were moving again, and relieving the pain.

In the case of phantom limbs, a mind’s internal set of signals (nominal realities) is clearly out of sync with external reality. But even when everything works correctly, there are obvious approximations in our perceptions. The light that hits a human retina, for example, comes in all sorts of frequencies and polarizations, but we can only perceive a minute part of this information, as if we listened to a symphony but could only distinguish between three particular notes.

Donald Hoffman’s interface theory of perception predicts that perceptions that truly represent reality are not conducive to evolutionary success at all [4].

If true perceptions crop up, then natural selection mows them down; natural selection fosters perceptions that act as simplified user interfaces, expediting adaptive behavior while shrouding the causal and structural complexity of the objective world.

In other words, the price of knowing reality is too high for a fitness-driven organism to pay. There are many biological examples that reveal the limitations of perception and the survival strategies that result. The Gaboon viper is an illustrative case. It presents a visual aspect to its prey that breaks up the outline of the snake, making identification difficult (see the photo below).

This illusion is so good that the snake can just wait for prey to walk by. However, it is vulnerable to being trodden upon by large animals, so being hidden from above is a disadvantage. The second photo shows the top-down appearance of the viper, which seems to attract attention with large high-contrast regular blocks of color, rather like the stripes down the center of a highway.

Both of these strategies are evolved to affect the nominal realities of other animals. This pattern is repeated elsewhere in the animal kingdom, as with large cats that have white patches on the backs of their ears for the kittens to see in the dark. Military vehicles employ the same principle in convoys. Co-evolution even creates reliable signals between species that result in cooperation, like flowering plants and the insects that pollinate them. Even without formal language like “detected” or “not detected,” the effect of classification reflects real conditions, but only as a simplified “interface” that reveals just enough of reality.

There is, however, a difference between the evolutionary systems that Hoffman theorizes about and our hypothetical self-modifying intelligent agent. In the former case, natural selection pushes representations of reality toward optimizations for reproduction and short-term survival. Misrepresentations of reality are fine as long as they don’t interfere with going forth and multiplying. But a singular intelligence can’t be improved in the same way an ecology can: it can’t learn from dying. It will have to optimize its perception ability and descriptive language intentionally. In so doing, it’s imperative that it doesn’t stray so far from actual reality that it creates threats to its own existence.

Implications for Singular Intelligent Systems

Intelligent systems have to make intentional compromises about how much reality is enough, and accept commensurate vulnerabilities. In 1994, Intel’s Pentium chip was found to have a flaw in its floating-point division algorithm that would have been extremely rare to encounter in practice, but criticism was fierce, and the company ultimately recalled the chips. More recently, an award-winning research paper for the Association for Computing Machinery touted a way to make chips significantly more energy efficient by “pruning” rarely used circuit branches, sacrificing perfect accuracy for energy savings, for example when showing video [5]. In this case, exactly the same phenomenon (imperfect computing) is seen as an acceptable trade-off between reality and economic considerations.
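That trade-off can be imitated in software. The sketch below is only an analogue of the hardware pruning described in [5], not a description of it: it throws away low-order bits of each operand, roughly as cheaper circuitry would, and checks how far the “imperfect” answer drifts from the exact one.

```python
import math
import random

def truncate_mantissa(x: float, keep_bits: int = 8) -> float:
    """Keep only the top mantissa bits -- a crude stand-in for pruned precision."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                  # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2 ** keep_bits
    return math.ldexp(math.floor(m * scale) / scale, e)

random.seed(0)
pixels = [random.uniform(0.0, 255.0) for _ in range(10_000)]

exact = sum(pixels)
approx = sum(truncate_mantissa(p) for p in pixels)
print(f"relative error: {abs(exact - approx) / exact:.1%}")  # a fraction of a percent
```

Whether an error of a few tenths of a percent is tolerable depends entirely on the motivation: fine for rendering video frames, unacceptable for a division unit in a general-purpose processor.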

How well we need to model reality depends on the situation. A report to a US congressional oversight committee about a Gulf War missile attack shows how critical these choices can be [6]:

The Patriot had never before been used to defend against Scud missiles nor was it expected to operate continuously for long periods of time. Two weeks before the incident, Army officials received Israeli data indicating some loss in accuracy after the system had been running for 8 consecutive hours. Consequently, Army officials modified the software to improve the system’s accuracy. However, the modified software did not reach Dhahran until February 26, 1991—the day after the Scud incident.
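The mechanism behind that loss of accuracy is worth spelling out, because it is precisely a nominal reality drifting away from the real one. The failure analyzed in [6] traces back to the system clock: time was counted in tenths of a second, and one tenth has no exact binary representation, so a tiny truncation error compounded for as long as the system stayed up. The sketch below redoes that arithmetic; the 24-bit register width and the Scud speed are the commonly cited figures, used here only for illustration.

```python
# Reconstruction of the clock-drift arithmetic behind the Dhahran failure [6].
# Assumed for illustration: a 24-bit fractional register and ~1,676 m/s Scud speed.
TENTH = 0.1
BITS = 24

stored_tenth = int(TENTH * 2**BITS) / 2**BITS   # best truncated 24-bit value of 0.1
per_tick_error = TENTH - stored_tenth           # ~9.5e-8 seconds lost on every tick

hours_up = 100                                  # roughly the system's uptime
ticks = hours_up * 3600 * 10                    # one tick every tenth of a second
clock_drift = ticks * per_tick_error            # ~0.34 s of accumulated error

scud_speed = 1676.0                             # meters per second, approximate
tracking_error = clock_drift * scud_speed       # ~575 m: enough to lose the target

print(f"{clock_drift:.3f} s of drift after {hours_up} h "
      f"(~{tracking_error:.0f} m at Scud speed)")
```

A third of a second sounds negligible until it is multiplied by the speed of the thing being tracked, which is why the nominal reality inside the radar software no longer contained the missile.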

Humans are products of an evolutionary history, and not good examples of singular intelligent systems. In fact, we don’t have any examples of long-lived self-changing intelligent systems. We might, however, substitute modern bureaucratized states as approximations to intelligent systems. Some have lasted hundreds of years (compared to billions for our biological ecology). These bureaucracies have become so dependent on the catalogs of nominal realities they incorporate that it’s hard to imagine how they would function without them. Here’s Isaiah Berlin on this topic, quoted from a review article by B. Kafka [7]:

Where ends are agreed, the only questions left are those of means, and these are not political, but technical, that is to say, capable of being settled by experts or machines like arguments between engineers or doctors. That is why those who put their faith in some immense, world-transforming phenomenon, like the final triumph of reason or the proletarian revolution, must believe that all political and moral problems can thereby be turned into technological ones. This is the meaning of Saint-Simon’s famous phrase about ‘replacing the government of persons by the administration of things’, and the Marxist prophecies about the withering away of the state and the beginning of the true history of humanity.

If the motivations are fixed, informal human decision-making is gradually replaced by formalized systems implemented in technology: massive bureaucracies that ‘administer things.’ If we take this to be a necessary feature of a human-designed self-modifying intelligent system, then we should study how closely the classifications in this language and the associated models reflect external reality. How much reality is enough? If we were as coldly analytical about these questions as the report on the Patriot missile, what would we find?

There are certainly spectacular successes to report. The world has a functioning ‘external immune system’ that attempts to detect and describe infectious pathogens. This is a great advance over previous centuries. This is but one example of the incredible precision with which certain kinds of physical reality can be described, modeled, and engineered. Physicists can build clocks that theoretically wouldn’t lose a second in a hundred million years.

A larger conclusion is that organized science has been a successful demonstration of one of the requirements of a long-lived intelligent system: that it can update and improve its nominal representation of physical reality. Moreover, it has been accomplished by an analogue to natural selection: over time, theories with more explanatory power have won out over those with less.

But that’s not the whole story.

Nominal Reality Failures

Despite science’s successes with ever more refined instrumentation, motivation can still corrupt measurements, as described in [1].

Even in the business of doing science itself, objectivity about the value of new findings is suspect because the publication process is so fraught with strong motivations. A recent piece in the New York Times [8] contained this quote from a science journal editor:

Dr. Casadevall, now editor in chief of the journal mBio, said he feared that science had turned into a winner-take-all game with perverse incentives that lead scientists to cut corners and, in some cases, commit acts of misconduct.

“This is a tremendous threat,” he said.

The active manipulation of the nominally objective is obviously a threat in general. Such effects can also come from passivity. The satellite fleet that monitors climate in the United States may suffer serious losses before (if) it is replaced, because of politics over the expense of the program [9]. The politics of climate change illustrate motivations that conflict with scientific measurements and models. The “hockey stick” graph of global temperature increases ignited a political storm that still continues. A localized example of related nominal reality tampering can be found in the activities of NC-20, a group that seeks to ensure that official projections of sea level rise in North Carolina are calculated in a way that does not take into account any acceleration that might be due to non-linear climate change [10]. As nominal reality, this has economic value: the projected future value of coastal property affects grants and loans. The map is taken from [11] and shows the predicted effect of sea level rise on North Carolina’s coast. The red and blue regions represent two nominal realities, with the choice between them made on economic, rather than scientific, considerations.

These examples and many others show that the motivation to know what is actually real in human organizations is not always stronger than conflicting motivations. In extreme cases, state-adopted nominal realities may be purely driven by motivations, and have little or no connection to reality. This seems to have been the case with Stalin’s definition of “kulak,” for example—he needed a scapegoat and so created vague and shifting definitions of who this internal enemy was: a reification fallacy.

A final example clearly shows the corrosive effect of motivation on the language used to describe measurements. The New York Times reports in [12] that a US drone strike early in the Obama administration resulted in civilian deaths, which the new president sharply questioned. The motivation is clear: civilian deaths are a tragedy and should be avoided for moral and political reasons. However, the weapons in use are not discriminating enough to sort militants from civilians when the bomb detonates, and this creates a tension between reality and motivation. The solution reported in the article was to modify the nominal reality:

[The new definition] in effect counts all military-age males in a strike zone as combatants, according to several administration officials, unless there is explicit intelligence posthumously proving them innocent.

So the burden of a reality-audit is placed on proving a victim to be a civilian. I assume there is not much motivation to make that attempt. It’s a convenient way to ignore complicated and unpleasant realities.

Conclusions

Self-modifying intelligent systems will always rely on imperfect representations of reality. Sometimes these can be intentionally controlled, as with the chip-making example. Motivation is supposed to come into play only after measurements occur, but when this is not the case it creates vulnerabilities and demonstrates that whatever reality audits are in place are inadequate.

Perfect reality auditing may be logically and empirically impossible. Certainly, the large bureaucracies we live in are riddled with self-fulfilling definitions and measurements, and we have no evidence that these systems can last very long. Nor are there any implementations or even designs for theoretical systems, such as artificial intelligence, that would provably overcome this difficulty.

But even if perfection is unobtainable, there is much we can do to better audit our bureaucracies and other designed systems to improve longevity. We just have to be strongly motivated not to fool ourselves.

 

 

 


References

 


[1] Eubanks, D. (2012, March 10). Is intelligence self-limiting? Institute for Ethics and Emerging Technologies. Retrieved from http://ieet.org/index.php/IEET/more/eubanks20120310

[2] Eubanks, D. (2012, April 2). Self-limiting intelligence: Truth or consequences. Institute for Ethics and Emerging Technologies. Retrieved from http://ieet.org/index.php/IEET/more/eubanks20120402

[3] Ramachandran, V. S. (2011). The tell-tale brain. W. W. Norton & Company.

[4] Hoffman, D. (2009). The interface theory of perception: Natural selection drives true perception to swift extinction. In Dickinson, Leonardis, Schiele & Tarr (Eds.), Object categorization: Computer and human vision perspectives (pp. 148-265). Retrieved from http://www.cogsci.uci.edu/~ddhoff/interface.pdf

[5] Jonietz, E. (2012, April). TR10: Probabilistic chips. Technology Review. Retrieved from http://www.technologyreview.com/energy/20246/

[6] Carlone, R. V. (1992, February 4). Patriot missile defense: Software problem led to system failure at Dhahran, Saudi Arabia (GAO/IMTEC-92-26). U.S. General Accounting Office. Retrieved from http://www.fas.org/spp/starwars/gao/im92026.htm

[7] Kafka, B. (2012, May 21). The administration of things: A genealogy. West 86th. Retrieved from http://www.west86th.bgc.bard.edu/articles/the-administration-of-things.html

[8] Zimmer, C. (2012, April 16). A sharp rise in retractions prompts calls for reform. The New York Times. Retrieved from http://www.nytimes.com/2012/04/17/science/rise-in-scientific-journal-retractions-prompts-calls-for-reform.html?pagewanted=all

[9] Samenow, J. (2012, April 18). Senate bill proposes moving satellite programs from NOAA to NASA. The Washington Post. Retrieved from http://www.washingtonpost.com/blogs/capital-weather-gang/post/senate-bill-proposes-moving-satellite-programs-from-noaa-to-nasa/2012/04/18/gIQA2p9jQT_blog.html

[10] NC-20. (n.d.). Retrieved from http://www.nc-20.com/

[11] Titus, J., & Richman, C. (2001). Maps of lands vulnerable to sea level rise: Modeled elevations along the U.S. Atlantic and Gulf coasts. Climate Research, 18, 205-228. Retrieved from http://www.int-res.com/articles/cr/18/c018p205.pdf

[12] Becker, J., & Shane, S. (2012, May 29). Secret ‘kill list’ proves a test of Obama’s principles and will. The New York Times. Retrieved from http://www.nytimes.com/2012/05/29/world/obamas-leadership-in-war-on-al-qaeda.html

 


David Eubanks holds a doctorate in mathematics and works in higher education. His research on complex systems led to his writing Life Artificial, a novel from the point of view of an artificial intelligence.


COMMENTS


Perhaps we can apply this thinking to our comment threads here. André was right to point out on another thread that positions taken here (in that case with regard to religion, but there are plenty of other examples) easily become radicalised through a kind of intellectual immune response. I think this is closely related to David’s point about motivational bias.

I especially like David’s distinction between evolutionary systems and singular intelligence. A lot of commenters here, including myself, have been behaving to a significant extent as products of evolution (not surprisingly, since we are), and have been posting comments that make us feel good at the time but don’t necessarily serve our long-term interests.

Alex McGilvery recently wrote an article about free will, in which he pointed out the irony of denying that we have such a thing. I agree. Any kind of discourse that seeks to influence the future implies free will in some sense, and why would we even bother to breathe if it were not to influence the future in accordance with our desires?

Which is to say that we have a choice. We can just do what comes naturally, or we can work on those motivational biases. In addition to auditing our bureaucracies and “other designed systems”, we also need to audit (redesign) ourselves. Especially if we want to survive to see the singularity.





Great article. I like the divide between the blind evolutionary choices and the possibility of choosing a clearer portion of reality to see.

@Peter. I think you are quite right about the commenting. This means I will have to put even more thought into what I say.





Amazing piece. I really enjoyed reading it - I liked in particular its sophisticated epistemological perspective. We need more pieces like this, at least occasionally.

It would be interesting to examine more in depth the essence of these motivational biases, as Peter called them. I do believe that it is very important to analyze better this issue.

Biological organisms have a number of hardwired motivational mechanisms. We cannot escape them. Reproductive fitness determines the objects of our desires, our capacity to recognize these objects, and a number of strategies to attain them. Such motivational, teleological patterns probably still operate when we try to do scientific activities, and when we work on merely technical protocols.

So, I wonder: which kind of motivation should we give to a singular adaptive cognitive system? Teleological tensions are easy to create, or emulate, with specific rewarding feedbacks. There is indeed a certain aesthetic pleasure in pure theoretical contemplation. Is this all the motivation we need? I do not think so. And also, which biases do we want AIs to have? Which kinds of motivation are we going to provide? Without motivational bias nobody would move a finger (or a neuron). So, how can we tell WHICH of our typically biological drives interferes NEGATIVELY with our purely theoretical enterprises? I also tend to assume that some of our biological drives positively enhance our intellectual performance. Having stupid ideas does not kill our mind on the spot, but the most stupid of our forefathers probably did not live long enough to reproduce.

And also, more importantly in the short run: which biological biases should be compressed, or diverted, to improve our future (if any)?




