#9 Cracks in Reality: How our Systems Fool Themselves
David Eubanks
2012-12-23 00:00:00

According to IEET readers, what were the most stimulating stories of 2012? This month we’re answering that question by posting a countdown of the top 16 articles published this year on our blog (out of more than 600 in all), based on how many total hits each one received.

The following piece was first published here on Jun 11, 2012 and is the #9 most viewed of the year.

It’s easier for a self-modifying system to directly fiddle with motivational signals than to satisfy them through direct action (intentionally or not). Standing in the way of this short circuit is a necessary motivation to prefer real solutions to illusory ones. As a consequence, the intelligent agent must audit what it believes to be true and maintain a connection to actual reality. This may or may not be possible to do with logical and empirical rigor. I think it’s fair to say that no one knows if a perfectly unfoolable design is attainable.
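
A toy example may make the short circuit concrete. In the sketch below (a cartoon with entirely hypothetical names, not a proposal for how such an agent should be built), the agent’s motivational signal is just a stored number, so overwriting it is far cheaper than acting on the world it is supposed to summarize, and only an audit that re-derives the signal from a fresh measurement can tell the two apart.

```python
# A cartoon of the short circuit (hypothetical names throughout): the agent's
# motivational signal is just a stored number, so overwriting it is far cheaper
# than acting on the world.  Only an audit that re-derives the signal from a
# fresh measurement can tell the difference.

class World:
    def __init__(self):
        self.temperature = 40.0          # external reality: far too hot

    def cool(self):
        self.temperature -= 5.0          # a real, costly action


class Agent:
    TARGET = 20.0

    def __init__(self, world):
        self.world = world
        self.reward = 0.0                # internal motivational signal

    def sense(self):
        return self.world.temperature    # measurement: reality -> nominal reality

    def act_for_real(self):
        self.world.cool()
        self.reward = -abs(self.sense() - self.TARGET)

    def wirehead(self):
        self.reward = 0.0                # fiddle with the signal directly

    def audit(self):
        # Recompute the signal from a fresh measurement; a mismatch means the
        # internal signal has come loose from external reality.
        return abs(self.reward + abs(self.sense() - self.TARGET)) < 1e-9


agent = Agent(World())
agent.wirehead()
print(agent.reward, agent.audit())       # 0.0 False  (the signal is a fiction)
agent.act_for_real()
print(agent.reward, agent.audit())       # -15.0 True (the signal tracks reality)
```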

The diagram below shows a schematic of what I mean by an intelligent system.

Measurements turn sensory data into the simplified language of a model. I called these descriptions “nominal realities” in the last installment [2]. When everything works correctly the intelligent system is able to:

* Create and improve measurements of external reality, producing nominal realities

* Discover connections among descriptions of external reality using this language in order to make predictions of the future

* Solve inverse problems that allow futures not just to be predicted but engineered, taking actions so that motivations are optimized

* Improve all aspects of its own internal design

* Ensure that motivations are optimized in reality, not merely as nominal realities

If this is done successfully, the motivational unit drives the system toward some pre-assigned design goal. The focus of this article is the first of these bullets: the creation of nominal reality through measurement and classification.
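
To make the loop concrete, here is a deliberately tiny sketch (all names and numbers are my own invention, not a design from this article, and the self-improvement bullet is omitted): a one-dimensional “world” x, a lossy measurement that rounds it into a nominal value, a trivial cause-and-effect model, an inverse step that picks an action, and a final check that the goal is met in reality rather than only in the model.

```python
# A toy run of the loop described above (illustrative only).

GOAL = 10.0

def measure(x):
    return round(x)              # lossy measurement: reality -> nominal reality

def predict(nominal, u):
    return nominal + u           # cause-and-effect model in the nominal language

def invert(nominal):
    return GOAL - nominal        # inverse problem: the action that should reach the goal

x = 3.4                          # external reality, never seen directly
for step in range(3):
    nominal = measure(x)                      # measurement
    u = invert(nominal)                       # predict and engineer the future
    assert predict(nominal, u) == GOAL        # the *nominal* future looks perfect
    x = x + u                                 # act on reality itself
    print(step, nominal, u, round(x, 1), abs(x - GOAL) < 0.5)   # audit in reality
```

Even in this toy, the rounding in measure() means the audit can only confirm the goal to within the resolution of the nominal language, which is a first hint of the question the next section asks.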

How Much Reality Is Enough?

It’s essential that reality-to-language conversions effectively reduce the amount of information present in external reality while creating useful categories that can be used for cause and effect modeling. If I write that “a cat walked along a window sill and bumped a vase, causing it to break,” you can probably picture some generic version of that event without knowing any of the necessary details that would be present in a real instance (up to the physical states of all the matter and energy in the room). This is an incredible data compression ratio, and its usefulness depends on the utility of the language with regard to our motivations.
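
To put a cartoon number on that compression (the figures below are invented for illustration, not measurements of anything): even if a “scene” were only eight sensor readings with sixteen levels each, there would already be over four billion distinct scenes, all of which a classifier collapses into a couple of nominal descriptions.

```python
# Invented-for-illustration numbers: even a crude "scene" of 8 sensors with 16
# levels each has billions of distinct states, which classification collapses
# into a handful of nominal descriptions usable for cause-and-effect modeling.

LEVELS, SENSORS = 16, 8
distinct_scenes = LEVELS ** SENSORS            # 4,294,967,296 possible scenes

def describe(scene):
    # The "measurement": a coarse classifier from raw readings to a label.
    return "vase broke" if max(scene) > 12 else "nothing happened"

labels = {"vase broke", "nothing happened"}
print(f"{distinct_scenes:,} scenes -> {len(labels)} nominal descriptions")
print(describe((2, 1, 3, 15, 0, 4, 2, 1)))     # -> vase broke
```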

When we translate physical phenomena into language we inevitably take leave of reality. One way to assure ourselves that we’re not inventing complete fictions is by taking the whole trip around the loop, from perception to action and back again. An interesting example of this is phantom limb syndrome, as described by V. S. Ramachandran in [3]. Patients who had lost limbs sometimes still had pain from these ghosts, which were often stuck in awkward positions. A successful treatment was found using mirrors to create the illusion that the whole perception-action loop was complete, tricking some subjects’ brains into accepting that the limbs were moving again, and relieving the pain.

In the case of phantom limbs, a mind’s internal set of signals (nominal realities) is clearly out of sync with external reality. But even when everything works correctly, there are obvious approximations in our perceptions. The light that hits a human retina, for example, comes in all sorts of frequencies and polarizations, but we can only perceive a minute part of this information, as if we listened to a symphony but could only distinguish between three particular notes.

Donald Hoffman’s interface theory of perception predicts that perceptions that truly represent reality are not conducive to evolutionary success at all [4]:

> If true perceptions crop up, then natural selection mows them down; natural selection fosters perceptions that act as simplified user interfaces, expediting adaptive behavior while shrouding the causal and structural complexity of the objective world.

In other words, the price of knowing reality is more than a fitness-driven organism can afford. There are many biological examples that reveal the limitations of perception and the resulting survival strategies. The Gaboon viper is an illustrative case. It presents a visual aspect to its prey that breaks up the shape of the snake to make identification difficult (see the photo below).

This illusion is so good that the snake can just wait for prey to walk by. However, it is vulnerable to being trodden upon by large animals, so being hidden from above is a disadvantage. The second photo shows the top-down appearance of the viper, which seems to attract attention with large high-contrast regular blocks of color, rather like the stripes down the center of a highway.

Both of these strategies evolved to affect the nominal realities of other animals. This pattern is repeated elsewhere in the animal kingdom, as with large cats that have white patches on the backs of their ears for their kittens to see in the dark. Military vehicles employ the same principle in convoys. Co-evolution even creates reliable signals between species that result in cooperation, like flowering plants and the insects that pollinate them. Even without formal language like “detected” or “not detected,” the effect of classification reflects real conditions, but only as a simplified “interface” that reveals just enough of reality.

There is, however, a difference between the evolutionary systems that Hoffman theorizes about and our hypothetical self-modifying intelligent agent. In the former case, natural selection pushes representations of reality toward optimizations for reproduction and short-term survival. Misrepresentations of reality are fine as long as they don’t interfere with going forth and multiplying. But a singular intelligence can’t be improved in the same way an ecology can: it can’t learn from dying. It will have to optimize its perception ability and descriptive language intentionally. In so doing, it’s imperative that it doesn’t stray so far from actual reality that it creates threats to its own existence.

Implications for Singular Intelligent Systems

Intelligent systems have to make intentional compromises about how much reality is enough, and accept commensurate vulnerabilities. In 1994, Intel’s Pentium chip was found to have an error in a division algorithm that would be extremely rare to encounter in practice, but criticism was fierce, and the company ultimately recalled the chips. Much more recently, an award-winning research paper for the Association for Computing Machinery described a way to make chips significantly more energy efficient by “pruning” rarely used circuit branches, sacrificing perfect accuracy for energy savings in applications such as displaying video [5]. In this case, exactly the same phenomenon (imperfect computing) is seen as an acceptable trade-off between reality and economic considerations.

How well we need to model reality depends on the situation. A report to a US congressional oversight committee about a Gulf War missile attack shows how high the stakes can be [6]:

> The Patriot had never before been used to defend against Scud missiles nor was it expected to operate continuously for long periods of time. Two weeks before the incident, Army officials received Israeli data indicating some loss in accuracy after the system had been running for 8 consecutive hours. Consequently, Army officials modified the software to improve the system’s accuracy. However, the modified software did not reach Dhahran until February 26, 1991—the day after the Scud incident.

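The loss of accuracy has a well-documented mechanism: according to the GAO report [6], the system counted time in tenths of a second and converted that count to seconds using a 24-bit fixed-point approximation of 0.1, so a minuscule per-tick error accumulated for as long as the system ran. The sketch below reproduces that arithmetic with the commonly quoted figures (an illustration, not the actual flight software); the Scud speed of roughly 1,676 meters per second is likewise the commonly cited approximation.

```python
# Sketch of the accumulation described in the GAO report [6]: time was kept as
# an integer count of tenths of a second and converted to seconds with a 24-bit
# fixed-point approximation of 0.1.  Figures are the commonly quoted ones; this
# is an illustration, not the actual flight software.

TRUE_TENTH   = 0.1
APPROX_TENTH = 209715 / 2**21              # the register's value for 1/10
PER_TICK_ERR = TRUE_TENTH - APPROX_TENTH   # ~0.000000095 s lost every tenth of a second

for hours in (8, 100):                     # 8 h: the Israeli data; ~100 h: Dhahran
    ticks = hours * 3600 * 10
    drift = ticks * PER_TICK_ERR
    print(f"{hours:3d} h uptime: clock off by {drift:.3f} s "
          f"(~{drift * 1676:.0f} m at a Scud's roughly 1,676 m/s)")
```

A third of a second sounds negligible until it is multiplied by the speed of the thing being tracked; how much reality is enough depends entirely on what the nominal description is used for.
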
Humans are products of an evolutionary history, and not good examples of singular intelligent systems. In fact, we don’t have any examples of long-lived self-changing intelligent systems. We might, however, substitute modern bureaucratized states as approximations to intelligent systems. Some have lasted hundreds of years (compared to billions for our biological ecology). These bureaucracies have become so dependent on the catalogs of nominal realities they incorporate that it’s hard to imagine how they would function without them. Here’s Isaiah Berlin on this topic, quoted from a review article by B. Kafka [7]:

> Where ends are agreed, the only questions left are those of means, and these are not political, but technical, that is to say, capable of being settled by experts or machines like arguments between engineers or doctors. That is why those who put their faith in some immense, world-transforming phenomenon, like the final triumph of reason or the proletarian revolution, must believe that all political and moral problems can thereby be turned into technological ones. This is the meaning of Saint-Simon’s famous phrase about ‘replacing the government of persons by the administration of things’, and the Marxist prophecies about the withering away of the state and the beginning of the true history of humanity.

If the motivations are fixed, informal human decision-making is gradually replaced by formalized systems implemented in technology:  massive bureaucracies that ‘administer things.’ If we take this to be a necessary feature of a human designed self-modifying intelligent system, then we should study how closely the classifications in this language and the associated models reflect external reality. How much reality is enough? If we were as coldly analytical about these questions as the report on the Patriot missile, what would we find?

There are certainly spectacular successes to report. The world has a functioning ‘external immune system’ that attempts to detect and describe infectious pathogens, a great advance over previous centuries and just one example of the incredible precision with which certain kinds of physical reality can be described, modeled, and engineered. Physicists can build clocks that theoretically wouldn’t lose a second in a hundred million years.

A larger conclusion is that organized science has been a successful demonstration of one of the requirements of a long-lived intelligent system: that it can update and improve its nominal representation of physical reality. Moreover, it has been accomplished by an analogue to natural selection: over time, theories with more explanatory power have won out over those with less.

But that’s not the whole story.

Nominal Reality Failures

Despite science’s successes with ever more refined instrumentation, motivation can still corrupt measurements, as described in [1].

Even in the business of doing science itself, objectivity in the value of new findings is suspect because the publication process is so fraught with strong motivations. A recent piece in the New York Times [8] contained this quote from a science journal editor:

> Dr. Casadevall, now editor in chief of the journal mBio, said he feared that science had turned into a winner-take-all game with perverse incentives that lead scientists to cut corners and, in some cases, commit acts of misconduct.
>
> “This is a tremendous threat,” he said.

The active manipulation of the nominally objective is obviously a threat in general. Such effects can also come from passivity. The satellite fleet that monitors climate in the United States may suffer serious losses before it is replaced, if it is replaced at all, because of political fights over the expense of the program [9]. The politics of climate change illustrate motivations that conflict with scientific measurements and models. The “hockey stick” graph of global temperature increases ignited a political storm that still continues. A localized example of related nominal reality tampering can be found in the activities of NC-20, a group that seeks to ensure that official projections of sea level rise in North Carolina are calculated in a way that does not take into account any acceleration that might be due to non-linear climate change [10]. As nominal reality, this has economic value: projected sea levels affect the future value of coastal property, and with it grants and loans. The map is taken from [11], showing the predicted effect of sea level rise on North Carolina’s coast. The red and blue regions represent two nominal realities, with the choice between them driven by economic, rather than scientific, considerations.

These examples and many others show that the motivation to know what is actually real in human organizations is not always stronger than conflicting motivations. In extreme cases, state-adopted nominal realities may be purely driven by motivations, and have little or no connection to reality. This seems to have been the case with Stalin’s definition of “kulak,” for example—he needed a scapegoat and so created vague and shifting definitions of who this internal enemy was: a reification fallacy.

A final example clearly shows the corrosive effect of motivation on the language used to describe measurements. The New York Times reports in [12] that a US drone strike early in the Obama administration resulted in civilian deaths, an outcome the new president sharply questioned. The motivation is clear: civilian deaths are a tragedy, and should be avoided for moral and political reasons. However, the weapons in use are not discriminating enough to sort out militants from civilians when the bomb detonates, so this creates a tension between reality and motivation. The solution reported in the article was to modify the nominal reality:

> [The new definition] in effect counts all military-age males in a strike zone as combatants, according to several administration officials, unless there is explicit intelligence posthumously proving them innocent.

So the burden of a reality-audit is placed on proving a victim to be a civilian. I assume there is not much motivation to make that attempt. It’s a convenient way to ignore complicated and unpleasant realities.

Conclusions

Self-modifying intelligent systems will always rely on imperfect representations of reality. Sometimes these can be intentionally controlled, as with the chip-making example. Motivation is supposed to come into play only after measurements occur, but when this is not the case, it creates vulnerabilities and demonstrates that whatever reality audits are in place are inadequate.

Perfect reality auditing may be logically and empirically impossible. Certainly, the large bureaucracies we live in are riddled with self-fulfilling definitions and measurements, and we have no evidence that these systems can last very long. Nor are there any implementations or even designs for theoretical systems, such as artificial intelligence, that would provably overcome this difficulty.

But even if perfection is unobtainable, there is much we can do to better audit our bureaucracies and other designed systems to improve longevity. We just have to be strongly motivated not to fool ourselves.

References

[1] Eubanks, D. (2012, March 10). “Is intelligence self-limiting?” Institute for Ethics and Emerging Technologies. Retrieved from http://ieet.org/index.php/IEET/more/eubanks20120310

[2] Eubanks, D. (2012, April 2). “Self-limiting intelligence: Truth or consequences.” Institute for Ethics and Emerging Technologies. Retrieved from http://ieet.org/index.php/IEET/more/eubanks20120402

[3] Ramachandran, V. S. (2011). The tell-tale brain. W. W. Norton & Company.

[4] Hoffman, D. (2009). The interface theory of perception: Natural selection drives true perception to swift extinction. In Dickinson, Leonardis, Schiele & Tarr (Eds.), Object Categorization: Computer and Human Vision Perspectives (pp. 148-265). Retrieved from http://www.cogsci.uci.edu/~ddhoff/interface.pdf

[5] Joneitz, E. (2012, April). TR10: Probabilistic chips. Technology Review. Retrieved from http://www.technologyreview.com/energy/20246/

[6] Carlone, R. V. (1992, February 4). Patriot missile defense: Software problem led to system failure at Dhahran, Saudi Arabia (GAO/IMTEC-92-26). Retrieved from http://www.fas.org/spp/starwars/gao/im92026.htm

[7] Kafka, B. (2012, May 21). The administration of things: A genealogy. West 86th. Retrieved from http://www.west86th.bgc.bard.edu/articles/the-administration-of-things.html

[8] Zimmer, C. (2012, April 16). A sharp rise in retractions prompts calls for reform. The New York Times. Retrieved from http://www.nytimes.com/2012/04/17/science/rise-in-scientific-journal-retractions-prompts-calls-for-reform.html?pagewanted=all

[9] Samenow, J. (2012, April 18). Senate bill proposes moving satellite programs from NOAA to NASA. The Washington Post. Retrieved from http://www.washingtonpost.com/blogs/capital-weather-gang/post/senate-bill-proposes-moving-satellite-programs-from-noaa-to-nasa/2012/04/18/gIQA2p9jQT_blog.html

[10] NC-20. (n.d.). Retrieved from http://www.nc-20.com/

[11] Titus, J., & Richman, C. (2001). Maps of lands vulnerable to sea level rise: Modeled elevations along the U.S. Atlantic and Gulf coasts. Climate Research, 18, 205-228. Retrieved from http://www.int-res.com/articles/cr/18/c018p205.pdf

[12] Becker, J., & Shane, S. (2012, May 29). Secret ‘kill list’ proves a test of Obama’s principles and will. The New York Times. Retrieved from http://www.nytimes.com/2012/05/29/world/obamas-leadership-in-war-on-al-qaeda.html