The Care and Feeding of Your AI Overlord
Marcelo Rinesi
2010-11-22

I'm talking, of course, of the financial markets.

The opening paragraph was not metaphorical. Financial markets might not match pop culture expectations of what an AI should look like — there are no red unblinking eyes, nor mechanically enunciated discourses about the obsolescence of organic life — and they might not be self-aware (although that would make an interesting premise for an SF story), but they are the largest, most complex, and most powerful (in both the computer science and political senses of the word) resource allocation system known to history, and inarguably a first-order actor in contemporary civilization.

If you are worried about the impact of future vast and powerful non-human intelligences, this might give you some ease: we are still here. Societies connected in useful ways to "The Market" (an imprecise and excessively anthropomorphic construct) or subsections thereof are generally wealthier and happier than those that aren't. Adam Smith's model of massively distributed economic calculation based on individual self-interest has more often than not proven more effective than competing models of centralized resource allocation.

But if you are impatiently waiting for future vast and powerful non-human intelligences, a measure of worry applies. All algorithms and heuristics (whether designed, evolved, or half-and-half) have assumptions, necessary conditions, and ranges of application, outside of which they start giving wrong answers, sometimes pathologically so. Economists — the subset of computer scientists and psychologists who deal with distributed resource allocation algorithms — are well aware of the multiple ways in which markets can and do fail. But, and this is a really significant but, when an AI becomes powerful enough to be perceived as critical to the functioning of a society, ideology and politics trump engineering. We just don't have a society that is instinctively well-versed in large-scale software engineering, not in the way it instinctively understands hierarchical status politics, and both politicians and voters (across all ranges of economic influence) are prone to exaggerate, misinterpret, accept uncritically, attack irrationally, or plainly fail to understand the workings of the massive AI that runs so much of our civilization.
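To make that point concrete, here is a toy sketch (mine, not part of the original argument): binary search is provably correct, but only under the assumption that its input is sorted. Violate that precondition and it doesn't crash or complain; it confidently returns a wrong answer, which is exactly the failure mode that matters when nobody is checking the assumptions.

```python
# Toy illustration: an algorithm that is correct only within its stated
# assumptions. Outside them, it fails silently rather than loudly.

def binary_search(items, target):
    """Return an index of `target` in `items`, or -1. ASSUMES `items` is sorted."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

sorted_data = [1, 3, 5, 7, 9]
print(binary_search(sorted_data, 3))    # 1 -- assumption holds, correct answer

unsorted_data = [9, 1, 7, 3, 5]
print(binary_search(unsorted_data, 3))  # -1 -- assumption violated: 3 is in
                                        # the list, but the algorithm denies it
```

The failure is not an error message but a confidently wrong output, and a system that treats the output as authoritative has no way to notice.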

Not unexpectedly, the result is a mess. Bugs don't get fixed, new bugs are introduced, beneficial use cases are ignored, and when the system malfunctions, even those most directly and negatively impacted don't quite know how to fix it, or focus on entirely irrelevant aspects. Even those who defend "the market" often do so without an understanding of why and when it works (which would also tell them when it doesn't), so even technically necessary patches are seen as destructive attacks.

In a way, powerful artificial intelligences that are self-aware and can talk, megalomaniac or not, would be easier for us to deal with. From gods to kings to CEOs, humankind is used to interacting with political entities in positions of power. But those are not the kinds of artificial intelligence we are actually building. Whether they allocate our financial resources, manage the power grid, or route our collective online attention, these quite literally superhumanly capable intelligences are much more impersonal, yet more pervasive and influential, than the Skynets and HALs of fiction. With their (mostly unplanned) creation, we seem to have breached some sort of sociological barrier. It's in our best interest, and over time it may prove necessary to our prosperity or even our survival, to understand how to develop them adequately, use them beneficially, and maintain them properly.