
Rule by Algorithm? Big Data and the Threat of Algocracy


By John Danaher
Philosophical Disquisitions

Posted: Jan 7, 2014

An increasing number of people are worried about the way in which our data is being mined by governments and corporations. One of these people is Evgeny Morozov. In an article that appeared in the MIT Technology Review back in October 2013, he argued that this trend poses a serious threat to democracy, one that should be resisted through political activism and “sabotage”. As it happens, I have written about similar threats to democracy myself in the past, so I was interested to see how Morozov defended his view.

Unfortunately, Morozov’s article is not written in a style that renders its argumentative structure immediately transparent. I shouldn’t complain: not everyone writes in the style of an analytic philosopher; not everyone aspires to reduce human language to a series of numbered propositions and conclusions. Nevertheless, I have set myself the task of re-presenting Morozov’s argument in a more formal garb and subjecting it to some critical scrutiny. That’s what this blog post is about.

The discussion is broken down into three sections. First, I’ll talk in general terms about the problem Morozov sees with data-mining technologies. Second, I’ll present what I take to be Morozov’s central argument, which I call the argument from the threat of algocracy. This, I suggest, is similar to an argument found in the work of the political philosopher David Estlund. Finally, I’ll look at the suggested solutions to the problem, noting how Morozov’s solution differs from one that I have defended in the past.


1. The Problem of Invisible Barbed Wire
The way in which our data is being monitored and processed has been well-documented. A recent article by Alice Marwick in the NY Review of Books gives a good overview of the phenomenon. Interested readers should check it out. What follows here is just a summary.

In brief, modern technology has made it possible for pretty much all of our movements, particularly those we make “online”, to be monitored, tracked, processed, and leveraged. We can do some of this leveraging ourselves — as do the proponents of self-experimentation and members of the quantified self movement — by tracking our behaviour along various metrics and using this to improve our diets, increase our productivity and so forth. To many, this is a laudable and valuable endeavour.
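(To make the idea concrete, here is a minimal sketch, in Python, of the kind of self-tracking “leveraging” just described. The metric, the numbers and the threshold are all hypothetical illustrations, not anyone’s actual method.)

# Minimal self-tracking sketch: log a daily metric, compare a trailing
# average against a baseline, and flag whether the trend is improving.
# The metric name, window size and data are invented for illustration.

from statistics import mean

daily_sleep_hours = [6.2, 5.8, 7.1, 6.9, 7.4, 7.8, 8.0]  # one week of logs

def trailing_average(values, window=3):
    """Average of the most recent `window` observations."""
    return mean(values[-window:])

baseline = mean(daily_sleep_hours[:3])        # start-of-week baseline
recent = trailing_average(daily_sleep_hours)  # end-of-week trend

verdict = "improving" if recent > baseline else "declining"
print(f"Sleep {verdict}: {baseline:.1f}h baseline vs {recent:.1f}h recent")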

But, of course, governments and corporations can also take advantage of these data-tracking and processing technologies and use them to pursue their own ends. Governments can do it to monitor terrorist activity, track welfare fraud, and prevent tax evasion. Companies can do it to track consumer preferences, create targeted advertising, and manipulate consumer purchasing decisions.

You might think that this is all to the good. Who doesn’t want to stop terrorism, stamp out tax evasion and have a more pleasant shopping experience? But Morozov points out that it has a more sinister side. He worries about the role of data-mining in creating a system of algorithmic regulation, one in which our decisions are “nudged” in particular directions by powerful data-processing algorithms. This is worrisome because the rational basis of these algorithms will not be transparent. As he puts it himself:
 

Thanks to smartphones or Google Glass, we can now be pinged whenever we are about to do something stupid, unhealthy or unsound. We wouldn’t necessarily need to know why the action would be wrong: the system’s algorithms do the moral calculus on their own. Citizens take on the role of information machines that feed the techno-bureaucratic complex with our data. And why wouldn’t we, if we are promised slimmer waistlines, cleaner air, or longer (and safer) lives in return? 
(Morozov, 2013)


In other words, the algorithms take over from the messy, human process of democratic decision-making. Citizens become beholden to them, unsure of how they work, but afraid to disregard their guidance. This creates a sort of prison of “invisible barbed wire” which constrains our intellectual and moral development, as well as our lives more generally:
 

[The problem] here is the construction of what I call “invisible barbed wire” around our intellectual and social lives. Big data, with its many interconnected databases that feed on information and algorithms of dubious provenance impose severe constraints on how we mature politically and socially…  
The invisible barbed wire of big data limits our lives to a space that might look quiet and enticing enough, but is not of our own choosing and that we cannot rebuild or expand. The worst part is that we do not see it as such. Because we believe that we are free to go anywhere, the barbed wire remains invisible. 
(Morozov, 2013)


The upshot: big data is undermining democracy by depriving us of our ability to think for ourselves, determine our own path in life, and critically engage with governmental decision-making.
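(A minimal sketch, again in Python, may help to show the structure of the worry. The feature names and weights below are invented; the point is that the system delivers the ping, but never the reasons.)

# Opaque nudge engine: score a proposed action against weights learned
# elsewhere, and return only a verdict. The rationale stays inside the
# system; the citizen sees nothing but the ping. All names and weights
# here are hypothetical.

OPAQUE_WEIGHTS = {"late_night": -0.8, "high_sugar": -0.6, "outdoors": 0.4}

def nudge(action_features):
    """Return 'reconsider' or None, with no explanation attached."""
    score = sum(OPAQUE_WEIGHTS.get(f, 0.0) for f in action_features)
    return "reconsider" if score < 0 else None

print(nudge({"late_night", "high_sugar"}))  # -> reconsider (but why?)

The design point is exactly the one Morozov objects to: the verdict and the rationale live in different places, the former with the citizen, the latter with the system.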


2. The Argument from the Threat of Algocracy
Morozov’s argument is clothed in some ominous language and clever metaphors (yes: I like the point about “invisible barbed wire”), but for all that it will be familiar to students of political philosophy. People have long worried about the prospect of epistemic elites taking over governmental decision-making, just as people have long argued in favour of such a takeover (Plato and John Stuart Mill being two who spring to mind).

Morozov’s argument is the argument from the threat of algocracy (i.e. rule by algorithm). At the heart of this argument is the debate about what legitimates governmental decision-making in the first place. What gives governments the right to set policies and impose limits on our behaviour? What makes a system of government just and worthwhile? Generally speaking, there are three schools of thought on the matter:
 

Instrumentalists: hold that legitimation derives from outcomes. Policies and regulations are designed to accomplish certain goals (in moral terms: they are designed to help us realise fundamental human goods like joy, sociality, friendship, knowledge, freedom etc.). They are legitimate if (and only if) they accomplish those goals.
Proceduralists: hold that legitimation comes from the properties of the decision-making procedures themselves. They defend their view by arguing that we have no idea what the correct mix of regulations is in advance. What matters is that the procedure through which these regulations are adopted is itself just; that it gives people the opportunity to voice their concerns; that it is comprehensible to them; that it gives due weight to their concerns; and so forth.
Pluralists: hold that some combination of just procedures and good outcomes is needed.


Democratic governance is often defended on pluralist grounds, although some theorists emphasise the instrumental goods over the procedural ones, and vice versa. The general idea is that democratic governance has just procedures and leads to good outcomes, at least more so than alternative systems of governance.

The problem with justifying governance along purely instrumentalist lines is that it can give rise to the threat of epistocracy. This threat has been most clearly articulated by David Estlund. As he sees it, if it is really the case that procedure-independent goods legitimate systems of government, it is likely that some people (the epistemic elites) have better knowledge or foresight of the policies that will lead to those outcomes than others. Indeed, some of these others (the epistemically incompetent) might actively thwart or undermine the pursuit of the procedure-independent goods. Consequently, and following the instrumentalist logic, we should hand over decision-making control to these epistemic elites.

This looks to be an unwelcome conclusion. It seems to reduce the people affected by governmental decisions to little more than moral patients: receptacles of welfare and other positive outcomes, who do not actively shape and determine the course of their own lives. Hence people tend to fall back on pluralist and/or proceduralist approaches to legitimacy, which value the active contribution to and participation in decision-making. (I should add, here, that people can also resolve the problem by appealing to freedom or self-determination as one of the procedure-independent goods toward which governance should be directed).

Morozov’s complaint is essentially the same as Estlund’s. The only difference is that where Estlund is concerned about human epistemic elites (like experts and other technocrats), Morozov is concerned about algorithms, i.e. non-human artificial intelligences that mine and manipulate our data. Morozov couches his argument in terms of the threat to democracy, but what he is specifically talking about are both the procedural goods associated with democratic governance (comprehensibility, participation, deliberation) and the other goods that should be protected by such procedures (freedom, self-determination, autonomy etc.). His concern is that over-reliance on data-mining algorithms will undermine these goods, even if at the same time they help us to achieve certain others.

To phrase his argument in slightly more formal terms:
 

The Argument from the Threat of Algocracy 
  • (1) Legitimate democratic governance requires decision-making procedures that protect freedom and autonomy, and that allow for individual participation in and comprehension of those decision-making procedures.  
  • (2) Reliance on data-mining algorithms undermines freedom and autonomy and prevents active participation in and comprehension of decision-making procedures.  
  • (3) Therefore, reliance on data-mining algorithms is a threat to democratic governance.
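(For readers who want the structure fully explicit, the argument can be rendered in propositional form. The letters are my own shorthand, not Morozov’s: L for “the governance is legitimately democratic”, G for “the decision-making procedure protects freedom and autonomy and allows participation and comprehension”, and R for “decision-making relies on data-mining algorithms”.)

\begin{align*}
  &\text{(1)}\quad L \rightarrow G\\
  &\text{(2)}\quad R \rightarrow \neg G\\
  &\text{(3)}\quad \therefore\; R \rightarrow \neg L \qquad \text{(from (1) and (2), by contraposition on (1))}
\end{align*}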


This argument relies on a normative premise (1) and a factual/predictive premise (2). I think the normative premise is reasonably sound. Indeed, I would add in the explicit claim — implied in Morozov’s article — that this form of governance is desirable, all else being equal. The only question then is whether all else is indeed equal. Proponents of algocracy — like proponents of epistocracy — could argue that the procedure-independent gains from algorithmic policy-making will outweigh the procedural and autonomy-related costs. Morozov needs it to be the case that the goods singled out by his argument are more important than any of those putative gains (assuming he argues from consequentialism) or that their impingement is blocked by deontological constraints. I think he would have a strong argument to make on consequentialist grounds; I’m less inclined to the deontological view.

I suspect that the factual/predictive premise is going to prove more controversial. Is it true that the widespread use of data-mining algorithms will have the kind of negative impact envisioned by Morozov? It certainly seems to be true that the majority of people don’t comprehend the basis on which algorithms make their decisions. Whether this must be the case is less clear to me. I’m not well-versed in how modern algorithms are coded. Still, I suspect there is a good case to be made for this. That is to say: I suspect algorithms throw up many surprises, even for the engineers who create them, and will continue to do so, particularly as they become more complex.
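(As a toy illustration of that suspicion, here is a perceptron in pure Python, trained on synthetic data invented for this example. It recovers the hidden rule well enough to classify accurately, yet the learned weights are just numbers: nothing about them reads as a humanly scrutable reason, even to the person who wrote the training loop.)

import random

random.seed(0)

# Synthetic records: four features each, labelled by a hidden linear rule.
HIDDEN_RULE = [0.7, -1.2, 0.3, 0.9]

def label(x):
    return 1 if sum(h * v for h, v in zip(HIDDEN_RULE, x)) > 0 else -1

points = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(200)]
examples = [(x, label(x)) for x in points]

# Perceptron training: nudge the weights on every misclassified example.
w = [0.0, 0.0, 0.0, 0.0]
for _ in range(50):
    for x, y in examples:
        if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
            w = [wi + y * xi for wi, xi in zip(w, x)]

correct = sum(1 for x, y in examples
              if y * sum(wi * xi for wi, xi in zip(w, x)) > 0)
print(f"accuracy: {correct / len(examples):.0%}")
print("learned weights:", [round(v, 2) for v in w])
# High accuracy; but the weights themselves explain nothing to anyone.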

A slightly more tricky issue has to do with the potentially autonomy-enhancing effects of data-mining technologies. The message from fans of self-experimentation and self-quantification, who are often obsessed with mining data about themselves, seems to be that technologies of this sort can greatly enhance the control one has over one’s life. They can lead to greater self-awareness and goal fulfillment, allowing us to pick and choose the behavioural strategies that lead to the best outcomes for us. That would seem to suggest that premise (2) is false in certain instances.

But that’s not to say that it is generally false. Morozov is probably right in thinking that top-down use of such technologies by governments and corporations is a problem. Nevertheless, the experiences of these individual users do suggest that there is a way in which the technology could be harnessed in a positive manner. Morozov is aware of this argument, and resists it by arguing that the forces of capitalism and bureaucracy are such that the top-down uses will tend to dominate the bottom-up uses. This seems to be what is happening right now, and that is what matters.


3. Solutions?
So what is to be done about the threat of algocracy? Morozov complains that current responses to data-mining miss the point. They conceive of the problem in terms of the right to privacy, and hence craft solutions that give people greater control over their data, either through robust legal protections of that data, or through market-based systems for owning and trading in that data. (The suggestion in the article is that market-based solutions are distinct from legal ones, though this is clearly not the case: the kinds of market-based solution being advocated rely on the law of property and contract.)

Instead, Morozov — in a section titled “Sabotage the System. Provoke more Questions” — argues for a four-pronged solution (there is some overlap between the prongs):
 

A. Politicise the problem: we must, as he puts it “articulat[e] the existence — and profound political consequences — of the invisible barbed wire”. We must expose the “occasionally antidemocratic character” of big data.
B. Learn how to sabotage the system: we must arm ourselves by resisting the trend toward self-monitoring and refusing to make money off our own data. “Privacy can then reemerge as a political instrument for keeping the spirit of democracy alive”.
C. Create “provocative digital services”: we must facilitate technologies that allow us to see the negative effects of data-mining. For example “instead of yet another app that could tell us how much money we can save by monitoring our exercise routine, we need an app that could tell us how many are likely to lose health insurance if the insurance industry has as much data as the NSA.”
D. Abandon preconceptions: we must cast off any preconceptions we may have about how our digital services work and interconnect.


In short, what we need is some good old-fashioned consciousness-raising and political activism.

I think there is merit to this. I think we could be more critical about the uses of technology. That’s why I find Morozov’s brand of anti-techno-utopianism somewhat refreshing. But I definitely worry that this is all just a bit too idealistic. If the forces of capitalism and bureaucracy are as powerful as Morozov suggests elsewhere in the article, is some bottom-up activism really going to be enough to overturn them?

In my article “On the Need for Epistemic Enhancement: Democratic Legitimacy and the Enhancement Project”, I suggested that broader use of enhancement technologies could help stave off the threat of epistocracy. I’m not convinced I’m correct about this, incidentally, but it does prompt the thought: would broader use of enhancement technologies also help stave off the threat of algocracy?

Probably not. Enhancement technologies might help to equalise the balance of power between ordinary humans and epistemically elite humans. Whether they could do the same for the balance of power between humans and algorithms is another matter. The gap is much wider. AIs have significant advantages over humans when it comes to data-mining and processing. The only kinds of enhancement technology that could bridge the gap are the truly radical (and hypothetical) forms, the ones that would give rise to a genuinely posthuman form of existence. Even if this form of existence is possible, concepts like freedom and self-determination may no longer have any meaning in such a world. Now there’s a thought…


John Danaher holds a PhD from University College Cork (Ireland) and is currently a lecturer in law at NUI Galway (Ireland). His research interests are eclectic, ranging broadly from philosophy of religion to legal theory, with particular interests in human enhancement and neuroethics. John blogs at http://philosophicaldisquisitions.blogspot.com/. You can follow him on twitter @JohnDanaher.


COMMENTS


I find Morozov refreshing as well.

On a more personal and non-political level, one of the problems I see with this surge in algorithms is the way it reduces a person to a piece of predictive software, and I wonder if some of the accidental aspects that used to play a large role in making us unique individuals might be lost.

I think there are some easy things we could do to subvert this, such as exchanging Netflix, Amazon, Pandora, etc. accounts with others for a time, just to get us out of our software-imposed box.

Additionally, people talk about transparency, but what we really need to know, in order to regain some level of self-determination, is how we are identified by the algorithms we interact with. Better still, we should be given the power to change and design our own algorithms, so that we define who we are and what to be, rather than some software engineer with a lightning-fast supercomputer or AI.





I think that the struggle of the individual vs. the group is inevitably going to be central to the future. Mr Danaher seems to favor the individual and equates democracy with individual freedom. I, on the other hand, equate democracy with the survival of the group, which is existentially threatened by the unfettered individual. You see, in a high-technology society, individuals have the power to destroy the group, while Mr Danaher seems only to see the threat, in a high-technology society, of groups fettering the individual. If his prescription were followed, the group would inevitably be destroyed.

Understandably, our political culture in America is rooted in the era when the US was a young, low-technology nation, and the frontier offered the individual vast freedom to live life the way they wanted. That was the nation and society that the Founding Fathers crafted for. The Constitution and democracy are not a suicide pact; rather, they are living things that change with circumstance. Unfortunately, some people are intoxicated with the concept of maximum individual liberty, even at the expense of the nation and society, and even in the face of existential threats.

If Mr Danaher wants maximal individual liberty, he can forfeit the benefits he personally gains from living in a high-technology society and go off into the wilderness like the Noble Savage in the book Brave New World. I can assure him that our leaders will take every precaution to protect the group from existential threats posed by individuals, so that our nation’s bedrock values endure for future generations.





Morozov: “This is the future we are sleepwalking into. Everything seems to work, and things might even be getting better—it’s just that we don’t know exactly why or how.” A very cogent description of a subtle and interesting failure mode. His subsequent discussion of rights and contradictions is certainly an interesting one, well worth reading.

Alas, Morozov then gloms onto a “solution” based on concealment, obscurity and hiding—one that cannot possibly work. Like nearly every seer in this benighted field, he absolutely refuses to consider how there might be transparency and accountability-based solutions that work *with* unstoppable trends toward a world awash in light, rather than raging against the tide.

He buys into Jaron Lanier’s notion of each person having commercial “interest” in their own information and a right to allocate it for profit or personal benefit, which certainly is an improvement over the fantasy of a legal “right” to conceal your information and to punish those who have it, a stunning delusion in a world of limitless leaks.  Lanier’s notion is certainly a step forward—instead of prescribing futile and delusional shrouds, it envisions a largely open world in which we all get to share in the benefits that large entities like corporations derive from our information.

Except that “our information” is also a delusion that will fray and unravel with time, leaving us with what is practical, what matters… how to maintain control NOT over what others know about us, but what they can DO to us.  And to accomplish that, we must know as much about the mighty as they know about us.

Alas, after an interesting discussion, Morozov devolves down to this: “we must learn how to sabotage the system—perhaps by refusing to self-track at all. If refusing to record our calorie intake or our whereabouts is the only way to get policy makers to address the structural causes of problems like obesity or climate change—and not just tinker with their symptoms through nudging—information boycotts might be justifiable.”

This notion, that any measures taken by private persons will even slightly inconvenience society’s elites (of government, corporatcy, oligarchy etc.) in their ability to surveil us, would be charming naivete if it weren’t a nearly universal and dangerous hallucination. In The Transparent Society I discuss the alternative we seldom see talked about, even though it is precisely the prescription that got us our current renaissance of freedom and empowered citizenship: sousveillance. Embracing the power to look back, and helping our neighbors to do it as well.

I agree with Morozov about the need for “provocative services”, where he almost seems to get the core idea: that we can solve most of these problems through open, polite and fair confrontation, of the sort that teaches people to behave like adults. An actual proposal for how such systems of dispute resolution through competitive opposition might work can be found here: http://www.amazon.com/Disputation-Arenas-ebook/dp/B005AK2R74/?_encoding=UTF8&tag=contbrin-20





David,

The transparency that is really needed is not the kind that makes people visible to each other and to algorithms, but rather the kind that makes algorithms visible to people. The ‘unstoppable trend’ is in the wrong direction: towards total surveillance of people and total invisibility of technocratic systems.

What is needed is not more visual or audio sousveillance. Cameras do very little to help us comprehend technocratic systems of governance. Rather, we need to remove the veil of corporate secrecy: for example, through citizen auditing, as you yourself suggest in your book; through removing commercial confidentiality protections; or through laws which require that any algorithm or process which governs society or affects the life chances and choices of people must be published.

In short, we need to pay the same attention to algorithmic authority and corporate authority as we do to human authority. We need to pay the same attention to inequity, power imbalances, exclusion, and economic and legal oppression as we do to crime and physical violence. Sousveillance proponents like yourself should be leading this charge, making the real authority of cybernetic systems visible, rather than obsessing about catching cops, criminals or politicians on camera.

The irony is that people like yourself and Steve Mann, who saw into the future and predicted the rise of the surveillance society, did not also see that the real threat was not from some human elite but from algorithms, corporations and other cybernetic systems; that in a transparent society all the people could be surveilled, but the real source of power, the techno-social system itself, would become even less visible.




