The mystical underpinnings of Facebook's anti-fake news algorithms
Marcelo Rinesi
2018-05-02

It's not a moral failure; it's just that you've built Facebook into the world's largest integrated content distribution machine, and convincing people of things that aren't true is where the money is. So how will you synthesize a truthful news feed about the world from the reports of people who are as likely to be trying to deceive you as not?

You can't. There's no algorithm that'll allow you, on its own, to reverse-engineer truth from an active opponent that controls, directly or indirectly, what you see. The original Descartes got away from this problem through a theological leap of faith, but that's not going to help you here.

For all the quantitative processes that make the practice of science possible and fruitful, its roots are, fundamentally, social. Science doesn't work without scientists: people who, as a group, are socially and personally committed to, basically, not trying to con you into believing something they know is false. Algorithms, heuristics, big data sets, and all the rest of the machinery are meant to deal with errors, shortcomings, and the occasional bad egg, but not with systematic, active deception from an entire sub-community. To try to extract empirical non-mathematical truth from fundamentally suspect numbers is an exercise in numerology in its most mystical sense.

In order to deal with fake news, Facebook has to engage with other actors that can be trusted to have some sort of epistemological commitment to truth. The problem, of course, is that people interested in pushing specific untrue ideas (e.g. that climate change isn't a thing) have managed, after some systematic work amplifying other people's non-factual epistemological commitments, to paint as suspect the entire social machinery of truth-seeking (e.g. that climatology is a giant worldwide hoax). Facebook can't feed its algorithms mostly non-adversarial (non-demonic, Descartes would say) information without making an active choice as to which organizations and processes are relatively trustworthy. It can't leave that choice to "society", because for most aspects of the world that matter there's a well-motivated and well-funded side of society that has gone off the factual rails.

Can algorithms help? Yes. Can it be done, at the speed and scale required by Facebook's unique position in many societies, without algorithms? No. But algorithms are essentially fast, scalable ways of implementing an epistemological choice (who are you going to believe about what's around Jupiter, philosophy books or your lying eyes?), not magical oracles that make that choice for you.
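To make the point concrete, here's a deliberately naive sketch (the function, domain names, and weights are invented for illustration, not anything Facebook actually runs): the ranking step is trivial to execute at scale, but the trust table it depends on is a human, political choice that no amount of computation produces for you.

```python
# A minimal sketch of the point above. The "algorithm" scales effortlessly;
# the hypothetical TRUSTED_SOURCES table is the epistemological choice
# somebody has to make, and defend, before the algorithm can do anything useful.
TRUSTED_SOURCES = {
    "nasa.gov": 0.95,                        # assumed trust weight, chosen by humans
    "example-chemtrails-truth.net": 0.05,    # invented domain, for illustration only
}

def score_story(source_domain: str, engagement: float) -> float:
    """Rank a story by engagement, discounted by how much we trust its source."""
    trust = TRUSTED_SOURCES.get(source_domain, 0.5)  # unknown sources get a neutral prior
    return engagement * trust

# Once the trust choice has been made, the ranking follows mechanically.
print(score_story("nasa.gov", engagement=1000.0))
print(score_story("example-chemtrails-truth.net", engagement=1000.0))
```

The interesting (and contested) work is entirely in that table: who gets a 0.95, who gets a 0.05, and who decides.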

It's not impossibly hard — it's also what journalists do when they evaluate their sources for both their likelihood of lying and their likelihood of actually knowing the facts the journalist wants to report on — but it's a choice that was deemed political at the time Galileo made it, and it has never ceased to be political since.

Facebook's position of "we're a social network; our job is to help people communicate with each other, not to help them know true things" is understandable in the abstract; its concrete moral validity, as is always the case, depends on context and consequences. History is filled with atrocities made possible because people and organizations didn't make choices that were framed as political and weren't part of their basic business model, but that should have fallen within more basic moral boundaries. Facebook's history already contains atrocities — not just problematic political developments, but literal mass killings — made possible by its choice not to make one. No algorithmic sophistication will get you out of making that choice, or out of the responsibility for its consequences.