Deep(ly) Unsettling: The ubiquitous, unspoken business model of AI-induced mental illness
Marcelo Rinesi
2018-01-08
URL

The emotionally and politically toxic effects of the ecosystem of platforms like Facebook and Twitter, together with the organizations leveraging them, might not be their intended goals, but they aren't accidents either. If you configure a data-driven system to learn the best way to induce users to stay on the platform and interact with it and its advertisers, it'll simply do that. It just so happens that the ideally compulsive, engaged user of a game or a social network, the one every algorithm is continuously trying to train through the content and rewards it offers, isn't the emotionally healthy one.

Maximizing engagement is the explicit optimization goal of contemporary online businesses. They have simply rediscovered and implemented, quickly and efficiently, the time-honored tools of compulsive gambling, gaslighting, and continuous emotional manipulation. These aren't tools that make the user mentally healthier, quite the opposite, but nobody programmed the algorithms to even measure, much less take into account, this side effect.
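As a minimal, purely illustrative sketch (the content categories, reward signal, and numbers below are made up for the example, not any platform's actual system), the core of such an optimizer can be as simple as a bandit loop whose only reward is time spent on the item:

```python
import random

# Hypothetical illustration: an epsilon-greedy bandit that picks which
# kind of content to show next. The only reward signal is engagement
# (seconds spent on the item); nothing in the objective measures or
# constrains the user's emotional state.

CONTENT_ARMS = ["calm_longform", "outrage_bait", "gossip", "fear_news"]
EPSILON = 0.1  # exploration rate

counts = {arm: 0 for arm in CONTENT_ARMS}
mean_engagement = {arm: 0.0 for arm in CONTENT_ARMS}

def choose_content():
    """Pick the content type currently expected to maximize engagement."""
    if random.random() < EPSILON:
        return random.choice(CONTENT_ARMS)              # explore
    return max(CONTENT_ARMS, key=mean_engagement.get)   # exploit

def record_engagement(arm, seconds_on_item):
    """Update the running mean reward for the content that was shown."""
    counts[arm] += 1
    mean_engagement[arm] += (seconds_on_item - mean_engagement[arm]) / counts[arm]

def simulated_user_response(arm):
    """Made-up feedback: suppose anger- and fear-inducing content happens
    to hold attention a little longer than everything else."""
    base = {"calm_longform": 20, "outrage_bait": 45, "gossip": 30, "fear_news": 40}
    return random.gauss(base[arm], 5)

for _ in range(10_000):
    arm = choose_content()
    record_engagement(arm, simulated_user_response(arm))

# The system converges on serving whatever held attention longest,
# without ever having been told to prefer it.
print(max(mean_engagement, key=mean_engagement.get))
```

Nothing in that loop is written with malice; the harm never shows up in the system's accounting because the objective has no term for it.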

And the impacts they've had so far have been achieved with technology that's already conceptually obsolete. Picture the greatest chess player in history, retrained on the day-to-day experience and reactions of billions of people into the world's most effective and least ethical behavioral therapist, fed in real time every scrap of information available about you, constantly interacting with every digital device, service, and information source you are in direct or indirect contact with, capable of choosing what's suggested for you to see and do — even of making up whatever text, audio, and video it thinks will work best — and dedicated exclusively to shaping your emotions and understanding of the world, with no regard at all for your well-being, according to the preferences of whoever or whatever is paying it the most at the moment or is best exploiting its technological vulnerabilities.

Rephrased in an allegorical way, it could be an updated version of one of Philip K. Dick's Gnostic nightmares. A video designed by a superhumanly capable AI to exploit every one of your emotional weak spots — a murder victim with a face that reminds you of a loved one, a politician's voice slightly remodulated to make it subliminally loathsome, a caption that casually inserts an indirect reference to a personal tragedy at the exact moment of the day when you're most tired and your defenses are at their lowest — wouldn't be out of place in one of his stories, but it's also just a few years away from being technologically feasible, and very explicitly on the industry's R&D roadmap. Change the words used to describe it, without changing anything of what it describes, and it's a pitch Silicon Valley investors hear a dozen times a month.

It'd be absurd to pretend we've always been sane and well-informed. Every form of media carries opportunities for both information and manipulation, for smarter societies and collective insanity. But getting things right is always a challenge. This one is ours, and it might be one of the most difficult we have ever faced. The amount of information and sheer cognitive power bent on manipulating each of us, individually, at any given minute of the day is growing exponentially, and our individual and collective ability to cope with these attempts certainly isn't. Whether and how we react to this will be a subtle but powerful driver of our societies for decades to come.