Yuval Harari Drinks the Kool Aid
Rick Searle
2017-10-30 00:00:00

In our age, authors, to the extent they are still read, or even just talked about, play the role formerly occupied by prophets or oracles. Such authorial prophecy is a role rapidly disappearing, to be replaced, many predict, by artificial intelligence and big data. It probably won't matter much. Neither is very good at predicting the future anyway.

A prophetic book badly timed doesn't mean its analysis is wrong, but perhaps just premature. Yuval Harari's Homo Deus: A Brief History of Tomorrow is either one or the other. It's either badly timed and right because it's premature, or badly timed and wrong because its analysis is deeply flawed.

For those who haven’t read the book, or as a reminder for those who have, Harari’s essential point in Homo Deus is that “Having secured unprecedented levels of prosperity, wealth and harmony, and given our past record and our current values, humanity’s next targets are likely to be immortality, happiness and divinity.” (21) Harari believes this even if he seems to doubt the wisdom of such goals, and even in light of the fact that he admits this same humanity is facing ecological catastrophe and a crisis of ever mounting inequality between, if not within, societies.

That Harari could draw this conclusion about what humanity should do next stems from his view that liberal humanism is the only real game left in town. He sees the revanche de deus in the Middle East and elsewhere as little but a sideshow; the real future of religion is now being forged in Silicon Valley.

Liberal humanism he defines as a twofold belief: on the one side, human sovereignty over nature; on the other, that the only truth, apart from the hard truths of science such humanism accepts, is the truth that emerges from within the individual herself.

It is this reliance upon the emotions welling up from the self which Harari believes will ultimately be undone by the application of science's discovery that, at rock bottom, the individual is nothing but "algorithms". Once artificial algorithms are perfected they will be able to know the individual better than that individual knows herself. Liberal humanism will then give way to what Harari calls "Dataism".

Harari's timing proved to be horribly wrong because almost the moment he proclaimed the victory of liberal humanism, all of its supposedly dead rivals, on both the right (especially) and the left (which included a renewed prospect of nuclear war), seemed to spring zombie-like from the grave, as if to show that word of their demise had been greatly exaggerated. Of course, all of these rivals (to mix my undead metaphors) were merely mummified versions of early 20th century collective insanities, which meant they were also forms of humanism. Whether one chooses to call them illiberal humanisms or variants of in-humanism is a matter of taste; all continued to have the human as their starting point.

Yet at the same time nature herself seemed determined to put paid to the idea that any supposed transcendence of humanity over nature had occurred in the first place. The sheer insignificance of human societies in the face of storms, where an "average hurricane's wind energy equals about half of the world's electricity production in a year. The energy it releases as it forms clouds is 200 times the world's annual electricity use," and the heat energy of a fully formed hurricane is "equivalent to a 10-megaton nuclear bomb exploding every 20 minutes," has recently been made all too clear. The idea that we've achieved the god-like status of reigning supreme over nature isn't only a fantasy, it's proving to be an increasingly dangerous one.

That said, Harari remains a compassionate thinker. He's no Steven Pinker, brushing past and present human and animal suffering under the rug so he can make his case that things have never been better. Also, unlike Pinker and his fellow travelers convinced of the notion of liberal progress, Harari maintains his sense of the tragic. Sure, 21st century peoples will achieve the world humanists have dreamed of since the Renaissance, but such a victory, he predicts, will prove Pyrrhic. Such individuals, freed from the fear of scarcity, emotional pain, and perhaps even death itself, will soon afterward find themselves reduced to puppets with artificial intelligence pulling the strings.

Harari has drunk the Silicon Valley Kool Aid. His cup may be half empty when compared to that of other prophets of big data whose juice is pouring over the styrofoam edge, but it's the same drink just the same.

Here's Harari manifesting all of his charm as a writer on this coming Dataism in all its artificial saccharine glory:

“Many of us would be happy to transfer much of our decision making processes into the hands of such a system, or at least consult with it whenever we make important choices. Google will advise us which movie to see, where to go on holiday, what to study in college, which job offer to accept, and even whom to date and marry. ‘Listen Google’, I will say ‘both John and Paul are courting me. I like both of them, but in different ways, and it’s so hard for me to make up my mind. Given everything you know, what do you advise me to do?’

And Google will answer: ‘Well, I’ve known you since the day you were born. I have read all your emails, recorded all your phone calls, and know your favorite films, your DNA and the entire biometric history of your heart. I have exact data about each date you went on, and, if you want, I can show you second-by-second graphs of your heart rate, blood pressure and sugar levels whenever you went on a date with John or Paul. If necessary, I can even provide you with an accurate mathematical ranking of every sexual encounter you had with either of them. And naturally, I know them as well as I know you. Based on all this information, on my superb algorithms, and on decades’ worth of statistics about millions of relationships- I advise you to go with John, with an 87 percent probability that you will be more satisfied with him in the long run.’” (342)

Though at times in Homo Deus Harari seems distressed by his own predictions, in the quote above he might as well be writing an advertisement for Google. Here he merely echoes the hype for the company expressed by Eric Schmidt, Executive Chairman of Alphabet (Google's parent company). It was Schmidt who gave us such descriptions of Google's ultimate aims as:

We don’t need you to type at all because we know where you are. We know where you’ve been. We can more or less guess what you’re thinking about.

And it was Schmidt who said that the limit on how far into the lives of its customers the company would peer would be "to get right up to the creepy line and not cross it". In the pre-Snowden Silicon Valley salad days Schmidt had also dryly observed:

If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.

It's not that Harari is wrong in suggesting that entities such as Google will continue to use technology to get right under their customers' skin; it's that he takes their claims to know us better than we know ourselves, or at least to be on the road to such knowledge, as something other than extremely clever PR.

My doubts about Google et al's potential to achieve the omnipotence of Laplace's Demon don't stem from any romantic commitment to human emotions but from the science of emotion itself. As the cognitive neuroscientist Lisa Feldman Barrett has been vocally trying to inform a public suffused with antiquated notions about how the brain actually works: physiologists have never been able to discover a direct correlation between a bodily state and a perceived emotion. A reported emotion, like anger, will not only manifest itself in physiologically distinct ways in two different individuals; at different times anger can manifest itself physiologically differently in the same individual.

Barrett also draws our attention to the fact that there is little evidence that particular areas of the brain are responsible for a specific emotion, implying, to my lights, that much of current fMRI scanning based on blood flows and the like may face the same fate as phrenology.

Thus the kinds of passive "biometric monitoring" Harari depicts seem unlikely to lead to an AI that can see into a person's soul in the way he assumes, which doesn't mean algorithm-centric corporations won't do their damnedest to make us think they can do just that. And many individuals probably will flatten and distort aspects of life that do not lend themselves to quantification in a quixotic quest for certainty, flattening their pocketbooks at the same time.

True believers in the "quantified self" will likely be fooled into obsessive self-measurement by the success of such methods in sports, along with the increasing application to them of such neo-Taylorist methods in the workplace. Yet, while perfecting one's long-short technique, or improving at some routine task, is easily reducible to metrics, most of life, and almost all of the interesting parts of living, are not. A person who believed in his AI's "87 percent probability" would likely think they were dealing with science when in reality they were confronting a 21st-century version of the Oracle at Delphi, sadly minus the hallucinogens.

Even were we able to reach deep inside the brain to determine the wishes and needs of our "true selves", we'd still be left with these conundrums. The decisions of an invasive AI that could override our emotions would either leave us feeling that we had surrendered our free will to become mere puppets, or would be indistinguishable from the biologically evolved emotional self we were trying to usurp. For the fact of the matter is that the emotions we so often confuse with the self are nothing but an unending wave of internal contentment and desire that has oscillated within us since the day we were born. As a good Buddhist, Harari should know this. Personhood consists not in this ebb and flow, but emerges as a consequence of our commitments and life projects, and they remain real commitments and legitimate projects only to the extent we are free to break or abandon them.

The God-like certainty and control Harari's central assumption in Homo Deus holds humanity to be on the verge of obtaining is, of course, a social property much more than civilization's longed-for gift to individuals. The same kind of sovereignty he predicts individuals will gain over the contingencies of existence and their biology, he believes they will collectively exercise over nature itself. Yet even collectively, and at the global scale, such control is an illusion.

The truth implied in the idea of the Anthropocene is not that humanity now lords over nature, but that we have reached such a scale that we have ourselves become one of nature's forces. Everything we do at scale, whatever its intention, results in unforeseen consequences we are then forced to react to, and so on and so on, in a cycle that is now clearly inescapable. Our eternal incapacity to be self-sustaining is the surest sign that we are not God. As individuals we are inextricably entangled within societies, with both in turn entangled in nature herself. This is not a position from which either omniscience or omnipotence is in the offing.

Harari may have made his claims as a warning, giving himself the role of ironic prophet preaching not from a Levantine hillside but a California TED stage. Yet he is likely warning us about the wrong things. As we increasingly struggle with the problems generated by our entanglement, as we buckle when nature reacts, sometimes violently, to the scale of our assaults and torque, as we confront a world in which individuals and cultures are wound ever more tightly, and uncomfortably, together, we might become tempted to look for saviors. One might then read Homo Deus and falsely conclude that the entities of Dataism should fill such a role, not because of their benevolence, but on account of their purported knowledge and power.