Today, large streams of data, coupled with statistical analysis and sophisticated algorithms, are rapidly gaining importance in almost every field: science, politics, journalism, and much more. What does this mean for the future of work?
For those who have been paying attention to the exponential trends in computing and information technology, this should come as no surprise.
Many critics used to say that a computer could never beat the best human at chess, because computers rely on inefficient brute-force search rather than the intuition and hierarchical structures our brains use. Yet in the 1997 Deep Blue versus Garry Kasparov match, the IBM machine beat the reigning World Chess Champion, in what has been called “the most spectacular chess event in history”.
History repeated itself in 2011, when IBM’s Watson defeated two Jeopardy! champions: Brad Rutter, the biggest all-time money winner (more than $3.4 million), and Ken Jennings, the record holder for the longest championship streak (74 wins). Just before the match, more or less the same arguments raised in 1997 were leveled against a machine that crunched some 200 million pages of text with sophisticated AI. Yet the machine won again.
Recently, legendary linguist Noam Chomsky was interviewed on the development of Artificial Intelligence over the years. According to the MIT Professor, the heavy use of statistical methods and large corpora of data is unlikely to yield any significant scientific insights into the study of language, because you “can get a better and better approximation”, “but you learn nothing about the language”. That may be so. But Peter Norvig, Director of Research at Google, points out in his critical response that “grammaticality is not a categorical, deterministic judgment but rather an inherently probabilistic one. This becomes clear to anyone who spends time making observations of a corpus of actual sentences, but can remain unknown to those who think that the object of study is their own set of intuitions about grammaticality [...] it is observation, not intuition that is the dominant model for science.”
It’s difficult to say who is right on this (only time will tell), but it’s rather easy to see which approach has had the greater commercial success. Essentially 100% of search engines, speech recognition, machine translation, and word sense disambiguation systems, and most coreference resolution, part-of-speech tagging, and question answering algorithms, are trained on large data sets and are probabilistic in nature.
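To make that concrete, here is a minimal sketch, not any production system, of the probabilistic approach Norvig describes: a bigram language model that, trained on a toy three-sentence corpus, assigns every sentence a probability rather than a binary grammatical/ungrammatical verdict. The corpus and the add-one smoothing are illustrative assumptions.

```python
from collections import defaultdict

# A toy corpus standing in for the "large corpora" that real systems train on.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat saw the dog",
]

unigram_counts = defaultdict(int)
bigram_counts = defaultdict(int)
vocab = set()

for sentence in corpus:
    tokens = ["<s>"] + sentence.split() + ["</s>"]  # sentence boundary markers
    vocab.update(tokens)
    for prev, word in zip(tokens, tokens[1:]):
        unigram_counts[prev] += 1
        bigram_counts[(prev, word)] += 1

V = len(vocab)  # vocabulary size, used for add-one (Laplace) smoothing

def sentence_probability(sentence: str) -> float:
    """Score a sentence with add-one smoothed bigram probabilities."""
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    prob = 1.0
    for prev, word in zip(tokens, tokens[1:]):
        prob *= (bigram_counts[(prev, word)] + 1) / (unigram_counts[prev] + V)
    return prob

# A fluent word order scores higher than a scrambled one: a graded,
# probabilistic judgment rather than a categorical yes/no on grammaticality.
print(sentence_probability("the cat sat on the rug"))   # relatively high
print(sentence_probability("rug the on sat cat the"))   # orders of magnitude lower
```

Both sentences get a nonzero score, but the fluent one scores far higher: exactly the graded notion of grammaticality Norvig argues for, and the same basic principle, scaled up enormously, behind the commercial systems listed above.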
Our intuitions and insights develop roughly linearly over time, but the amount of data at our disposal, and the computing power available to interpret it, are increasing exponentially. This has had a profound effect on the workforce, and it will have an even greater one in the future. Forbes already uses Narrative Science, an innovative technology company, to create rich narrative content from data. Google News aggregates millions of news stories and clusters them accurately in a matter of seconds, a task that no group of humans could ever dream of performing. Facebook’s and Amazon’s recommendation algorithms are far too complex to be matched by any man or woman with “good intuition”. The list goes on and on.
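The gap between those two growth curves compounds faster than intuition suggests. A back-of-the-envelope sketch makes the point; the rates are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope: a quantity that grows by a fixed step per period
# versus one that doubles every period. The starting values and rates are
# illustrative assumptions; only the shape of the curves matters.
for year in (0, 5, 10, 15, 20):
    linear = 1 + year        # fixed increment each period
    exponential = 2 ** year  # doubling each period
    print(f"year {year:2d}: linear = {linear:3d}, exponential = {exponential:9,d}")
```

After twenty doubling periods the exponential quantity is roughly fifty thousand times the linear one. Whatever the true rates, curves with these shapes inevitably diverge.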
It appears that whenever we think computers cannot outsmart humans at some task, it’s only a matter of time before we are proven wrong, again and again. How will this affect the labor force? What will happen to the economy, in light of the rapid changes ahead of us? The answers to these questions are not trivial, and probably nobody knows them with certainty. It is my hope that we will soon start a conversation on this topic, which I believe is of the utmost importance and should be at the center of our public debate.
The future of the economy and of society is deeply uncertain. However, I think it will depend on us: on how we decide to use the prodigious technologies we are developing, and for what purpose. To ensure that we take the right path, we must start a serious conversation on this issue before it’s too late.