The impact of technological progress on jobs has been the topic of countless books: most of them are forgotten because they were so wrong. Predicting the future has always been a lucrative business (the Oracle of Delphi, Nostradamus, George Orwell), but rarely a science. If all of them had been right, today we would all be unemployed and, in fact, extinct. Instead, guess what: humans are wealthier than at any time in history, the world has never been so peaceful, and we buy machines by the millions. Pistono's book is the refreshing exception: no, we are not doomed. That, per se, is a good reason to read it.
The book's breadth is impressive: its chapters touch on Economics, Sociology, Philosophy, Morality and Artificial Intelligence, sometimes within the same paragraph. It is organized as a collection of self-standing essays, even though each one ends by implying a segue into the next. One can read pretty much any chapter independently and still get the point. In fact, my first criticism would be that the flow is not quite there: i get the premises, but not quite the "proof" of the theorem. It works better as a set of disconnected meditations on the present and the future. In this sense, i feel that it is an unfinished book: the author had thoughts on so many subjects that just selecting the ones to include must have taken most of the design, and a future edition will have to work out a more comprehensive framework around them. In particular, the book ends up being two books in one. The first is about the impact of today's computing technology, and that's the one i was mostly interested in; therefore most of this review focuses on the first half. The second is a book on ethics in such a high-tech world, and i am neither competent nor terribly interested in discussing it: when religions withdraw, people can argue forever about what is right and what is wrong, and each of the book's recommendations for a better life is based on good intentions, not necessarily on science (at least, the science is not explained). I'll trust the good intentions for now.
The first part of the book touches on so many of the topics that i routinely discuss these days. I can't resist hijacking this review and summarizing my thoughts on some of the most popular theses.
The book's premise is that we (humans) are becoming obsolete because machines will soon take our place. This refrain is repeated often in the media; in fact, it has been repeated ever since the invention of the assembly line and of the typewriter.
In order to understand what we are talking about, we need to define what "us" is. Assembly lines, typewriters, computers, search engines and whatever comes next have replaced jobs that have to do with material life. I could simply say that they have replaced "jobs". They have not replaced "people": they replaced their jobs. Therefore what went obsolete has been jobs, not people, and what is becoming obsolete is jobs, not people. Humans, to me, are biological organisms who (and not "that") write novels, compose music, make films, play soccer, ride the Tour de France, discover scientific theories, hike on mountains and recommend restaurants. Which of these activities are becoming obsolete because machines do them better? My favorite question in private conversations on machine intelligence is: when will a machine be able to cross a street that doesn't have a traffic light? Machines are not even remotely close to doing any of what i consider "human". In fact, there has been virtually no progress in building a machine that can cross that street.
Machines are certainly good at processing big data at lightning speed. Fine. We are rapidly becoming obsolete at doing that; soon we will have a generation that cannot do arithmetic. But in fact we never really did it: very few humans ever spent their time analyzing big data. The vast majority of people are perfectly content with small data: the price of gasoline, the name of the president, the standings in the soccer league, the change in my pocket, the amount of my electricity bill, my address, etc. Humans have mostly been annoyed by big data. That was, in fact, a motivation to invent a machine that would take care of big data. The motivation to invent a machine that rides the Tour de France is minimal, because we actually enjoy watching (human) riders sweat on those steep mountain roads, and many of us enjoy emulating them on the hills behind our home.
So we can agree that what is becoming obsolete is not "us" but our current jobs. That has been the case since the invention of the first farm (that made obsolete the prehistoric gatherers) and, in fact, since the invention of the wheel (that probably made obsolete many who were making a living carrying goods on their backs).
And this offers me a good segue to the topic of jobs.
The economic analysis at the foundation of this book is clearly superficial, and the author recognizes it at the outset. However, some macroscopic factors should have been mentioned in a book that opens with a discussion on chronic unemployment in our age.
The first and major one is the end of the Cold War. In 1991 the capitalist world started expanding: before 1991 the economies that really counted were a handful (USA, Japan, Western Europe). After 1991 the number of competitors for the industrialized countries skyrocketed, and they keep getting better and better. Technology might have "stolen" some jobs, but that factor pales in comparison with the millions of jobs that were exported to Asia. In fact, if one considers the totality of the capitalist world, an incredible number of jobs were created precisely during the period in which Pistono claims that millions of jobs were lost. If Kansas loses one thousand jobs but California creates two thousand, we consider it an increase in employment. Pistono makes the mistake of applying the old nation-based logic to a globalized world. When counting jobs lost or created during the last twenty years, one needs to consider the entire interconnected economic system. In the first pages he mentions employment data for the USA but has nothing to say about employment over the same period in China, India, Mexico, Brazil, etc. Those now rank among the main trading partners of the USA, and, more importantly, business is multinational. If General Motors lays off one thousand employees in Michigan but hires two thousand in China, it is not correct to simply conclude that "one thousand jobs have been lost". If the car industry in the USA loses ten thousand jobs but the car industry in China gains twenty thousand, it is not correct to conclude that ten thousand jobs have been lost in the car industry. In all of these cases jobs have actually been created.
There are other factors to keep in mind, although none as pivotal as globalization. For example, energy. This is the age of energy. Energy has always been important for economic activity, but never like in this century. The cost and availability of energy are among the main factors that determine growth rates, and therefore employment: the higher the cost of energy, the fewer the goods that can be produced, and the fewer the people we employ. If forecasts by international agencies are correct (see this recent news), the coming energy boom might have a bigger impact on employment in the USA than computing technology.
Then there are sociopolitical factors. Unemployment is high in Western Europe, especially among young people, not because of technology but because of rigid labor laws and government debt. A company that cannot lay off workers is reluctant to hire any. A government that is indebted cannot pump money into the economy.
Another major factor that accounts for massive losses of jobs in the developed world is the management science that emerged in the 1920s in the USA. That science (never mentioned in this book) is the main reason that today companies don't need as many employees as comparable companies employed a century ago. Each generation of companies has been "slimmer" than the previous generation. As those management techniques get codified and applied massively, companies become more efficient at manufacturing (across the world) and selling (using the most efficient channels) and at predicting business cycles. All of this results in fewer employees not because of automation but because of optimization.
Unemployment cannot be explained simply by looking at the effects of technology. Technology is one of many factors and, so far, not the main one. There have been periods of rapid technological progress that actually resulted in very low unemployment, most recently the 1990s, when e-commerce was introduced. Here and there i also found misunderstandings about how the modern age compares with previous ones. For example, Pistono quotes a book according to which "147 megacorporations control 40% of the world". Aside from the vagueness of the statement (40% of what? GDP? land? people?), this is nothing new. At its peak Standard Oil was the dominant oil company worldwide (so much so that the government itself forced it to split). At its peak AT&T owned almost 100% of telephony in the USA and had revenues comparable to the GDP of many independent countries (again, the government forced it to split). At its peak RCA pretty much owned the radio business. IBM ruled the world of computing in the 1960s. In fact, the 1920s were probably the age of monopolies. The statement about the "147 megacorporations" might or might not be correct: my point is that a historical perspective is always needed to understand whether something is new or a case of "same old, same old". Anyway, it is undeniable that automation has contributed dramatically to eliminating jobs, so let's look at "intelligent machines". But first a little digression on the brain's container: the body.
A lot of what books on machine intelligence say is based on a brain-centered view of the human being. I may agree that my brain is the most important organ of my body (i'm ok with transplanting just about any organ of my body but not my brain). However, this is not what evolution had in mind. The brain is one of the many organs designed to keep the body alive so that the body can find a mate and make children. The brain is not the goal but one of the tools to achieve that goal. (Incidentally, i always remind people, especially when the discussion is about "progress" and "immortality", that the longest-living beings have no brain: trees and bacteria.)
Focusing only on mental activities when comparing humans and machines is a category mistake. Humans do have a brain, but they don't belong to the category of brains: they belong to the category of animals, which are mainly recognizable by their bodies. Therefore, one should compare machines and humans based on bodily actions, and not just on printouts, screenshots and files. Of all the things that i do during a day (from running to reading a book), what can a machine do? What will a machine be able to do in ten years? In 20 years? In 200 years? I suspect we are very far from the day that a machine can simply play soccer in any meaningful way with six-year-old children, let alone with champions. Playing a match of chess with the world champion of chess is actually easy; it is much harder for a machine to do any of the things that we routinely do in our home.
Furthermore, there's the meaning of the action. The children who play soccer actually enjoy it. They scream, they are competitive, they cry if they lose, they can be mean, they can be violent. There is passion in what we do. Will an android that plays decent soccer in 3450 (a realistic date, in my opinion) also have all of that? Let's take something simpler, which might happen in 50 or 100 years: at some point we'll have machines capable of reading a novel; but will they understand what they are reading? Is it the same "reading" that i do?
The body is the reason why i think the Turing Test is not very meaningful. The Turing Test locks a computer and a human being in two rooms and, by doing so, removes the body from the test. My test (let's immodestly call it the Scaruffi Test) would be different: we give a soccer ball to both the robot and the human and see who dribbles better. If you remove the body from the test, you are removing pretty much everything that defines a human being as a human being. A brain kept in a jar is not a human being: it is a gruesome tool for anatomy classrooms.
Now we can talk about what it means for a machine to be "intelligent".
In private conversations about "machine intelligence" i like to quip that it is not intelligent to talk about intelligent machines: whatever they do is not what we do, and therefore is neither "intelligent" nor "stupid" (attributes invented to describe human behavior). Talking about the intelligence of a machine is like talking about the leaves of a person: trees have leaves, people don't. "Intelligence" and "stupidity" are not properties of machines: they are properties of humans. We apply to machines many words invented for humans simply because we don't have a vocabulary for the states of machines. For example, we buy "memory" for our computer, but that is not memory at all: it doesn't remember (it simply stores) and it doesn't even forget, the two defining properties of memory. We call it memory for lack of a better word. We talk about the "speed" of a machine, but it is not the "speed" at which a human being rides or drives. We don't have a vocabulary for machine behavior, so we borrow words from the vocabulary of human behavior. It is a mistake to assume that, because we use the same word to name two things, they are the same thing. If i see a new kind of fruit and call it "apple" because there is no word in my language for it, that doesn't make it an apple. A computer does not "learn": what it does when it refines its data representation is something else (something we don't do). One of the fundamental states of human beings is "happiness". When is a machine "happy"? The question is meaningless: it's like asking when a human being needs to be watered. You water plants, not humans. Happiness is a meaningless word for machines. Of course, some day we may start using the word "happy" to mean, for example, that the machine has achieved its goal or that it has enough electricity; but that would simply be a linguistic expedient. The fact that we may call it "happiness" does not mean that it "is" happiness.
If you call me Peter because you can't spell my name, it does not mean that my name is Peter.
The objection to what i am saying is typically behavioristic in nature: who cares what a machine does and how it does it, let's just measure its performance and see if it matches human performance.
Alas, despite all the hoopla, to me machines are still way less "intelligent" than a six-month-old baby, and even less intelligent than a chimp. Recent experiments with neural networks were hailed as incredible triumphs by computer scientists because a computer finally managed to recognize a cat (at least a few times) after being presented with thousands of images of cats. How long does it take a toddler to learn what a cat looks like? And that's despite the fact that computers use the fastest possible communication technology, whereas the neurons of a toddler's brain use hopelessly old-fashioned chemical signaling. One of the very first applications of neural networks was to recognize numbers. Sixty years later the ATM of my bank still cannot recognize the amounts on 50% of the cheques that i deposit.
Incidentally, the modern "machine learning" techniques that Pistono inevitably mentions as revolutionizing the field are actually old. The mathematical foundations come from work by Geoffrey Hinton, who discovered algorithms for working efficiently with Restricted Boltzmann Machines and for stacking them one on top of the other. Boltzmann Machines originated from Hopfield's and Smolensky's work in the 1980s. So we are talking about a technology that is 30+ years old. What that technology does is very simple: lots of number crunching. It is a smart way to manipulate large datasets for the purpose of classification. It was not enabled by a groundbreaking paradigm shift but simply by increased computing power: given the computers of 30 years ago, nobody would have tried to build something like Andrew Ng's cat-recognition experiment. (See my article Artificial Intelligence and Brute Force.) There has been very little progress in machine learning over the last 30 years. I fail to see the "accelerating progress".
Even if humans eventually become so dumb that they cannot ride a bike anymore, my question will remain the same: how long does it take for a computer to learn how to ride a bike? or, more realistically, how long will it take before we have computers that are at least capable of trying to learn how to ride a bike?
And that's a good segue to discuss progress.
A postulate at the basis of this book and of many other contemporary books (particularly those by futurists and self-congratulating technologists) is that we live in an age of unprecedented rapid change and progress. But is our age truly so unique?
One century ago in a relatively short time the world adopted the car, the airplane, the telephone, the radio and the record, while at the same time the visual arts went through Impressionism, Cubism and Expressionism, while at the same time Quantum Mechanics and Relativity happened in science. The years since World War II have witnessed a lot of innovation, but most of it has been gradual and incremental. We still drive cars and make phone calls. Cars still have four wheels and planes still have two wings. We still listen to the radio and watch television. While the Computer and Genetics have introduced powerful new concepts, and computers have certainly changed lifestyles, i wonder if any of these "changes" compare with the notion of humans flying in the sky and of humans located in different cities talking to each other.... There has been rapid and dramatic change before.
Then one should discuss "change" versus "progress". If i randomly change all the cells in your body, i may boast of "very rapid change" but not necessarily of "very rapid progress". Assuming that any change equates with progress is not only optimism: it's the recipe for ending up with exactly the opposite of progress.
Ray Kurzweil has been popularizing the idea that exponential growth is leading towards the "singularity". The expression "exponential growth" is often used to describe our age. Trouble is: it has been used to describe just about every age since the invention of exponentials. In every age, some things grow exponentially while others don't. For every technological innovation there was a moment when it spread "exponentially", whether it was church clocks or windmills, reading glasses or steam engines; and their "quality" improved exponentially for a while, until the industry matured or a new technology took over. Moore's law (which translates into the doubling of processing power every 18 months) is nothing special: similar laws can be found for many of the old inventions. Think how quickly radio receivers spread: in the USA there were only five radio stations in 1921 but already 525 in 1923. Cars? The USA produced 11,200 in 1903, but already 1.5 million in 1916. By 1917 a whopping 40% of households in the USA had a telephone, up from 5% in 1900. There were fewer than one million subscribers to cable television in 1984, but more than 50 million by 1989. The Wright brothers flew the first plane in 1903. During World War I (1915-18) France built 67,987 planes, Britain 58,144, Germany 48,537, Italy 20,000 and the USA 15,000, for a grand total of almost 200,000 planes, just 15 years after the plane's invention. I am sure that similar statistics can be found for older inventions, all the way back to the invention of writing. Perhaps each of those ages thought that growth in those fields would continue at the same pace forever. The wisest, though, must have foreseen that eventually growth starts declining in every field. In a sense Kurzweil claims that computing is the one field in which growth will never slow down; in fact, it will keep accelerating.
David Deutsch's "The Beginning of Infinity" (Viking, 2011) is a much more powerful defense of that thesis (see my review).
In my Alan Turing tribute of early 2012 (Machine Intelligence vs Human Stupidity: Are we building smarter machines or dumber humans?) i argued that it is not so much "intelligence" that has accelerated in machines (their intelligence is the same that Alan Turing gave them when he invented his "universal machine") but miniaturization. In fact, Moore's law has nothing to do with machine intelligence, but simply with how many transistors one can squeeze on a tiny integrated circuit. There is very little that machines can do today that they could not have done in 1950 when Turing published his paper on the "intelligence test". What has truly changed is that today we have extremely powerful computers squeezed into a palm-size smartphone at a fraction of the cost. That's miniaturization. Equating miniaturization to intelligence is like equating an improved wallet to wealth.
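The "doubling" arithmetic behind Moore's law is easy to make concrete; here is a minimal sketch in Python (the function name is mine, purely illustrative, not from the book):

```python
# Moore's-law-style compounding: one doubling of transistor density
# every `period` months. This is plain arithmetic, nothing more.

def growth_factor(months, period=18):
    """Multiplicative growth after `months`, doubling once per `period` months."""
    return 2.0 ** (months / period)

# After 15 years (180 months) at one doubling per 18 months,
# density has grown by 2**10 = 1024 times.
print(growth_factor(180))
```

The point of the text stands: this curve describes miniaturization, and the size of the exponent says nothing about what the transistors compute.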
(Pistono reproduces a diagram by Kurzweil titled "Exponential Growth in Computing", but it is bogus because it starts with the electromechanical tabulators of a century ago: it is like comparing the power of a windmill with the power of a horse. Sure, there is an exponential increase in power, but it doesn't mean that windmills will keep improving by the difference between horsepower and windpower.)
Predictions about future exponential trends have almost always been wrong. Remember the prediction that the world's population would "grow exponentially"? Now we are beginning to fear that it will actually start shrinking (it already is in Japan and Italy). Or the prediction that energy consumption in the West would grow exponentially? It peaked a decade ago; as a percentage of GDP, it is actually declining rapidly. Life expectancy? It rose rapidly in the West between 1900 and 1980, but since then it has barely moved. War casualties were supposed to grow exponentially with the invention of nuclear weapons: since that invention the world has experienced the lowest number of casualties ever (places like Europe, which had been at war for 1500 years, have not had a major war in 60 years). (For those who don't know, Kurzweil's "The Singularity Is Near" of 2005 is a revision of his 1999 book "The Age of Spiritual Machines", which was a revision of his 1990 book "The Age of Intelligent Machines", and that indeed is a kind of exponential trend.)
What is truly accelerating at exponential speed is fashion. This is another point where many futurists confuse a sociopolitical event with a technological event. We live in the age of marketing. If we did not invent anything, absolutely anything, there would still be hectic change. Change is driven by marketing. The industry desperately needs consumers to go out and keep buying newer models of old products, or new products altogether. Therefore we buy things we don't need. The younger generation is always more likely to be duped by marketing, and soon the older generations find themselves unable to communicate with young people unless they too buy the same things. Sure: many of these things are convenient and soon come to be perceived as "necessities"; but the truth is that humans lived well (sometimes better) for millennia without those "necessities". The idea that an mp3 file is better than a compact disc, which is better than a record, is just that: an idea, and mainly a marketing idea. The idea that a streamed movie is better than a DVD, which is better than a VHS tape, is just that: an idea, and mainly a marketing idea. Steve Jobs was not necessarily a master of technological innovation (it is debatable whether he ever invented anything) but he was certainly a master of marketing new products to the masses. What is truly accelerating is the ability of marketing strategies to create the need for new products. Therefore, yes, our world is changing more rapidly than ever; not because we are surrounded by better machines but because we are surrounded by better snake-oil peddlers (and dumber consumers).
Tip for Future Edition: You Are Your Tools
I have always said that "you are the people with whom you surround yourself" (meaning that the company you keep defines your aspirations, your context, your beliefs and even your daily behavior), but now i am increasingly convinced that, just like the spider "is" its cobweb, we are the tools that we use. Too much of my daily life depends on the tools that are available today. If i had lived a century ago, my life would have been completely different. There might be a piece of me that is there regardless of what i do, and there might be a piece of me that is influenced by parents, relatives, friends and society, but too many hours of every single day depend on the tools that are available to me. As technology changes, it would be more interesting to discuss how it impacts "me", not just my employment status.
The second half of the book is (probably inadvertently) more ambitious: it aims at sketching a morality for an age in which automation will make jobs obsolete. Pistono's line of attack is that automation will make jobs obsolete, therefore people unhappy, therefore we need to find new meaning.
This section of the book starts with a discussion of happiness. I can list many great thinkers of the past who tackled that subject, and none of them was particularly convincing (so much so that philosophers still make a living writing about happiness). Using wealth as a measure of happiness has all sorts of obvious problems. The country with the highest suicide rate is now France, followed by Japan and Scandinavian countries, all very rich countries. Pistono deals with the relationship between work and happiness, and the negative impact that unemployment has on happiness. Again, many books have been written on the subject. One article that made an impact on me (Foucault?) pointed out that teenagers are perfectly happy to be unemployed; then we screw up their lives by brainwashing them about having a career, and for the rest of their lives they will be happy only if they do have a career.
Another interesting topic is how "busy" people are. Jan English-Lueck has written books about the busy lives of Silicon Valley, which might be the best introduction to modern "busy" lives. My sense is that work deprives humans of meaning, and then people need to fill their lives with everything they can (from skydiving to salsa lessons) simply to 1) forget how meaningless their lives are and 2) see if they can accidentally stumble into a meaning of life. In places where people are very religious the motivation to be always so busy is not very high: meaning is in the afterlife.
Pistono's recommendations for a "better" life include education (he does not specify on which topics), growing your own food, eating less meat, using public transportation, etc. I am not competent to decide which of these are founded on sound science (i have been a vegetarian for almost 40 years on ideological grounds, but i have highly educated friends who swear that a meat-only diet is the healthiest and the most natural), what effects they will have on employment (if we consume less, don't we cause more unemployment?) and on happiness (most of my highly educated friends are depressed, not happy, and two of them have tried to commit suicide). Regardless, the attempt is heroic: basically, Pistono is trying to construct a future society in which humans will be happy even though they will be less necessary. Instead of an apocalyptic view of the future, Pistono is the rare prophet with a Panglossian view of it.
Robots Will Steal Your Job, But That’s OK: how to survive the economic collapse and be happy [Paperback]
piero scaruffi is an author, cultural historian and blogger who has written extensively about a wealth of topics, ranging from cognitive science to music.