Why the Singularity is not Coming any Time Soon
piero scaruffi
2013-02-11

Sociological Background



Historians, scientists and poets alike have written that the human being strives for the infinite. In the old days this meant striving to become one with the god who created and rules the world. As atheism began to make strides, Schopenhauer rephrased the concept as a universal "will" (his "will to live"), and Nietzsche, who confirmed that god is dead, as a "will to power"; the search for the "infinite" became a mathematical and scientific program instead of a mystical one. Russell, Hilbert and others started a logical program that basically aimed at making it possible to prove and discover everything that can be proven and discovered. The perspective therefore changed: instead of something that humans have to attain, the infinite became something that humans will build. Then came the Singularity, a concept popularized by Ray Kurzweil's "The Age of Intelligent Machines" (1990) and his subsequent, highly successful public-relations campaign. The idea is that we are about to witness the advent of machines that are more intelligent than humans, so intelligent that humans can neither control them nor understand them.

Artificial Intelligence, which had largely languished, staged a revival of sorts, at least in the eyes of public opinion. Now every achievement in the field of A.I. is immediately devoured by the mainstream media and hailed as a step towards machine domination. In the age that has seen the end of the Space Race, the retirement of the Concorde, the decline of nuclear power, and the commercialization of the Internet (which basically turned a powerful scientific tool into little more than a marketing tool and a form of light entertainment), machine intelligence seems to bring some kind of collective reassurance that we are not, after all, entering a new Dark Age. On the contrary, we are witnessing the dawn of a superhuman era. Of course, decades of science-fiction books and movies helped create the ideal audience for this kind of scenario.



A Brief History of Artificial Intelligence



To me a history of machine intelligence begins with Alan Turing's "universal machine" (originally conceived in 1936). He never built it, but Turing realized that one could create the perfect mathematician by simulating the way logical problems are solved: by manipulating symbols. The first computers were not Universal Turing Machines, but every computer built since the ENIAC (1946), including all the laptops and smartphones that are available in 2013, is a UTM. Because Turing's model was founded on predicate logic, which only admits two truth values ("true" and "false"), the computer at the heart of any "intelligent" machine relies on binary logic (zeros and ones).
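
To make the idea of "manipulating symbols" concrete, here is a minimal sketch (in Python, purely for illustration) of a Turing-style machine: a table of rules that reads a symbol, writes a symbol, moves a head and changes state. The example rule table, which merely inverts a string of 0s and 1s, is invented for this page; it is not one of Turing's machines.

```python
# A minimal Turing-machine simulator: a finite rule table reading and
# writing symbols on a tape that grows on demand. Purely illustrative.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        if head == len(tape):          # grow the tape to the right as needed
            tape.append(blank)
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

# Rule table: (state, symbol) -> (symbol to write, head move, next state).
# This hypothetical machine just inverts a binary string.
invert_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(invert_rules, "0110"))  # prints 1001_ (blank marks where it halted)
```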

Norbert Wiener's Cybernetics (1948) did much to show the relationship between machines and (non-intelligent) living organisms. Since then the "Turing Test", introduced by Alan Turing in "Computing Machinery and Intelligence" (1950), has been considered the kind of validation that a machine has to pass in order to be considered "intelligent": if a human observer, asking all sorts of questions, cannot tell whether the agent providing the answers is human or mechanical, then the machine has become intelligent (or, better, as intelligent as the human being). The practitioners of Artificial Intelligence quickly split into two camps. One, pioneered by Allen Newell and Herbert Simon with their "Logic Theorist" (1956), basically understood intelligence as the pinnacle of mathematical logic. The first breakthrough in this branch of A.I. was probably John McCarthy's article "Programs with Common Sense" (1959): McCarthy understood that some day machines would be better than humans at many repetitive and computational tasks, but that "common sense" is what really makes someone "intelligent", and common sense comes from knowledge of the world. That article spawned the discipline of "knowledge representation": how can a machine acquire knowledge of the world and use it? This approach was somehow "justified" by the idea, introduced by Noam Chomsky in "Syntactic Structures" (1957), that linguistic competence rests on grammatical rules that determine which sentences are correct in a language. This line of research led to "knowledge-based systems" (or "expert systems"), such as Ed Feigenbaum's Dendral (1965), which consisted of an inference engine (the repertory of legitimate reasoning techniques) and a knowledge base (the "common sense" knowledge). This technology relied on acquiring knowledge from domain experts in order to create "clones" of such experts (machines that performed as well as the human experts). The limitation of expert systems was that they were "intelligent" only in one specific domain.
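
As a rough illustration of the expert-system architecture just described, the sketch below separates a tiny knowledge base of if-then rules from a forward-chaining inference engine that applies them until no new facts appear. The rules and facts are invented for this example; they are not taken from Dendral, Mycin or any real system.

```python
# A minimal sketch of the expert-system split between knowledge base and
# inference engine. The rules below are toy examples, not real medical advice.

knowledge_base = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def forward_chain(rules, facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire a rule when all of its conditions are already known facts.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain(knowledge_base, {"has_fever", "has_cough", "short_of_breath"}))
# -> the derived facts include 'possible_flu' and 'see_doctor'
```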

Meanwhile, the other branch of Artificial Intelligence was pursuing a completely different approach: simulate what the brain does. Since neuroscience was just at the beginning (machines to study living brains became available only in the 1970s), computer scientists only knew that the brain consists of a huge number of interconnected neurons, and neuroscientists were becoming ever more convinced that "intelligence" was due to the connections, not to the individual neurons. The "strength" of the connections can be tweaked to cause different outputs for the same inputs: the problem consists of finding the "strengths" that will cause the network as a whole to come up with the correct interpretation of the input (e.g. "apple" when the image of an apple is presented). Frank Rosenblatt's Perceptron (1957) and Oliver Selfridge's Pandemonium (1958) pioneered "neural networks": not knowledge representation and logical inference, but pattern propagation and learning.
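
A minimal perceptron, in the spirit of Rosenblatt, makes the "strength of the connections" idea concrete: the sketch below (a toy illustration, not historical code) adjusts two weights and a bias whenever the output is wrong, until it reproduces an AND gate. The training data and learning rate are arbitrary choices.

```python
# A toy perceptron: the "knowledge" is not a set of rules but the weights
# (connection strengths), nudged in proportion to the output error.

def train_perceptron(samples, epochs=20, lr=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = target - output
            # Strengthen or weaken the connections according to the error.
            weights[0] += lr * error * x1
            weights[1] += lr * error * x2
            bias += lr * error
    return weights, bias

and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_gate)
for (x1, x2), target in and_gate:
    out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), out, target)   # the learned weights reproduce the AND gate
```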

In 1969 Marvin Minsky and Seymour Papert published a devastating critique of neural networks (titled "Perceptrons") that virtually killed the discipline. At the same time expert systems were beginning to make inroads, at least in academia, notably Bruce Buchanan's Mycin (1972) and John McDermott's Xcon (1980), and, by the 1980s, also in the industrial and financial worlds at large, thanks especially to many innovations in knowledge representation (Ross Quillian's semantic networks, Minsky's frames, Roger Schank's scripts, Barbara Hayes-Roth's blackboards, etc). Intellicorp, the first major start-up for Artificial Intelligence, was founded in Silicon Valley in 1980.



Footnotes in the History of Artificial Intelligence



There were many side tracks that didn't become as popular as expert systems and neural networks. Stanford Research Institute's robot Shakey (1969) was the vanguard of autonomous vehicles. IBM's "Shoebox" debuted speech recognition (1964). Conversational agents such as Joseph Weizenbaum's Eliza (1966) and Terry Winograd's Shrdlu (1972) were the first practical implementations of natural language processing. In 1968 Peter Toma founded Systran to commercialize machine-translation systems. John Holland introduced a different way to construct programs with his genetic algorithms (1975), the software equivalent of the rules used by biological evolution: instead of writing a program to solve a problem, let a population of programs evolve (according to those genetic algorithms) to become more and more "fit" (capable of finding solutions to that problem). Cordell Green experimented with automatic programming (1979), software that can write software the same way a software engineer does. In 1990 Carver Mead described a neuromorphic processor, a processor that emulates the human brain.
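
The genetic-algorithm idea can be sketched in a few lines: keep a population of candidate solutions, select the fittest, recombine and mutate them, and repeat. The toy example below evolves a bit string toward a fixed target, a far simpler task than evolving real programs, and all the parameters are arbitrary.

```python
# A toy genetic algorithm: selection, crossover and mutation over a
# population of bit strings. Illustrative only; real GA applications
# evolve far richer structures than a 10-bit string.
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(candidate):
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def evolve(pop_size=30, generations=100, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]           # selection
        children = []
        while len(children) < pop_size:
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))      # crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]                  # mutation
            children.append(child)
        population = children
    return max(population, key=fitness)

print(evolve())   # usually matches TARGET after a handful of generations
```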



The Body



A lot of what books on machine intelligence say is based on a brain-centered view of the human being. I may agree that my brain is the most important organ of my body (i'm ok with transplanting just about any organ of my body but not my brain). However, this is not what evolution had in mind. The brain is one of the many organs designed to keep the body alive so that the body can find a mate and make children. The brain is not the goal but one of the tools to achieve that goal. (Incidentally, i always remind people, especially when the discussion is about "progress" and "immortality", that the longest-living beings, trees and bacteria, have no brain).

Focusing only on mental activities when comparing humans and machines is a category mistake. Humans do have a brain but don't belong to the category of brains: they belong to the category of animals, which are mainly recognizable by their bodies. Therefore, one should compare machines and humans based on bodily actions and not just on printouts, screenshots and files. Of all the things that i do during a day (from running to reading a book) what can a machine do? what will a machine be able to do in ten years? in 20 years? in 200 years? I suspect we are very far from the day that a machine can simply play soccer in any meaningful way with six-year-old children, let alone with champions. Playing a match of chess with the world champion of chess is actually easy. It is much harder for a machine to do any of the things that we routinely do in our home.

Furthermore, there's the meaning of action. The children who play soccer actually enjoy it. They scream, they are competitive, they cry if they lose, they can be mean, they can be violent. There is passion in what we do. Will an android that plays decent soccer in 3450 (that's a realistic date in my opinion) also have all of that? Let's take something simpler, which might happen in 50 or 100 years: at some point we'll have machines capable of reading a novel; but will they understand what they are reading? Is it the same "reading" that i do? This is not only a question about the self-awareness of the machine but about what the machine will do with the text it reads. I can find analogies with other texts, be inspired to write something myself, send the text to a friend, file it in a category that interests me. There is a follow-up to it. Machines that read a text and simply produce an abstract representation of its content (and we are very far from the day when they will be able to do so) are useful only to the human who will use that representation. The same applies to all the corporeal activities that are more than simple movements of limbs.

The body is the reason why i think the Turing Test is not very meaningful. The Turing Test locks a computer and a human being in two rooms, and, by doing so, it removes the body from the test. My test (let's immodestly call it the Scaruffi Test) would be different: we give a soccer ball to both the robot and the human and see who dribbles better. I am not terribly impressed that a computer beat the world champion of chess. I will be impressed the day a robot dribbles better than Messi. If you remove the body from the test, you are removing pretty much everything that defines a human being as a human being. A brain kept in a jar is not a human being: it is a gruesome prop for anatomy classrooms.



The State of the Art in Artificial Intelligence



Knowledge-based systems did not expand as expected: the human experts were not terribly excited at the idea of constructing clones of themselves, and the clones were not terribly reliable in any case.

However, in the 1980s some conceptual breakthroughs fueled progress in robotics. Valentino Braitenberg, in his "Vehicles" (1984), showed that no intelligence is required for producing intelligent behavior: all that is needed is a set of sensors and actuators. As the complexity of the "vehicle" increases, it seems to display an increasingly intelligent behavior. Starting in about 1987, Rodney Brooks began to design robots that use little or no representation of the world. One can know nothing, and have absolutely no common sense, but still be able to do interesting things if equipped with the appropriate set of sensors and actuators.
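
The point made by Braitenberg and Brooks can be illustrated with a toy simulation: a "vehicle" with two light sensors wired crosswise to two wheel speeds, and no model of the world, will nonetheless appear to "seek" the light. The sketch below is a loose illustration with invented constants, not a faithful reproduction of any of Braitenberg's vehicles.

```python
# A Braitenberg-style "vehicle": two light sensors drive two wheels (crossed
# wiring), no reasoning anywhere. An observer would say it "seeks" the light.
import math

LIGHT = (0.0, 0.0)     # position of the light source
TURN_GAIN = 4.0        # how strongly the wheel difference steers the vehicle

def sensor(x, y, heading, offset):
    # Light intensity at a sensor mounted one unit from the center, at +/-0.5 rad.
    sx, sy = x + math.cos(heading + offset), y + math.sin(heading + offset)
    return 1.0 / (1.0 + math.hypot(LIGHT[0] - sx, LIGHT[1] - sy))

def step(x, y, heading, dt=0.1):
    left, right = sensor(x, y, heading, +0.5), sensor(x, y, heading, -0.5)
    left_wheel, right_wheel = right, left          # crossed connections
    speed = (left_wheel + right_wheel) / 2.0
    heading += TURN_GAIN * (right_wheel - left_wheel) * dt
    return x + speed * math.cos(heading) * dt, y + speed * math.sin(heading) * dt, heading

x, y, h = -5.0, 1.0, 0.0
closest = math.hypot(x, y)
for _ in range(500):
    x, y, h = step(x, y, h)
    closest = min(closest, math.hypot(x, y))
print(round(closest, 2))   # far smaller than the starting distance of about 5.1
```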

The 1980s also witnessed a progressive rehabilitation of neural networks, a process that turned exponential in the 2000s. The discipline was rescued in 1982 by John Hopfield, who described a new generation of neural networks modeled on physical systems that settle into states of minimum energy: these neural networks were immune to the Minsky-Papert critique. In 1985 Geoffrey Hinton and Terry Sejnowski developed the Boltzmann machine, a technique for learning networks based on simulated annealing, and in 1986 Paul Smolensky introduced a simplified variant, the Restricted Boltzmann Machine. These were carefully calibrated mathematical algorithms to build neural networks that were both feasible (given the daunting processing requirements of neural-network computation) and effective (they actually solved the problem correctly). The field did not explode until 2006, when Geoffrey Hinton developed Deep Belief Networks, stacks of Restricted Boltzmann Machines trained with a fast learning algorithm. What had truly changed between the 1980s and the 2000s is the speed (and the price) of computers. Hinton's algorithms worked wonders when used on thousands of parallel processors. That's when the media started publicizing all sorts of machine learning feats. The truth, however, is that there has been no conceptual breakthrough since the 1980s.
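
To give a flavor of the energy-based networks mentioned above, here is a toy Hopfield-style associative memory: a pattern is stored in the connection weights and then recovered from a corrupted copy as the units settle into a low-energy state. This is a bare-bones sketch, not Hopfield's 1982 model in full, let alone a Boltzmann machine.

```python
# A toy Hopfield-style associative memory: store one pattern in the weights
# (Hebbian outer product), then let asynchronous updates repair a corrupted copy.
import numpy as np

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])     # the stored memory
n = len(pattern)
weights = np.outer(pattern, pattern).astype(float)   # Hebbian learning
np.fill_diagonal(weights, 0.0)                       # no self-connections

state = pattern.copy()
state[0] = -state[0]                                 # corrupt two units
state[3] = -state[3]

for _ in range(5):                                   # asynchronous updates
    for i in range(n):
        state[i] = 1 if weights[i] @ state >= 0 else -1

print(np.array_equal(state, pattern))                # True: the memory is recovered
```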

The story of robots is similar. Collapsing prices and increased speeds have enabled a generation of robots based on very old technology (in some cases really old technology, like the gyroscope, invented 150 years earlier). Cynthia Breazeal's emotional robot "Kismet" (2000), Ipke Wachsmuth's conversational agent "Max" (2004), Honda's humanoid robot "Asimo" (2005), Osamu Hasegawa's robot that learned functions it was not programmed to do (2011) and Rodney Brooks' hand-programmable robot "Baxter" (2012) sound good on paper but in person still look as primitive as Shakey. Manufacturing plants have certainly progressed dramatically and can build, at a fraction of the cost, what used to be unfeasible; but there has been no conceptual breakthrough. All the robots built today could have been built in 1980 if the same manufacturing techniques had been available. There is nothing conceptually new. What is truly new are the techniques of advanced manufacturing and the speed of computers.

The few projects that would constitute a real breakthrough have a long way to go. For example, in 2008 Dharmendra Modha launched a project to build a neuromorphic processor.

Despite all the hoopla, to me machines are still way less "intelligent" than a chimp. Recent experiments with neural networks were hailed as incredible triumphs by computer scientists because a computer finally managed to recognize a cat (at least a few times) after being presented with thousands of images of cats. How long does it take a chimp to learn what a cat looks like? And that's despite the fact that computers use the fastest possible communication technology, whereas the neurons of a chimp's brain use hopelessly old-fashioned chemical signaling. One of the very first applications of neural networks was to recognize numbers. Sixty years later the ATM of my bank still cannot recognize the amounts on many of the cheques that i deposit. "Machines will be capable, within twenty years, of doing any work that a man can do" (Herbert Simon, 1965). Slightly optimistic back then. I haven't seen anything yet that makes me think that statement (by a Nobel Prize winner) is any more accurate today.

It is interesting how different generations react to the stupidity of machines: the generation that grew up without machines around gets extremely upset (because they are so much more stupid than humans), my generation (that grew up with machines) gets somewhat upset (because they are not much smarter than they were when i was a kid), and the younger generations are progressively less upset, with the youngest ones simply taking for granted that customer support has to be what it is (lousy) and that many things (pretty much all the things that require common sense, expertise, and what we normally call "intelligence") are simply impossible.

What the new A.I. does is very simple: lots of number crunching. It is a smart way to manipulate large datasets for the purpose of classification. It was not enabled by a groundbreaking paradigm shift but simply by increased computing power: given the computers of 30 years ago, nobody would have tried to build something like Andrew Ng's cat-recognition experiment. (See my article Artificial Intelligence and Brute Force)
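
For what it's worth, here is what "classification as number crunching" looks like at its simplest: a nearest-centroid classifier that labels points by pure arithmetic on a dataset, with no understanding of what the labels mean. The data and labels are invented for illustration; experiments like Ng's used vastly larger datasets and deep networks, but the spirit is the same.

```python
# Classification as number crunching: label a point by the nearest class centroid.
# The two "classes" are just random clouds of points around different centers.
import numpy as np

rng = np.random.default_rng(0)
class_a = rng.normal(loc=(0.0, 0.0), scale=1.0, size=(500, 2))   # pretend these are "cats"
class_b = rng.normal(loc=(5.0, 5.0), scale=1.0, size=(500, 2))   # and these are "dogs"

centroids = {"a": class_a.mean(axis=0), "b": class_b.mean(axis=0)}

def classify(point):
    # Pick the label whose centroid is closest: arithmetic, nothing more.
    return min(centroids, key=lambda k: np.linalg.norm(point - centroids[k]))

print(classify(np.array([0.5, -0.3])))   # 'a'
print(classify(np.array([4.8, 5.2])))    # 'b'
```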

Most disheartening has been the insignificant progress in Natural Language Processing since 1970. We virtually abandoned the idea of having machines understand and speak our language and resorted to the opposite: make humans speak like machines (that's what you do when you talk on the phone with a machine that asks you for numbers, and that's what you do when you talk to your smartphone's "assistant" according to the rules of that assistant). Machine Translation too has disappointed. Despite recurring investments in the field by major companies, your favorite online translation system succeeds only with the simplest sentences, just like Systran in the 1970s.

In my opinion the "footnotes" were not just footnotes: they were colossal failures. They were all great ideas. In fact, they were probably the "right" ideas: of course an intelligent machine must be capable of conversing in natural language; of course it must be able to walk around, look for food, etc; of course, it must be able to understand what people say (each person having a slightly different voice); of course, it must be capable of translating from one language to another; of course, it would make more sense for software to "evolve" by itself than to be written by someone (just like any form of intelligent life did); of course, we would expect an intelligent machine to be able to write software (and build other machines, like we do); of course, it would make sense to build a computer that is a replica of a human brain if what we expect is a performance identical to the performance of a human brain.

These ideas remained footnotes for a simple reason: Artificial Intelligence has been, so far, a colossal failure.



Don't be Fooled by the Robot



Human-looking automata that mimic human behavior have been built since ancient times and some of them could perform sophisticated movements. They were mechanical. Today we have sophisticated electromechanical toys that can do all sorts of things. There is a (miniature) toy that looks like a robot riding a bicycle. Technically speaking, the whole toy is the "robot". Philosophically speaking, there is no robot riding a bicycle. The robot-like thing on top of the bicycle is redundant, it's there just for show: you can remove the android and put the same gears in the bicycle seat or in the bicycle pedals and the bike with no passenger would go around and balance itself the exact same way: the thing that rides the bicycle is not the thing on top of the bike (designed to trick the human eye) but the gear that can be anywhere on the bike. The toy is one piece: instead of one robot, you could put ten robots on top of each other, or no robot at all. Any modern toy store has toys that behave like robots doing some amazing thing (amazing for a robot, ordinary for a human). It doesn't require intelligence: just Japanese or Swiss engineering. This bike-riding toy never falls, even when it's not moving. It is designed with a gyroscope to always stand vertical. Or, better, it falls when it runs out of battery. That's very old technology. If that's what we mean by "intelligent machines", then they have been around for a long time. We even have a machine that flies in the sky using that technology (so much for "exponential progress"). Does that toy represent a quantum leap in intelligence? Of course not. It is controlled by a remote control, just like a TV set. It never "learned" how to bike. It was designed to bike. And that's the only thing it can do. Ever. If you want it to do something else, you'll have to add more gears of a different kind, specialized in doing that other thing. Maybe it's possible (using existing technology or even very old mechanical technology) to build radio-controlled automata that have one million different gears to do every single thing that humans do and that all fit in a size comparable to my body's size. Congratulations to the engineer. It would still not be me. And the only thing that is truly amazing in these toys is the miniaturization, not the "intelligence". A human is NOT a toy (yet).



The Singularity as Exponential Progress



Nonetheless, the Singularity gurus are driven to enthusiastic prognostications. There is an obvious disconnect between the state of the art and what the Singularity crowd predicts. We are not even remotely close to a machine that can troubleshoot (and fix!) an electrical outage or simply your washing machine, let alone a software bug. We are not even remotely close to a machine that can operate any of today's complex systems without human supervision. One of the premises of the theory of the Singularity is that machines will build other, smarter machines by themselves; but right now we don't even have software that can write other software. The jobs that have been automated are repetitive and trivial. And in most cases the automation of those jobs has required the user/customer to accept a lower (not higher) level of service. Witness how customer support is rapidly being reduced to a "good luck with your product" kind of service. The more automation around you, the more you (yes, you) are forced to behave like a machine to interact with those machines, precisely because they are still so dumb. The reason that we have a lot of automation is that (in expensive countries like the USA and the European countries) it saves money: machines are cheaper than humans. Wherever the opposite is true, there are no machines. The reason we are moving to online education is not that university professors failed to educate their students but that universities are too expensive. And so forth: in most cases it's the business plan, not the intelligence of machines, that brings us automation.

Their predictions are based on the exponential progress in the speed and miniaturization of computers. Look closely and there is little in what they say that has to do with software. It is mostly a hardware argument. And that is not surprising: predictions about the future of computers have been astronomically wrong in both directions but, in general, the ones that were too conservative were about hardware, the ones that were too optimistic were about software. What is amazing about today's smartphones is not that they can do what computers of the 1960s could not do (they can do pretty much the same things) but that they are small, cheap and fast. The fact that there are many more software applications downloadable for a few cents means that many more people can use them, a fact that has huge sociological consequences; but it does not mean that a conceptual breakthrough has been reached in software technology. It is hard to name one software program that exists today and could not have been written in Fortran 50 years ago. If it wasn't written, it's because it would have been too expensive or because the required hardware did not exist yet.

There has certainly been lots of progress in computers getting faster, smaller and cheaper; and this might continue for a while. Even assuming that this will continue "exponentially" (as the Singularity crowd is quick to claim), the argument that this kind of (hardware) progress is enough to make a shocking difference is based on an indirect assumption: that faster/smaller/cheaper will lead first to a human-level intelligence and then to a superior intelligence. After all, if you join together many many many dumb neurons you get the very intelligent brain of Einstein. If one puts together millions of smartphones, maybe one gets superhuman intelligence. Maybe.

I certainly share the concern that the complexity of a mostly automated world could get out of hand, something that has nothing to do with the degree of intelligence but just with the difficulty of managing complex systems. Complex systems that are difficult to manage have always existed, for example cities, armies, post offices, subways, airports, sewers...

Does driving a car qualify as a component of "intelligence"? Maybe it does, but it has to be "really" what it means for humans. There is no car that has driven even one meter without help from humans. The real world is a world in which first you open the garage door, then you stop to pick up the newspaper, then you enter the street but you will stop if you see a pedestrian waiting to cross the street. No car has achieved this skill yet. They self-drive only in highly favorable conditions on well-marked roads with well-marked lanes. And i will let you imagine what happens if the battery dies or there's a software bug... So even if driving a car qualified as an "intelligent" skill, machines have not achieved that skill yet.

As far as i know, none of the people who make a living on Singularity Science have published a roadmap of predictions (when will machines be capable of just recognizing a written number with our accuracy? when will machines be able to cross a street without being run over by a car? etc).



The Feasibility of Superhuman Intelligence



I feel that, in fact, the Singularity argument is mostly a philosophical (not scientific) argument.

What are the things that a superhuman intelligence can do and i cannot do? If the answer is "we cannot even conceive them", then we are back to the belief that angels exist and miracles happen, a belief that humans have always held and that eventually gave rise to organized religions.

The counter-argument is that single-celled organisms could not foresee the coming of multicellular organisms. Maybe. But bacteria are still around, and probably more numerous than any other form of life in our part of the universe. The forms of life that came after bacteria were perhaps inconceivable by bacteria but, precisely because they are on a different plane, the two hardly interact. We kill bacteria when they harm us but we also rely on many of them to work for us (our body has more bacterial cells than human cells). In fact, one could claim that a superhuman intelligence already exists, and it's the planet as a whole, Gaia, of which we are just one of the many components.

David Deutsch in "The Beginning of Infinity" (2011) argues that there is nothing in our universe that the human mind cannot understand, as long as the universe is driven by universal laws. I tend to agree with Colin McGinn that there is a "cognitive closure" for any kind of brain, that any kind of brain can only do certain things, and that our cognitive closure will keep us from ever understanding some things about the world (perhaps the nature of consciousness is one of them); but in general i agree with Deutsch: if something can be expressed in formulas, then we will eventually "discover" it and "understand" it. So the only superhuman machine that would be too intelligent for humans to understand is a machine that does not obey the laws of nature, i.e. it is not a machine.

Intelligence is the capability to understand the universe, everything that exists in the universe. I am not sure what "superhuman" intelligence would be.

The definition of intelligence is vague, and it is vague because such a thing does not exist. We tend to call "intelligence" the whole repertory of human skills, from eating to theorizing. It would be better to break down human life into skills and then, for each skill, assess how far we are from having machines that perform those skills. If "artificial intelligence" refers to only a subset of those skills, it would be nice to list them. I suspect that 100 researchers in Artificial Intelligence would come up with 100 different lists, and that 100 researchers in Singularity matters would come up with the most vague of lists.

The Singularity crowd also brushes off as irrelevant the old arguments against Artificial Intelligence. The debate has been going on for 50 years, and it has never been settled. In 1936 Church proved a theorem, basically an extension of Goedel's incompleteness theorem (1931) to computation, that first-order logic is undecidable. Similarly, in 1936 Alan Turing proved that the "halting problem" is undecidable for Universal Turing Machines (no general procedure can decide, for every program and input, whether the program will eventually halt). These are not details, because these are theorems that humans can understand, and in fact humans discovered them and proved them, but machines cannot. Critics of Artificial Intelligence argue that machines will never become as intelligent as humans for the same reason that 1+1 is not 3 but 2.

Even brushing these logical arguments aside (which is like brushing aside the logical evidence that the Earth revolves around the Sun), one has to wonder for how long it will make sense to ask the question whether superhuman intelligence is possible. If the timeframe for intelligent machines is centuries and not decades like the optimists believe, then it's like asking an astronaut "Will it at some point be possible to send a manned spaceship to Pluto?" Yes, it may be very possible, but it may never happen: not because it's impossible but simply because we may invent teleportation that will make spaceships irrelevant. Before we invent intelligent machines, synthetic biology or some other discipline might have invented something that will make robots and the like irrelevant. The timeframe is not a detail either.

What would make me believe in the coming of some truly intelligent machine is a conceptual breakthrough in Artificial Intelligence. That might come as neuroscientists learn more about the brain; or perhaps a Turing-like mathematician will come up with some new way to build intelligent machines. But today we still use Turing Machines and binary logic, exactly like in 1946.

There is another line of attack against superhuman intelligence. The whole idea of the Singularity is based on a combination of the cognitive closure and the old fallacy of assuming that human progress (the progress of the human mind) ended with you. Even great philosophers like Hegel fell for that fallacy. If we haven't reached the cognitive closure yet, then there is no reason why human intelligence should stall. Today (2013) that perception is partially justified because progress has slowed down compared with a century ago. Anybody who has studied the history of Quantum Mechanics and Relativity has been amazed by the incredible (literally incredible) insight that those scientists had. The fallacy consists in believing that the human mind has reached a maximum of creativity and will never go any further. We build machines based on today's knowledge and creativity skills. Those machines will be able to do what we do today except for one thing: the next step of creativity that will make us think in different ways and invent things (not necessarily machines) of a different kind. For example, we (humans) are using machines to study synthetic biology, and progress in synthetic biology may make many machines dispensable. Today's electronic machines may continue to exist and evolve, just like windmills existed and evolved and did a much better job than humans at what they were doing, and machines might even build other machines, but in the future they might be considered as intelligent as the windmills. Potentially, there is still a long way to go for human creativity. The Singularity crowd cannot imagine the future of human intelligence the same way that someone in 1904 could not imagine Relativity and Quantum Mechanics.

What worries me, as i have written in "Machine Intelligence and Human Stupidity (The Turing Test Revisited)", is precisely that: not machine intelligence but human intelligence. It is not machines that are becoming more intelligent (they are, but not by much), but humans that are becoming less intelligent. What is accelerating is the loss of human skills. Every tool deprives humans of the training they need to maintain a skill (whether arithmetic or orientation) and every interaction with machines requires humans to lower their intelligence to the intelligence of machines (e.g., press digits on a phone to request a service). This is an ongoing experiment on the human race that is likely to have a spectacular result: the first major regression in intelligence in the history of the species. The Turing Test can be achieved in two ways: 1. by making machines so intelligent that they will seem human; 2. by making humans so stupid that they will seem mechanical.

What will "singular" mean in a post-literate and post-arithmetic world?



Human Obsolescence



Many contemporary thinkers fear that we (humans) are becoming obsolete because machines will soon take our place. Irving John Good in "Speculations Concerning the First Ultraintelligent Machine" (1965): "the first ultraintelligent machine is the last invention that man need ever make". Hans Moravec in "Mind Children" (1988): "robots will eventually succeed us: humans clearly face extinction". Erik Brynjolfsson and Andrew McAfee have analyzed the problem in "Race Against the Machine" (2012). Actually, this idea has been repeated often since the invention of the assembly line and of the typewriter.

In order to understand what we are talking about we need to define what is "us". Assembly lines, typewriters, computers, search engines and whatever comes next have replaced jobs that have to do with material life. I could simply say that they have replaced "jobs". They have not replaced "people". They replaced their jobs. Therefore what went obsolete has been jobs, not people, and what is becoming obsolete is jobs, not people. Humans, to me, are biological organisms who (and not "that") write novels, compose music, make films, play soccer, ride the Tour de France, discover scientific theories, hike on mountains and recommend restaurants. Which of these activities are becoming obsolete because machines are doing them better? My favorite question in private conversations on machine intelligence is: when will a machine be able to cross a street that doesn't have a traffic light? Machines are not even remotely close to doing anything of what i consider "human". In fact, there has been virtually no progress in building a machine that will cross that street.

Machines are certainly good at processing big data at lightning speed. Fine. We are rapidly becoming obsolete at doing that. Soon we will have a generation that cannot do arithmetic. In fact, we never really did that: very few humans ever spent their time analyzing big data. The vast majority of people are perfectly content with small data: the price of gasoline, the name of the president, the standings in the soccer league, the change in my pocket, the amount of my electricity bill, my address, etc. Humans have mostly been annoyed by big data. That was, in fact, a motivation to invent a machine that would take care of big data. The motivation to invent a machine that rides the Tour de France is minimal because we actually enjoy watching (human) riders sweat on those steep mountain roads, and many of us enjoy emulating them on the hills behind our home.

So we can agree that what is becoming obsolete is not "us" but our current jobs. That has been the case since the invention of the first farm (that made obsolete the prehistoric gatherers) and, in fact, since the invention of the wheel (that probably made obsolete many who were making a living carrying goods on their backs).



A Historical Parenthesis: Jobs in the 2000s



During the Great Recession of 2008-2011 people were looking for culprits to blame for the high rate of unemployment, and automation became a popular one (even among those who supported it). Automation is responsible for making many jobs obsolete in the 2000s, but it is not the only culprit.

The first and major one is the end of the Cold War. In 1991 the capitalist world started expanding: before 1991 the economies that really counted were a handful (USA, Japan, Western Europe). After 1991 the number of competitors for the industrialized countries skyrocketed, and they are becoming better and better. Technology might have "stolen" some jobs, but that factor pales by comparison with the millions of jobs that were exported to Asia. In fact, if one considers the totality of the capitalist world, an incredible number of jobs have been created precisely during the period in which critics routinely claim that millions of jobs have been lost. If Kansas loses one thousand jobs but California creates two thousand, we consider it an increase in employment. These critics make the mistake of using the old nation-based logic for the globalized world. When counting jobs lost or created during the last twenty years, one needs to consider the entire interconnected economic system. In the first pages of their book, Brynjolfsson and McAfee mention employment data for the USA but have nothing to say about employment over the same period in China, India, Mexico, Brazil, etc. Those now rank among the main trading partners of the USA, and, more importantly, business is multinational. If General Motors lays off one thousand employees in Michigan but hires two thousand in China, it is not correct to simply conclude that "one thousand jobs have been lost". If the car industry in the USA loses ten thousand jobs but the car industry in China gains twenty thousand, it is not correct to simply conclude that ten thousand jobs have been lost in the car industry. In all of these cases jobs have actually been created.

There are other factors that one has to keep in mind, although not as pivotal as globalization. For example, energy. This is the age of energy. Energy has always been important for economic activity but never like in this century. The cost and availability of energy are among the main factors that determine growth rates and therefore employment. The higher the cost of energy, the lower the amount of goods that can be produced, and the lower the number of people that we employ. If forecasts by international agencies are correct (See this recent news), the coming energy boom might have a bigger impact on employment in the USA than computing technology.

Then there are sociopolitical factors. Unemployment is high in Western Europe, especially among young people, not because of technology but because of rigid labor laws and government debt. A company that cannot lay off workers is reluctant to hire any. A government that is indebted cannot pump money into the economy.

Another major factor that accounts for massive losses of jobs in the developed world is the management science that emerged in the 1920s in the USA. That science (never mentioned in that book) is the main reason that today's companies don't need as many employees as comparable companies employed a century ago. Each generation of companies has been "slimmer" than the previous generation. As those management techniques get codified and applied massively, companies become more efficient at manufacturing (across the world), at selling (using the most efficient channels) and at predicting business cycles. All of this results in fewer employees not because of automation but because of optimization.

Unemployment cannot be explained simply by looking at the effects of technology. Technology is one of many factors and, so far, not the main one. There have been periods of rapid technological progress that have actually resulted in very low unemployment (i.e. lots of jobs), most recently in the 1990s when e-commerce was introduced.

If one takes into account the real causes of the high unemployment rate in the USA and Europe, one reaches different conclusions about the impact that robots (automation in general) will have. In the USA robots are likely to bring back jobs. The whole point of exporting jobs to Asia was to benefit from the lower wages of Asian countries; but a robot that works for free 24 hours a day 7 days a week beats even the exploited workers of communist China. As they become more affordable, these "robots" (automation in general) will displace Chinese workers, not Michigan workers. The short-term impact will be to make outsourcing of manufacturing an obsolete concept. The large corporations that shifted thousands of jobs to Asia will bring them back. In the mid term, if this works out well, a secondary effect will be to put Chinese products out of the market and create a manufacturing boom in the USA: not only will old jobs come back but a lot of new jobs will be created. In the long term robots might create new kinds of jobs that today we cannot foresee. Not many people in 1946 realized that millions of software engineers would be required by the computer industry in 2012. My guess is that millions of "robot engineers" will be required in a heavily robotic future. Those engineers will not be as "smart" as their robots at whatever task those robots were designed for, just like today's software engineers are not as fast as the programs they design. And my guess is that robots will become obsolete too at some point, replaced by something else that today doesn't even have a name. Futurists have a unique way of completely missing the scientific revolutions that really matter. If i had to bet, i would bet that robots (intelligent machines in general) will become obsolete way before humans become obsolete.

I would be much more worried about the Gift Economy: the fact that millions of people are so eager to contribute content and services for free on the Internet. For example, the reason that journalists are losing their jobs has little to do with the automation in their departments and a lot to do with the millions of people who provide content for free on the Internet.



Semantics



In private conversations about "machine intelligence" i like to quip that it is not intelligent to talk about intelligent machines: whatever they do is not what we do, and therefore is neither "intelligent" nor "stupid" (attributes invented to describe human behavior). Talking about the intelligence of a machine is like talking about the leaves of a person: trees have leaves, people don't. "Intelligence" and "stupidity" are not properties of machines: they are properties of humans. We apply to machines many words invented for humans simply because we don't have a vocabulary for the states of machines. For example, we buy "memory" for our computer, but that is not a memory at all: it doesn't remember (it simply stores) and it doesn't even forget, the two defining properties of memory. We call it memory for lack of a better word. We talk about the "speed" of a machine but it is not the "speed" at which a human being rides or drives. We don't have the vocabulary for machine behavior. We borrow words from the vocabulary of human behavior. It is a mistake to assume that, because we use the same word to name them, they are the same thing. If i see a new kind of fruit and call it "apple" because there is no word in my language for it, it doesn't mean it is an apple. A computer does not "learn": what it does when it refines its data representation is something else (that we don't do). One of the fundamental states of human beings is "happiness". When is a machine "happy"? The question is meaningless: it's like asking when a human being needs to be watered. You water plants, not humans. Happiness is a meaningless word for machines. Of course, some day we may start using the word "happy" to mean, for example, that the machine has achieved its goal or that it has enough electricity; but it would simply be a linguistic expedient. The fact that we may call it "happiness" does not mean that it "is" happiness. If you call me Peter because you can't spell my name, it does not mean that my name is Peter.



A Look at the Evidence: Accelerating (or Decelerating?) Progress



A postulate at the basis of many contemporary books by futurists and self-congratulating technologists is that we live in an age of unprecedented rapid change and progress. This is based on not having studied history in school. Look closer and our age won't look so unique anymore.

As i wrote in my essay titled "Regress":



One century ago in a relatively short time the world adopted the car, the airplane, the telephone, the radio and the record, while at the same time the visual arts went through Impressionism, Cubism and Expressionism, while at the same time Quantum Mechanics and Relativity happened in science. The years since World War II have witnessed a lot of innovation, but most of it has been gradual and incremental. We still drive cars and make phone calls. Cars still have four wheels and planes still have two wings. We still listen to the radio and watch television. While the Computer and Genetics have introduced powerful new concepts, and computers have certainly changed lifestyles, i wonder if any of these "changes" compare with the notion of humans flying in the sky and of humans located in different cities talking to each other.... There has been rapid and dramatic change before.


Then one should discuss "change" versus "progress". Change for the sake of change is not necessarily "progress" (most changes in my software applications have negative, not positive effects, and we all know what it means when our bank announces "changes" in policies). If i randomly change all the cells in your body, i may boast of "very rapid and dramatic change" but not necessarily of "very rapid progress". Assuming that any change equates with progress is not only optimism: it's the recipe for ending up with exactly the opposite of progress.

Ray Kurzweil has been popularizing the idea that exponential growth is leading towards the "singularity". The expression "exponential growth" is often used to describe our age. Trouble is: it has been used to describe just about every age since the invention of exponentials. In every age, there are always some things that grow exponentially, while others don't. For every technological innovation there was a moment when it spread "exponentially", whether it was church clocks or windmills, reading glasses or steam engines; and their "quality" improved exponentially for a while, until the industry matured or a new technology took over. Moore's law (which translates into the doubling of processing power every 18 months) is nothing special: similar laws can be found for many of the old inventions. Think how quickly radio spread: in the USA there were only five radio stations in 1921 but already 525 in 1923. Cars? The USA produced 11,200 in 1903, but already 1.5 million in 1916. By 1917 a whopping 40% of households in the USA had a telephone, up from 5% in 1900. There were fewer than one million subscribers to cable television in 1984, but more than 50 million by 1989. The Wright brothers flew the first plane in 1903. During World War I (1915-18) France built 67,987 planes, Britain 58,144, Germany 48,537, Italy 20,000 and the USA 15,000, for a grand total of almost 200 thousand planes, just 15 years after the airplane's invention. I am sure that similar statistics can be found for old inventions, all the way back to the invention of writing. Perhaps each of those ages thought that growth in those fields would continue at the same pace forever. The wisest, though, must have foreseen that eventually growth starts declining in every field. In a sense Kurzweil claims that computing is the one field in which growth will never slow down, in fact it will keep accelerating.
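
A small arithmetic aside makes the point: the sketch below compares what "doubling every 18 months" implies over a few decades with a logistic (S-shaped) curve of the kind that maturing technologies tend to follow. The numbers are invented for illustration, not measurements of any real industry.

```python
# Doubling every 18 months versus a logistic curve that saturates.
# All parameters are illustrative.
import math

def exponential(t_months, doubling_months=18):
    return 2 ** (t_months / doubling_months)

def logistic(t_months, ceiling=1000.0, growth=0.04, midpoint=120):
    return ceiling / (1.0 + math.exp(-growth * (t_months - midpoint)))

for years in (5, 10, 20, 40):
    t = years * 12
    print(years, round(exponential(t)), round(logistic(t), 1))
# The exponential keeps exploding (about a million-fold after 30 years);
# the logistic looks exponential early on, then flattens near its ceiling.
```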

Again, i would argue that it is not so much "intelligence" that has accelerated in machines (their intelligence is the same that Alan Turing gave them when he invented his "universal machine") but miniaturization. In fact, Moore's law has nothing to do with machine intelligence, but simply with how many transistors one can squeeze on a tiny integrated circuit. There is very little that machines can do today that they could not have done in 1950 when Turing published his paper on the "intelligence test". What has truly changed is that today we have extremely powerful computers squeezed into a palm-size smartphone at a fraction of the cost. That's miniaturization. Equating miniaturization to intelligence is like equating an improved wallet to wealth.

Kurzweil used a diagram titled "Exponential Growth in Computing" that spans a century, but that diagram is bogus because it starts with the electromechanical tabulators of a century ago: it is like comparing the power of a windmill with the power of a horse. Sure, there is an exponential increase in power, but it doesn't mean that windmills will keep improving by the difference between horsepower and windpower.

Predictions about future exponential trends have almost always been wrong. Remember the prediction that the world's population would "grow exponentially"? Now we are beginning to fear that it will actually start shrinking (it already is in Japan and Italy). Or the prediction that energy consumption in the West would grow exponentially? It peaked a decade ago. As a percentage of GDP, it is actually declining rapidly. Life expectancy? It rose rapidly in the West between 1900 and 1980 but since then it has barely moved. War casualties were supposed to grow exponentially with the invention of nuclear weapons: since the invention of nuclear weapons the world has experienced the lowest number of casualties ever. Places like Europe that had been at war for 1500 years have not had a major war in 60 years. Digital devices have spread dramatically over the last decade, but so did cars at some point: in 1900 no household in the USA owned a car, in 1930 one in two did, but in 2012 the density of cars is not increasing anymore.



Marketing and Fashion



What is truly accelerating at exponential speed is fashion. This is another point where many futurists and high-tech bloggers confuse a sociopolitical event with a technological event. We live in the age of marketing. If we did not invent anything, absolutely anything, there would still be hectic change. Change is driven by marketing. The industry desperately needs consumers to go out and keep buying newer models of old products or new products. Therefore we buy things we don't need. The younger generation is always more likely to be duped by marketing, and soon the older generations find themselves unable to communicate with young people unless they too buy the same things. Sure: many of them are convenient and soon come to be perceived as "necessities"; but the truth is that humans have lived well (sometimes better) for millennia without those "necessities". The idea that an mp3 file is better than a compact disc which is better than a record is just that: an idea, and mainly a marketing idea. The idea that a streamed movie is better than a DVD which is better than a VHS tape is just that: an idea, and mainly a marketing idea. Steve Jobs was not necessarily a master of technological innovation (it is debatable whether he ever invented anything) but he was certainly a master of marketing new products to the masses. What is truly accelerating is the ability of marketing strategies to create the need for new products. Therefore, yes, our world is changing more rapidly than ever; not because we are surrounded by better machines but because we are surrounded by better snake-oil peddlers (and dumber consumers).



The Accelerating Evolution of Machines



In all cases of rapid progress in the functionalities of a machine it is tempting to say that the machine achieved in a few years what took humans millions of years of evolution to achieve. However, any human-made technology is indirectly using the millions of years of evolution that it took to evolve its creator. No human being, no machine. Therefore it is incorrect to claim that the machine came out of nowhere: it came out of millions of years of evolution, just like my nose. The machine that is now so much better than previous models of a few years ago did NOT evolve: WE evolved it (and continue to evolve it). There is no machine that has created another machine that is superior. WE create a better machine. We are capable of doing that because those millions of years of evolution equipped us with some skills (that the machine does NOT have). If humans go extinct tomorrow morning, the evolution of machines ends. Right now this is true of all technologies. If all humans die, all technologies die with us (until a new form of intelligent life arises from millions of years of evolution and starts rebuilding all those watches, bikes, coffeemakers, airplanes and computers). Hence, technically speaking, there has been no evolution of technology. This is yet another case in which we are applying an attribute invented for one category of things to a different category: the category of living beings evolves, the category of machines does something else, which we call "evolve" by recycling a word that actually has a different meaning. It would be more appropriate to say that a technology "has been evolved" rather than "evolved": computers have been evolved rapidly (by humans) since their invention. Technologies don't evolve (as of today): we make them evolve. The day we have machines that survive without human intervention and that build other machines without human intervention, we can apply the word "evolve" to those machines. As far as i know those machines don't exist yet, which means that there has been zero evolution in machines so far (using the word "evolution" in its correct meaning). Humans can build and use very complex machines. The machine is not intelligent, the engineer who designed it is. That engineer is the product of millions of years of evolution, the machine is a by-product of that engineer's millions of years of evolution.



Intermezzo: We may Overestimate Intelligence



Never forget that the longest living beings on the planet (bacteria and trees) have no brain.



Conclusion: Sociology Again



Humans have been expecting a supernatural event of some kind or another since prehistory. Millions of people are still convinced that Jesus will be coming back soon, and millions believe that the Mahdi will too. The Singularity risks becoming the new religion for the largely atheistic crowd of the high-tech world. Just like with Christianity and Islam, the eschatological issue/mission then becomes how to save oneself from damnation when the Singularity comes, balanced by the faith in some kind of resurrection. We've seen this movie before, haven't we?



Teaser: Machine Ethics



If we ever create a machine that is a fully-functioning brain totally equivalent to a human brain, will it be ethical to experiment on it? Will it be ethical to program it?



Readings on the Singularity



Most of what i read on the Singularity is either trivial (written by people who obviously know very little about the history of technology, Artificial Intelligence, etc) or highly unscientific (pure speculation that is as good as any science-fiction novel). Here are some readings that i would recommend: