Miller, who defines the singularity as “a period of time at which an increase in human or machine intelligence radically changes civilization,” doesn’t make broad predictions about what could happen when it occurs, noting only that the result may be a utopia or the destruction of society (read more about his book, Singularity Rising, in this review by Humanity+). While everyone who subscribes to the idea of the singularity sees it stemming from automation at some point in the future, Miller goes back to the principles described by Adam Smith in the 1700s to illustrate the part economics will play.
“There are individual self interests and there are companies trying to satisfy consumer need so they can gain profit. These are the economic forces identified by Adam Smith,” Miller says. “Simply put, if Google figures out a better way to determine what you want to find when you search with them, they're going to earn a higher profit. If a company can come up with robots that build things cheaper than people can, it will gain market share.”
As an example of the effects of economics on AI today, Miller points to the iPhone’s ongoing evolution, driven by faster processing chips and the growing amounts consumers are willing to spend on new electronic gadgets. It’s that hunger for the fastest electronics that gives companies such as Intel the incentive to keep developing faster computer chips.
While it’s easy to say a given country could slow the development of faster electronics, it’s a difficult proposition in the international arena, Miller says. For instance, if the U.S. decided to curb its technological development, China would likely be more than happy to step in and fill the void.
“The international marketplace makes it hard for individual governments to slow down their high tech sector because they'll fall behind in technology to other countries,” explains Miller. “So, in some ways, Adam Smith's laws apply more now to our world than to his, because so much more of the world is connected to the global economy.”
Given that, Miller believes that as we continue developing better and faster computers, more resources should be allocated to AI safety, and more funding should go to ensure artificial intelligence is developed correctly. Even then, potential issues would remain.
“A lot of things can go wrong besides us developing unfriendly artificial intelligence. Friendly artificial intelligence can save us from those bad things, but going slow with AI and computers isn't necessarily the safe course,” Miller says. “It could be the plague or weaponized smallpox released by North Korea that kills us. It’s not even clear, if you're really cautious and want to take it safe, what you should do.”
Looking to the future, Miller notes that the potential effects of AI and automation on the labor market are still uncertain. While robots taking over the work humans once did will bring economic benefits for manufacturers, the assumption that automation will create more jobs and greater personal wealth hinges on workers being able to meet the greater demands for skill and intelligence those new jobs entail. Miller, however, remains optimistic.
He believes that robots taking over human jobs will likely benefit society, giving people an opportunity to do other things with their lives besides work. Barring a god-like AI development that could change the makeup of the human brain, the economic utopia of machines taking over for humans means people could still contribute to the economy by creating whatever they wanted, but they wouldn’t have to do it to survive.
“I think if we get more technological growth, that will help most workers, especially in rich countries. What's likely to happen is that people will be able to make a significant contribution to the economy,” Miller said. “If we can get machines to do things cheaply for us, we could still do work, you still could do art and you could still build things if you wanted to, but you don't have to. That would be a great outcome.”