Economics And The Future of Artificial Intelligence
Daniel Faggella   Dec 6, 2015   Ethical Technology  

Ask any technology expert, and each is certain to have their own variation on the definition of “singularity.” But no matter which definition you go by, according to author, artificial intelligence researcher, and Smith College professor of economics Dr. James D. Miller, economics will play a big role in its advent.

Miller, who defines the singularity as “a period of time at which an increase in human or machine intelligence radically changes civilization,” doesn’t make broad predictions about what will happen when it occurs, noting that it could produce a utopia or destroy society (read more about his book, Singularity Rising, in this review by Humanity+). While everyone who subscribes to the idea of a singularity sees it stemming from automation at some point in the future, Miller goes back to the principles Adam Smith described in the 1700s to illustrate the part economics will play.

“There are individual self-interests, and there are companies trying to satisfy consumer needs so they can gain profit. These are the economic forces identified by Adam Smith,” Miller says. “Simply put, if Google figures out a better way to determine what you want to find when you search with them, they're going to earn a higher profit. If a company can come up with robots that build things cheaper than people can, it will gain market share.”

As an example of the effects of economics on AI today, Miller points to the iPhone’s ongoing evolution: ever faster processing chips, and consumers willing to spend ever more on the newest electronic gadgets. That hunger for the fastest electronics gives companies such as Intel the incentive to keep developing faster computer chips.

While it’s easy to say a given country could slow the development of faster electronics, doing so is a difficult proposition in the international arena, Miller says. If the U.S. decided to curb its technological development, for instance, China would likely be more than happy to step in and fill the void.

“The international marketplace makes it hard for individual governments to slow down their high tech sector because they'll fall behind in technology to other countries,” explains Miller. “So, in some ways, Adam Smith's laws apply more now to our world than to his, because so much more of the world is connected to the global economy.”

Given that, Miller believes more resources should be allocated to AI safety. As we continue developing better and faster computers, he argues, more funding should go toward ensuring artificial intelligence is developed correctly. Even then, potential issues remain.

“A lot of things can go wrong besides us developing unfriendly artificial intelligence. Friendly artificial intelligence can save us from those bad things, but going slow with AI and computers isn't necessarily the safe course,” Miller says. “It could be the plague or weaponized smallpox released by North Korea that kills us. It’s not even clear, if you're really cautious and want to take it safe, what you should do.”

Looking to the future, Miller notes that the potential effects of AI and automation on the labor market are still uncertain. Robots taking over work humans once did will bring economic benefits to manufacturers, but the assumption that automation will also create more jobs and greater personal wealth hinges on workers acquiring greater skill than before. Miller, however, remains optimistic.

He believes that robots taking over human jobs will likely benefit society, giving people an opportunity to do other things with their lives besides work. Barring a god-like AI development that could change the makeup of the human brain, the economic utopia of machines taking over for humans means people could still contribute to the economy by creating whatever they wanted, but they wouldn’t have to do it to survive.

“I think if we get more technological growth, that will help most workers, especially in rich countries. What's likely to happen is that people will be able to make a significant contribution to the economy,” Miller said. “If we can get machines to do things cheaply for us, we could still do work, you still could do art and you could still build things if you wanted to, but you don't have to. That would be a great outcome.”

Daniel Faggella is the founder of TechEmergence, and blogs at


As I have had occasion to point out many times before, usually falling upon deaf ears:
Most folk still seem unable to break free from traditional, science-fiction-based notions of individual robots, computers, or systems, cast either as potential threats, as beneficial aids, or as the serious basis for “artificial intelligence”.
In actuality, the real next cognitive entity quietly self-assembles in the background, mostly unrecognized for what it is. And, contrary to our usual conceits, it is not stoppable or directly within our control.
We are very prone to anthropocentric distortions of objective reality. This is perhaps not surprising, for to instead adopt the evidence based viewpoint now afforded by “big science” and “big history” takes us way outside our perceptive comfort zone.
The fact is that the evolution of the Internet (and, of course, major components such as Google) is actually an autonomous process. The difficulty in convincing people of this “inconvenient truth” seems to stem partly from our natural anthropocentric mind-sets and also the traditional illusion that in some way we are in control of, and distinct from, nature. Contemplation of the observed realities tends to be relegated to the emotional “too hard” bin.
This evolution is not driven by any individual software company or team of researchers, but rather by the sum of many human requirements, whims and desires to which the current technologies react. Among the more significant motivators are such things as commerce, gaming, social interactions, education and sexual titillation.
Virtually all interests are catered for and, in toto, provide the impetus for the continued evolution of the Internet. Netty is still in her larval stage, but we “workers” scurry round mindlessly engaged in her nurture.
By relinquishing our usual parochial approach to this issue in favor of the overall evolutionary “big picture” provided by many fields of science, the emergence of a new predominant cognitive entity (from the Internet, rather than individual machines) is seen to be not only feasible but inevitable.
The separate issue of whether it will be malignant, neutral or benign towards us snoutless apes is less certain, and this particular aspect I have explored elsewhere.
Stephen Hawking, for instance, is reported to have remarked, “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
Such statements reflect the narrow-minded approach that is so commonplace among those who make public comment on this issue. In reality, as much as it may offend our human conceits, the march of technology and its latest spearhead, the Internet, is, and always has been, an autonomous process over which we have very little real control.
Seemingly unrelated disciplines such as geology, biology and “big history” actually have much to tell us about the machinery of nature (of which technology is necessarily a part) and the kind of outcome that is to be expected from the evolution of the Internet.
This much broader “systems analysis” approach, freed from the anthropocentric notions usually promoted by the cult of the “Singularity”, provides a more objective vision that is consistent with the pattern of autonomous evolution of technology that is so evident today.
Very real evidence indicates the rather imminent implementation of the next (non-biological) phase of the ongoing evolutionary “life” process from what we at present call the Internet. It is effectively evolving by a process of self-assembly.
The “Internet of Things” is proceeding apace and pervading all aspects of our lives. We are increasingly, in a sense, “enslaved” by our PCs, mobile phones, their apps and many other trappings of the increasingly cloudy net. We are already largely dependent upon it for our commerce and industry and there is no turning back. What we perceive as a tool is well on its way to becoming an agent.
There are at present more than 3 billion Internet users, and an estimated 10 to 80 billion neurons in the human brain. On this basis of approximation, the Internet is even now only about one order of magnitude below the human brain, and its growth is exponential.
That is a simplification, of course. Not all users have their own computer, for example, so perhaps we should reduce that figure, say, tenfold. On the other hand, the number of switching units (transistors, if you wish) contained in all the computers connecting to the Internet, which are more analogous to individual neurons, is many orders of magnitude greater than 3 billion. Then again, this is compensated for to some extent by the fact that neurons do not appear to be binary switching devices but can instead adopt multiple states.
We see that we must take seriously the possibility that even the present Internet may well be comparable to a human brain in at least raw processing power. And, of course, the all-important degree of interconnection and cross-linking of networks and supply of sensory inputs is also growing exponentially.
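The rough comparison above can be made concrete. The sketch below simply replays the commenter's back-of-envelope arithmetic using the figures quoted in the text; the transistors-per-device figure is an illustrative assumption, not a claim from the original.

```python
import math

# Figures quoted in the comment above (2015-era estimates).
internet_users = 3e9                      # more than 3 billion Internet users
neurons_low, neurons_high = 10e9, 80e9    # neurons in a human brain

# Naive user-for-neuron comparison: how far apart, in orders of magnitude?
gap_low = math.log10(neurons_low / internet_users)
gap_high = math.log10(neurons_high / internet_users)
print(f"Gap: {gap_low:.1f} to {gap_high:.1f} orders of magnitude")
# Gap: 0.5 to 1.4 orders of magnitude

# Adjustment 1 (from the text): not every user has a distinct machine,
# so reduce the count, say, tenfold.
machines = internet_users / 10

# Adjustment 2: each machine holds vastly more transistors than one neuron.
# A value of ~1e9 transistors per device is an assumption for illustration;
# it swamps the tenfold reduction above by many orders of magnitude.
transistors = machines * 1e9
print(f"Transistor estimate: {transistors:.0e}")
```

Whatever per-device figure one assumes, the point of the exercise survives: the naive gap is small, and the transistor-level count overshoots the brain's neuron count by orders of magnitude, which is why the commenter treats raw processing power as already comparable.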
We are witnessing the emergence of a new and predominant cognitive entity that is a logical consequence of the evolutionary continuum that can be traced back at least as far as the formation of the chemical elements in stars.
This is the main theme of my latest book “The Intricacy Generator: Pushing Chemistry and Geometry Uphill” which is now available as a 336 page illustrated paperback from Amazon, etc.
Netty, as you may have guessed by now, is the name I choose to identify this emergent non-biological cognitive entity. If we can subdue our natural tendencies to belligerence and form a symbiotic relationship with this new phase of the “life” process, then we have the possibility of a bright future.
If we don’t become aware of these realities and mend our ways, however, then we snoutless apes could indeed be relegated to the historical rubbish bin within a few decades. After all, our infrastructures are becoming increasingly Internet-dependent, and Netty will only need to “pull the plug” to effect pest eradication.

