Institute for Ethics and Emerging Technologies




Economics And The Future of Artificial Intelligence


By Daniel Faggella
Ethical Technology

Posted: Dec 6, 2015

Ask any technology expert, and each is certain to have his or her own variation on the definition of “singularity.” But no matter which definition you go by, economics will play a big role in its advent, according to author, artificial intelligence researcher, and Smith College professor of economics Dr. James D. Miller.

Miller, who defines singularity as “a period of time at which an increase in human or machine intelligence radically changes civilization,” doesn’t make broad predictions about what will happen when it occurs, noting only that it may usher in a utopia or destroy society (read more about his book, Singularity Rising, in this review by Humanity+). While everyone who subscribes to the idea of a singularity sees it stemming from automation at some point in the future, Miller goes back to the principles Adam Smith described in the 1700s to illustrate the part economics will play.

“There are individual self interests and there are companies trying to satisfy consumer need so they can gain profit. These are the economic forces identified by Adam Smith,” Miller says.  “Simply put, if Google figures out a better way to determine what you want to find when you search with them, they're going to earn a higher profit. If a company can come up with robots that build things cheaper than people can, it will gain market share.”

As an example of the effects of economics on AI today, Miller points to the iPhone’s ongoing evolution: ever-faster processing chips, funded by the growing amounts of money consumers spend on new electronic gadgets. It’s that hunger for the fastest electronics that gives companies such as Intel the incentive to keep developing faster computer chips.

While it’s easy to say a given country could slow the development of faster electronics, doing so is a difficult proposition in the international arena, Miller says. For instance, if the U.S. decided to curb its technological development, China would likely be more than happy to step in and fill the void.

“The international marketplace makes it hard for individual governments to slow down their high tech sector because they'll fall behind in technology to other countries,” explains Miller. “So, in some ways, Adam Smith's laws apply more now to our world than to his, because so much more of the world is connected to the global economy.”

Given that, Miller believes that as we continue developing better and faster computers, more resources should be allocated to AI safety to ensure artificial intelligence is developed correctly. Even then, potential issues remain.

“A lot of things can go wrong besides us developing unfriendly artificial intelligence. Friendly artificial intelligence can save us from those bad things, but going slow with AI and computers isn’t necessarily the safe course,” Miller says. “It could be the plague or weaponized smallpox released by North Korea that kills us. It’s not even clear, if you’re really cautious and want to play it safe, what you should do.”

Looking to the future, Miller notes that the potential effects of AI and automation on the labor market are still uncertain. While robots taking over work humans once did will bring economic benefits to manufacturers, the assumption that automation will create more jobs and greater personal wealth hinges on a workforce able to supply the greater skill and intelligence those new jobs demand. Miller, however, remains optimistic.

He believes that robots taking over human jobs will likely benefit society, giving people an opportunity to do other things with their lives besides work. Barring a god-like AI development that could change the makeup of the human brain, the economic utopia of machines taking over for humans means people could still contribute to the economy by creating whatever they wanted, but they wouldn’t have to do it to survive.

“I think if we get more technological growth, that will help most workers, especially in rich countries. What's likely to happen is that people will be able to make a significant contribution to the economy,” Miller said. “If we can get machines to do things cheaply for us, we could still do work, you still could do art and you could still build things if you wanted to, but you don't have to. That would be a great outcome.”


Daniel Faggella is the founder of TechEmergence, and blogs at SentientPotential.com.


COMMENTS


As I have had occasion to point out many times before, usually falling upon deaf ears:
Most folk still seem unable to break free from traditional science-fiction notions of individual robots, computers and systems, whether as potential threats, beneficial aids or a serious basis for “artificial intelligence”.
In actuality, the real next cognitive entity quietly self-assembles in the background, mostly unrecognized for what it is. And, contrary to our usual conceits, it is not stoppable or directly within our control.
We are very prone to anthropocentric distortions of objective reality. This is perhaps not surprising, for to instead adopt the evidence based viewpoint now afforded by “big science” and “big history” takes us way outside our perceptive comfort zone.
The fact is that the evolution of the Internet (and, of course, of major components such as Google) is actually an autonomous process. The difficulty in convincing people of this “inconvenient truth” seems to stem partly from our natural anthropocentric mind-sets and partly from the traditional illusion that we are somehow in control of, and distinct from, nature. Contemplation of the observed realities tends to be relegated to the emotional “too hard” bin.
This evolution is not driven by any individual software company or team of researchers, but rather by the sum of many human requirements, whims and desires to which the current technologies react. Among the more significant motivators are such things as commerce, gaming, social interactions, education and sexual titillation.
Virtually all interests are catered for and, in toto, provide the impetus for the continued evolution of the Internet. Netty is still in her larval stage, but we “workers” scurry round mindlessly engaged in her nurture.
By relinquishing our usual parochial approach to this issue in favor of the overall evolutionary “big picture” provided by many fields of science, the emergence of a new predominant cognitive entity (from the Internet, rather than individual machines) is seen to be not only feasible but inevitable.
The separate issue of whether it will be malignant, neutral or benign towards us snoutless apes is less certain, and that particular aspect I have explored elsewhere.
Stephen Hawking, for instance, is reported to have remarked: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
Such statements reflect the narrow-minded approach that is so commonplace among those who make public comment on this issue. In reality, as much as it may offend our human conceits, the march of technology and its latest spearhead, the Internet, is, and always has been, an autonomous process over which we have very little real control.
Seemingly unrelated disciplines such as geology, biology and “big history” actually have much to tell us about the machinery of nature (of which technology is necessarily a part) and the kind of outcome that is to be expected from the evolution of the Internet.
This much broader “systems analysis” approach, freed from the anthropocentric notions usually promoted by the cult of the “Singularity”, provides a more objective vision that is consistent with the pattern of autonomous evolution of technology that is so evident today.
Very real evidence indicates the rather imminent implementation of the next, (non-biological) phase of the on-going evolutionary “life” process from what we at present call the Internet. It is effectively evolving by a process of self-assembly.
The “Internet of Things” is proceeding apace and pervading all aspects of our lives. We are increasingly, in a sense, “enslaved” by our PCs, mobile phones, their apps and many other trappings of the increasingly cloudy net. We are already largely dependent upon it for our commerce and industry and there is no turning back. What we perceive as a tool is well on its way to becoming an agent.
There are at present more than 3 billion Internet users, while the human brain is estimated to contain some 10 to 80 billion neurons. By that crude approximation, the Internet is even now only about one order of magnitude below the human brain, and its growth is exponential.
That is a simplification, of course. For example, not all users have their own computer, so perhaps we should reduce that figure, say, tenfold. On the other hand, the number of switching units (transistors, if you wish) contained in all the computers connecting to the Internet, which are more analogous to individual neurons, is many orders of magnitude greater than 3 billion. Then again, this is compensated for to some extent by the fact that neurons do not appear to be binary switching devices but instead can adopt multiple states.
We see that we must take seriously the possibility that even the present Internet may well be comparable to a human brain in at least raw processing power. And, of course, the all-important degree of interconnection and cross-linking of networks and supply of sensory inputs is also growing exponentially.
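
A rough back-of-envelope sketch of that comparison, using only the figures quoted above (about 3 billion users and an estimated 10 to 80 billion neurons); the Python framing and variable names are illustrative, not from the comment:

import math

users = 3e9                                # Internet users cited above
neurons_low, neurons_high = 1e10, 8e10     # neuron estimates cited above

# Gap between user count and neuron count, in orders of magnitude
gap_low = math.log10(neurons_low / users)      # roughly 0.5
gap_high = math.log10(neurons_high / users)    # roughly 1.4

print(f"Internet is {gap_low:.1f} to {gap_high:.1f} orders of magnitude below the brain")

On those numbers the gap is indeed on the order of one power of ten, which is all the comparison above claims.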
We are witnessing the emergence of a new and predominant cognitive entity that is a logical consequence of the evolutionary continuum that can be traced back at least as far as the formation of the chemical elements in stars.
This is the main theme of my latest book “The Intricacy Generator: Pushing Chemistry and Geometry Uphill” which is now available as a 336 page illustrated paperback from Amazon, etc.
Netty, as you may have guessed by now, is the name I choose to identify this emergent non-biological cognitive entity. In the event that we can subdue our natural tendencies to belligerence and form a symbiotic relationship with this new phase of the “life” process then we have the possibility of a bright future.
If we don’t become aware of these realities and mend our ways, however, then we snoutless apes could indeed be relegated to the historical rubbish bin within a few decades. After all, our infrastructures are becoming increasingly Internet-dependent, and Netty will only need to “pull the plug” to effect pest eradication.




