

The Persistent Peril of the Artificial Slave


By Kevin LaGrandeur

Posted: Oct 16, 2012

Robots were created to perform the same jobs as slaves, work that is dirty, dangerous, or monotonous, thereby freeing their owners for loftier and more comfortable pursuits.

In fact, as Timothy Lenoir notes, the whole field of cybernetics, which includes not just robots but also computer-based Artificial Intelligence (AI) systems, cyborgs, and androids, “was envisioned by scientists and engineers such as Norbert Wiener, Warren McCulloch, and their colleagues at the Macy Conferences [held in the late 1940s and early 1950s] as a way to maximize human potential in a chaotic and unpredictable postwar world. They wanted to ensure a position of mastery and control removed from the noise and chaos.”

Yet mastery and control are tenuous things. A New York Times article of 23 May 2009, entitled “The Coming Superbrain,” discusses the dream, or nightmare, of true Artificial Intelligence. No longer the realm of science fiction, the notion that the servant-systems we have devised, the increasingly interconnected computer and communications networks, might spontaneously emerge as self-organizing, self-replicating, and perhaps self-aware appears to be giving Silicon Valley scientists and technology experts conflicting fits of paranoia and joy, depending on their optimism about the controllability of such servant networks. These theorists focus primarily on “strong AI” systems, those designed to evolve and learn on their own, and believe that, once sufficiently developed, such systems will evolve at so exponential a rate that they will eventually learn to self-replicate and to surpass humans in intelligence and capability.

The pessimists who worry about the controllability of such systems are numerous, perhaps because so much of the cutting-edge work in AI is funded by the military; in fact, any truly intelligent artificial servant is most likely to arise from the search for automated weaponry. P. W. Singer points out that not only is the military the source of most of the money for AI research, but it also has the strong investment motive provided by the recent wars in the Middle East, as well as the most extensive integrative capacity for such research, having already established a network of defense research contractors. Armin Krishnan, in his book about military robotics, concurs with Singer, explaining the dilemma behind this situation:

To Read the Rest of the Essay, with Notes and References CLICK HERE


IEET Fellow Kevin LaGrandeur is a Faculty Member at the New York Institute of Technology. He specializes in the areas of technology and culture, digital culture, philosophy and literature.


