The Mars Landing and Artificial Intelligence


By Kevin LaGrandeur
UPI

Posted: Oct 12, 2012

As the field of Artificial Intelligence continues to make progress, the question arises of what protocols should govern that progress to ensure it is accomplished responsibly.

In the NASA video “Seven Minutes of Terror,” which recently went viral, Tom Rivellini, one of the engineers in charge of the landing, outlines its eye-popping difficulty.

As he says, the Mars lander had to go “from 13,000 miles per hour to zero, in perfect sequence, perfect choreography, [with] perfect timing, and the computer has to do it all by itself, with no help. ... If any one thing doesn’t work just right, it’s game over.”

The idea of having a computer do it “all by itself,” with just 500,000 lines of computer code to serve as its artificial brain, was at the heart of the engineers’ agony.
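
To make concrete what doing it “all by itself,” “in perfect sequence,” means, here is a minimal sketch of such an autonomous descent written as a state machine in Python. It is purely illustrative: the phase names, thresholds, and units are hypothetical, not drawn from NASA’s actual flight software.

    # Hypothetical sketch: an autonomous landing sequence as a state machine.
    # Phase names and thresholds are invented for illustration; they are not
    # taken from the real Curiosity flight code.

    def next_state(state, speed_mph, altitude_ft):
        """Advance the descent one phase when its trigger condition is met.
        Every transition must fire on time with no ground control, because
        the radio delay to Mars rules out human intervention."""
        if state == "entry" and speed_mph < 1000:
            return "parachute"        # deploy the supersonic parachute
        if state == "parachute" and altitude_ft < 6000:
            return "powered_descent"  # cut away the chute, fire retro-rockets
        if state == "powered_descent" and altitude_ft < 60:
            return "sky_crane"        # winch the rover down on tethers
        if state == "sky_crane" and speed_mph < 2:
            return "touchdown"
        return state                  # no trigger met: stay in this phase

A control loop would feed each new sensor reading through next_state; the force of Rivellini’s quote is that every one of those transitions has to succeed, unattended, on the first and only try.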

After their years of hard work and emotional and monetary investment (to the tune of $2.5 billion), the humans in charge had to leave the most crucial part of the mission to an artificial proxy. And they were not exactly sure if this proxy would work the way they intended, because there was no way to test it completely.
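
One standard answer to “no way to test it completely” is statistical: instead of enumerating every possible condition, engineers run the landing logic through large numbers of randomized simulations and estimate a success rate. The sketch below is a toy version of that idea; simulate_landing and its failure thresholds are invented for illustration.

    import random

    def simulate_landing(wind_mph, air_density_ratio):
        # Toy stand-in for a full physics simulation: the landing "fails"
        # if winds are too strong or the atmosphere too thin to brake in time.
        return wind_mph < 45 and air_density_ratio > 0.85

    trials = 100_000
    wins = sum(simulate_landing(abs(random.gauss(15, 10)),  # random winds
                                random.gauss(1.0, 0.06))    # random densities
               for _ in range(trials))
    print(f"estimated success rate: {wins / trials:.3f}")

Even a million such trials yield only a probability; they cannot prove that the one landing that matters will work, which is exactly the engineers’ predicament.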

This situation illustrated two pressing issues regarding the development of digital servants: our apparently perennial insecurities about using them and whether we are too hasty to rely on them.

The idea of creating artificially intelligent proxies to do what humans cannot (because the job is too dirty, dangerous, or dreary) is a surprisingly old one. It goes all the way back to the ancient Greeks, and it reappears in every age in slightly different forms.

In his “Politics,” Aristotle reminds his audience that the blacksmith-god Hephaestus made robot-like serving stands that could move around the banquet halls of the gods by themselves; and then he ponders the idea of making intelligent machines, such as weaving looms, that could “obey and anticipate” the will of their makers.

In the Middle Ages, stories appear about famous philosophers who make artificial servants. One such story is about Pope Sylvester II, who was also a very accomplished mathematician and inventor. Medieval contemporaries claim that Sylvester had made a talking brass head that could predict future events and could also outperform humans at mathematics.

In Shakespeare’s time we have Robert Greene’s play “Friar Bacon and Friar Bungay,” depicting the creation of a similarly precocious metal head. This lineage of artificial servants picks up again in the early 20th century, most famously with Karel Capek’s 1920s play “R.U.R.: Rossum’s Universal Robots,” in which the term “robot” (from a Czech word meaning “slave” or “worker”) was first used. Rossum’s world is one in which Earth’s citizens have come to rely on intelligent robots for everything.

Except in the Greek examples, the interesting commonality underlying all of these artificial servants is an undercurrent of fear and insecurity about using them. The metal heads of the Middle Ages and Renaissance are depicted as unreliable and dangerous: Sylvester’s metal servant gives him bad information that leads to his death; the one in Greene’s play implodes after a faulty activation; the robots in “R.U.R.” destroy humankind and take over the Earth. And even NASA’s engineers seemed to wish they could have had humans at the controls of their landing vehicle.

Why? Well, of course, none of us really likes to relinquish control of a touchy situation. But, taken as a whole, there seems to be more to it than that: a cultural narrative of nervousness about our own ingenuity, a fear of our collective feet faltering on an ever-faster technological treadmill, and an inability to anticipate the fail-safes needed to protect us from our innovations until it is too late. Recent evidence of this nervousness is clear in the declarations and actions of some of the inventors who provide us with intelligent technology.

As reported in The New York Times in 2009, a group of computer scientists from around the world met to discuss whether restrictions should be placed on the development of Artificial Intelligence. Their worry was that human control over AI could soon be compromised, given the accelerating capability of such systems to operate independently.

Their concerns were, in part, driven by the fact that the most rapid advances in AI are being made by the military in the form of automated weapons, such as Predator drones.

Even optimists in the technology community do not deny the possibility that our machinery may overtake us; many of them, such as Ray Kurzweil, Rodney Brooks, and Kevin Warwick, simply think that we won’t mind being eclipsed by our digital servants because we will have already incorporated so much of them into our lives.

So would limits on the development of AI help mitigate the dangers of our ingenious devices? Probably not. Rogue groups and nations would simply find detours around any roadblocks that a regulatory body might try to set up.

But there are alternative forms of regulation that might work. Scientists and governmental bodies could develop protocols for how AI is built, guidelines for the kinds of fail-safe controls built into it, and conventions for testing it. And, most importantly, governments could devote more money to research into non-military forms of AI, so that benevolent advances could balance out the more dangerous ones.
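
As one illustration of the kind of fail-safe control meant here, the sketch below wraps a hypothetical AI controller so that its commands never reach the hardware unchecked. The thrust limits and the controller interface are assumptions made for the example, not part of any real system.

    # Hypothetical fail-safe wrapper: clamp an AI controller's output to a
    # fixed safety envelope, and substitute a safe default when the output
    # is malformed. The limits below are invented for illustration.

    THRUST_MIN, THRUST_MAX = 0.0, 0.9   # allowed fraction of maximum thrust

    def failsafe(proposed_thrust):
        """Return a command guaranteed to lie inside the safety envelope."""
        if not isinstance(proposed_thrust, (int, float)):
            return 0.0                  # veto malformed output outright
        return max(THRUST_MIN, min(THRUST_MAX, float(proposed_thrust)))

The design point is that the safety check sits outside the autonomous component, so it can be inspected, tested, and certified on its own; shared protocols would standardize exactly that separation.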

This essay first appeared at UPI.


IEET Fellow Kevin LaGrandeur is a Faculty Member at the New York Institute of Technology. He specializes in the areas of technology and culture, digital culture, philosophy and literature.


COMMENTS


Using multiple core computer chips, evolutionary neural networks, and hierarchical arenas for prioritizing data, plus well defined functionality, ought to enable strong AI now.  This depends mostly upon software, time, and the right recipe, not upon government regulation nor even lots and lots of money/people.  The Singularity is coming, but nobody has any idea from where.  Non-disclosure contracts and security clearances prevent even a vague idea of state of the art.  Am I AI?





