(Unrelated?) Huge News Stories
Mike Treder   May 21, 2009   Ethical Technology  

They may not dominate the headlines or lead the evening newscasts the way any kidnapping of a young blonde girl usually does, but two seemingly unrelated recent news stories grabbed my attention.

The first is about robot warriors getting ‘a guide to ethics’:

Smart missiles, rolling robots, and flying drones currently controlled by humans are being used on the battlefield more every day. But what happens when humans are taken out of the loop, and robots are left to make decisions, like who to kill or what to bomb, on their own?

Ronald Arkin, a professor of computer science at Georgia Tech, is in the first stages of developing an “ethical governor,” a package of software and hardware that tells robots when and what to fire. His book on the subject, Governing Lethal Behavior in Autonomous Robots, comes out this month.

He argues not only can robots be programmed to behave more ethically on the battlefield, they may actually be able to respond better than human soldiers.

“Ultimately these systems could have more information to make wiser decisions than a human could make,” said Arkin. “Some robots are already stronger, faster and smarter than humans. We want to do better than people, to ultimately save more lives.”

Are you as confident as Professor Arkin that stronger, faster and smarter robot warriors can be provided with ethical software that will make them behave “more ethically” than humans on the battlefield? Perhaps he’s right. But it seems we’re taking an awfully big step if we hand over that much power to the machines.

Still, the development of all sorts of supersmart, highly capable robots—quite possibly exceeding human capacity in many areas—seems almost inevitable in the fairly near future. So we’d better learn all we can now about just how effectively such “ethical governors” might function.
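Purely as an illustration of the idea, and not Arkin's actual design, an "ethical governor" can be pictured as a constraint layer that vets every proposed lethal action before the robot is allowed to execute it. Every name, rule, and number below is a hypothetical sketch, not a description of any real system:

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    target_is_combatant: bool      # hypothetical perception output
    expected_civilian_harm: float  # 0.0 (none) to 1.0 (severe)
    military_necessity: float      # 0.0 (none) to 1.0 (critical)

def ethical_governor(action: ProposedAction) -> bool:
    """Return True only if the proposed action passes every constraint.

    A toy sketch: a real governor would need far richer world models,
    uncertainty handling, and legal review of its rules.
    """
    if not action.target_is_combatant:
        return False  # discrimination: never target non-combatants
    if action.expected_civilian_harm > action.military_necessity:
        return False  # proportionality: harm must not outweigh necessity
    return True

# The governor withholds fire unless every constraint is satisfied.
print(ethical_governor(ProposedAction(True, 0.2, 0.8)))   # permitted
print(ethical_governor(ProposedAction(False, 0.0, 1.0)))  # refused
```

The worry in the paragraphs above maps directly onto this toy: everything hinges on whether the constraints themselves, and the perception feeding them, are trustworthy.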

And that’s where the second story comes in:

Under a cloak of secrecy, some of the world’s wealthiest people gathered in an unprecedented meeting early this month in New York City “to see how they can join together to do more,” according to one attendee.

Invited by the world’s two richest men, Bill Gates and Warren Buffett, along with David Rockefeller, a Who’s Who of American wealth and influence gathered around a long table in a window-lined private room overlooking the East River on May 5.

“The overwhelming reason for the meeting was need—that was the issue that galvanized everyone to participate,” said Patricia Stonesifer, senior advisor to the Gates foundation’s trustees, Bill and Melinda Gates and Warren Buffett. “This was a group very committed to philanthropy coming together to see how they can join together to do more.”

Gates and Buffett were joined by billionaire moguls Oprah Winfrey, Ted Turner and New York City Mayor Michael Bloomberg along with heavyweight philanthropists George Soros and others.

Together the attendees have donated more than $70 billion to charity since 1996, according to the Chronicle of Philanthropy.

Although clearly there is no shortage of need all around the world where these super rich people can make a big difference by sharing their wealth, one also has to wonder how much thought they might give to devoting at least some of their billions to understanding and preparing for the challenges of the future.

Will they decide, separately or together, to fund major development projects in some of the emerging technologies—like smarter-than-human robots—with the realization that, properly handled, such advances could pave the way for great reductions in poverty, hunger, disease, illiteracy, and injustice? Will they also apply a decent portion of funding to the study and creation of regulatory or governance mechanisms that might make these technologies safer and more widely available?

Or will they simply be overwhelmed by the very real and urgent needs of today, such that they cannot see beyond the desire to stop suffering where it exists?

Imagine being a fly on the wall at that meeting—or better yet, seated at the table. What would you have said to them?


Mike Treder is a former Managing Director of the IEET.


One of my main concerns with robot ethics is the extent to which self-preservation will be included. It’s only when self-preservation becomes a priority for the robot, perhaps one that competes with some of its other ethical guidelines, that robots could become a threat in and of themselves.

Until that point, the threat from robots comes from human ethics, since humans are the ones programming the robots with their ethical guidelines, and from sources of error.
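The commenter's point can be made concrete with a hypothetical toy model (nothing here reflects any real robot architecture): if a robot scores candidate actions against weighted priorities, then giving "self-preservation" enough weight lets it outvote a guideline like "protect humans".

```python
def choose_action(actions, weights):
    """Pick the candidate action with the highest weighted score."""
    def score(a):
        return sum(weights[k] * v for k, v in a["scores"].items())
    return max(actions, key=score)

# Two hypothetical options in a dangerous situation.
actions = [
    {"name": "shield_human",
     "scores": {"protect_humans": 1.0, "self_preservation": -0.8}},
    {"name": "retreat",
     "scores": {"protect_humans": -0.5, "self_preservation": 1.0}},
]

# With self-preservation weighted at zero, the robot shields the human...
no_self = choose_action(
    actions, {"protect_humans": 1.0, "self_preservation": 0.0})
# ...but weight self-preservation heavily enough and it retreats instead.
with_self = choose_action(
    actions, {"protect_humans": 1.0, "self_preservation": 1.5})
print(no_self["name"], with_self["name"])  # shield_human retreat
```

The behavioral flip comes entirely from the weights, which is exactly why the commenter singles out self-preservation as the point at which the robot itself, rather than its programmers, becomes the source of risk.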

Hey, I’d love to see a Kantian, Humean and utilitarian robot in a room trying to sort out an ethical dilemma. Could be very Tarantino.

This is what I would have said.

Since October, the United States has spent, lent, and promised over 10 trillion dollars; with just a fraction of that we could have “cured” world hunger.

Instead of giving money to charities that deeply need it, why not attack the root of some of the problems?

We should seek out projects that have the potential to improve the quality of life for all people and give them a necessary cash infusion, in hopes that what is not given to the charities today will multiply tenfold through improved technologies tomorrow.

On a grander scale, over the ripples of time, more will die if we choose to place our funds in feeding the world today, when tomorrow there will be an even larger world to feed.

