From Robots to Techno Sapiens

Robots with even limited sensitivity to ethical considerations, and the ability to factor those considerations into their choices and actions, will open up new markets. However, if robots fail to adequately accommodate human laws and values in their behaviour, there will be demands for regulations that limit their use. Over the next twenty years, advances in robotics will converge with neurotechnologies and other emerging technologies. We will be confronted not just with monitoring and managing individual technologies, each developing rapidly, but also with the cultural transformations arising from the convergence of many technologies. Technological development can overheat, or it may stagnate. The central role for ethics, law, and public policy in the development of robots and neurotechnologies will be in modulating their rate of development and deployment. Compromising safety, appropriate use, and responsibility is a ready formula for inviting crises in which technology is complicit.

An excerpt from the article:

We are collectively in a dialogue directed at forging a new understanding of what it means to be human. Pressures are building to embrace, reject or regulate robots and technologies that alter the mind/body. How will we individually and collectively navigate the opportunities and perils offered by new technologies? With so many different value systems competing in the marketplace of ideas, what values should inform public policy? Which tasks is it appropriate to turn over to robots and when do humans bring qualities to tasks that no robot in the foreseeable future can emulate? When is tinkering with the human mind or body inappropriate, destructive or immoral? Is there a bottom line? Is there something essential about being human that is sacred, that we must preserve? These are not easy questions.

Among the principles that we should be careful not to compromise is that of the responsibility of the individual human agent. In the development of robots and complex technologies, those who design, market and deploy systems should not be excused from responsibility for the actions of those systems. Technologies that rob individuals of their freedom of will must be rejected. This goes for both robots and neurotechnologies.

Just as economies can stagnate or overheat, so also can technological development. The central role for ethics, law and public policy in the development of robots and neurotechnologies will be in modulating their rate of development and deployment. Compromising safety, appropriate use and responsibility is a ready formulation for inviting crises in which technology is complicit. The harms caused by disasters and the reaction to those harms can stultify technological progress in irrational ways.
It is unclear whether existing policy mechanisms provide adequate tools for managing the cumulative impact of converging technologies. Presuming that scientific discovery continues at its present relatively robust pace, there may be plenty of opportunities yet to consider new mechanisms for directing specific research trajectories. However, if the pace of technological development is truly accelerating, the need for foresight and planning becomes much more pressing.

Read the rest here

Wendell Wallach is a consultant, ethicist, and scholar at Yale University's Interdisciplinary Center for Bioethics, where he chairs the Center's working research group on Technology and Ethics.



COMMENTS

Interesting snippet; unfortunately I can’t read the full article, as it is password-protected. There are two points that come up in the excerpt that I want to comment on. The first is the idea that robots or similar technologies would have to be ethical. Initially this would be a programming feature, such as Asimov’s “Three Laws”, that as foundational code wouldn’t be noticeable to the robot. Later in the article you talk about the need for technology not to impede the freedom of choice of individuals. Does that mean that if technology reaches the point where it creates a self-aware AI, we will no longer be ethically able to place foundational ethical programming into the software of AIs? If not, would that free AIs to commit what humans would define as crimes, and would they then need to be punished for those crimes? And if we do maintain foundational ethical programming in self-determining AIs, why would it not be ethical to have similar ethical programming for humans?

My second comment concerns responsibility. I am a big fan of responsibility and would commend the idea that people need to be responsible for the results of their actions. The problem is that just about any technology can be misused. Holding the creator of a technology accountable for its misuse would be similar to charging the manufacturer of hammers with murder when their products are used to kill. The responsibility needs to be placed on the user of the product, not the producer, unless the producer is creating something that they know is inherently dangerous or flawed. Think Ralph Nader and automobile safety.

