Wendell Wallach on Machine Morality
Ben Goertzel
2011-05-29 00:00:00

Wallach's 2009 book, Moral Machines, co-authored with Colin Allen, provides a solid conceptual framework for understanding the ethical issues related to artificial intelligences and robots, and reviews key perspectives on these issues, always with an attitude of constructive criticism. He also designed the first university course anywhere focused on Machine Ethics, which he has taught several times at Yale.

A few years ago, Wendell invited me to speak in Yale’s technology and ethics seminar series (see the slides from the talk here) – a rewarding experience for me, due both to the interesting questions from the audience and to the face-to-face dialogues on AI, Singularity, ethics, and consciousness that we shared afterwards. Some of the key points from our discussions are raised in the following interview that I did with him for H+ Magazine.

Ben: Ray Kurzweil has predicted a technological Singularity around 2045. Max More has asserted that what’s more likely is a progressive, ongoing Surge of improving technologies, without any brief interval of incredibly sudden increase. Which view do you think is more accurate, and why? And do you think the difference really matters (and if so, why)?

Wendell: I’ve characterized my perspective as that of a “friendly skeptic” – friendly to the can-do engineering spirit that animates the development of AI, skeptical that we understand enough about intelligence to create human-like intelligence in the next few decades. We are certainly in the midst of a surge in technological development, and will witness machines that increasingly outstrip human performance in a number of dimensions of intelligence. However, many of our present hypotheses will turn out to be wrong, or the implementation of our better theories will prove extremely difficult. Furthermore, I am not a technological determinist. There are societal, legal, and ethical challenges that will arise to thwart easy progress in developing technologies that are perceived to threaten human rights, the semblance of human equality, and the centrality of humanity in determining its own destiny. Periodic crises in which technology is complicit will moderate the more optimistic belief that there is a technological fix for every challenge.

Ben: You’ve written a lot about the relation between morality and AI, including a whole book, Moral Machines. To start to probe into that topic, I’ll first ask you: How would you define morality, broadly speaking? What about ethics?

Wendell: Morality is the sensitivity to the needs and concerns of others, and the willingness to often place more importance on those concerns than on self-interest. Ethics and morality are often used interchangeably, but ethics can also refer to theories about how one determines what is right, good, or just. My vision of a moral machine is a computational system that is sensitive to the moral considerations that impinge upon a challenge, and which factors those considerations into its choices and actions.

Ben: You’ve written and spoken about the difference between bottom-up and top-down approaches to ethics. Could you briefly elaborate on this, and explain what it means in the context of AI software, including near-term narrow-AI software as well as possible future AI software with high degrees of general intelligence. What do these concepts tell us about the best ways to make moral machines?

Wendell: The inability of engineers to accurately predict how increasingly autonomous (ro)bots (embodied robots and computer bots within networks) will act when confronted with new challenges and new inputs is necessitating the development of computers that make explicit moral decisions in order to minimize harmful consequences. Initially these computers will evaluate options within very limited contexts.

Top-down refers to an approach where a moral theory for evaluating options is implemented in the system. For example, the Ten Commandments, utilitarianism, or even Asimov’s laws for robots might be implemented as principles used by the (ro)bot to evaluate which course of action is most acceptable. A strength of top-down approaches is that ethical goals are defined broadly to cover countless situations. However, if the goals are defined too broadly or abstractly, their application to specific cases becomes debatable. Also, static definitions lead to situational inflexibility.
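To make the top-down idea concrete, here is a minimal sketch of a rule-based action filter, loosely in the spirit of Asimov’s laws. Everything in it (the Action class, the RULES list, choose_action) is a hypothetical illustration, not code from Moral Machines:

```python
# Minimal top-down sketch: candidate actions are screened against explicitly
# coded principles. All names here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool
    disobeys_order: bool
    endangers_self: bool

# Asimov-style principles: an action is acceptable only if it violates none.
RULES = [
    ("do not harm a human", lambda a: not a.harms_human),
    ("obey human orders",   lambda a: not a.disobeys_order),
    ("protect yourself",    lambda a: not a.endangers_self),
]

def choose_action(candidates):
    """Return the first candidate that satisfies every rule, or None."""
    for action in candidates:
        if all(check(action) for _, check in RULES):
            return action
    return None  # static rules give no guidance: the inflexibility noted above

options = [
    Action("push bystander", harms_human=True,  disobeys_order=False, endangers_self=False),
    Action("sound alarm",    harms_human=False, disobeys_order=False, endangers_self=False),
]
print(choose_action(options).name)  # -> sound alarm
```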

Bottom-up approaches are inspired by moral development and learning as well as evolutionary psychology. The basic idea is that a system might either evolve moral acumen or go through an educational process where it learns to reason about moral considerations. The strength of bottom-up AI approaches is their ability to dynamically integrate input from many discrete subsystems. The weakness lies in the difficulty of defining a goal for the system. If there are many discrete components in the system, it is also a challenge to get them to function together.
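By way of contrast, here is an equally minimal bottom-up sketch, in which the agent has no built-in principles and instead adjusts feature weights in response to a tutor’s approval or disapproval. The feature names, update rule, and training examples are all illustrative assumptions, not a method proposed by Wallach and Allen:

```python
# Minimal bottom-up sketch: no fixed principles; the agent learns feature
# weights from a tutor's feedback. All names and numbers are illustrative.
FEATURES = ["harm_caused", "promise_broken", "benefit_to_others"]
weights = {f: 0.0 for f in FEATURES}

def score(action_features):
    """Higher scores mean the learned model finds the action more acceptable."""
    return sum(weights[f] * v for f, v in action_features.items())

def learn(action_features, tutor_approves, lr=0.1):
    """Nudge weights toward features of approved acts, away from rejected ones."""
    sign = 1.0 if tutor_approves else -1.0
    for f, v in action_features.items():
        weights[f] += sign * lr * v

# The tutor rejects a harmful act and approves a beneficial one.
learn({"harm_caused": 1.0, "promise_broken": 0.0, "benefit_to_others": 0.2}, tutor_approves=False)
learn({"harm_caused": 0.0, "promise_broken": 0.0, "benefit_to_others": 1.0}, tutor_approves=True)

print(score({"harm_caused": 1.0, "promise_broken": 0.0, "benefit_to_others": 0.0}))  # negative
```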

Eventually, we will need artificial moral agents that maintain the dynamic and flexible morality of bottom-up systems, which accommodate diverse inputs, while also subjecting their choices and actions to the evaluation of top-down principles that represent ideals we strive to meet. In addition to the ability to reason about moral challenges, moral machines may also require emotions, social skills, a theory of mind, consciousness, and empathy, and may need to be embodied in the world with other agents. These supra-rational capabilities will facilitate responding appropriately to challenges within certain domains. Future AI systems that integrate top-down and bottom-up approaches together with supra-rational capabilities will only be possible if we perfect strategies for artificial general intelligence.
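A toy version of the hybrid Wendell describes might rank options with a learned score while letting hard principles veto unacceptable ones. Again, every name and number below is an assumption made for illustration:

```python
# Minimal hybrid sketch: a learned (bottom-up) score ranks the options, while
# hard (top-down) principles veto unacceptable ones. All values illustrative.
def hybrid_choice(candidates, learned_score, hard_constraints):
    """Pick the highest-scoring candidate that violates no hard constraint."""
    permitted = [c for c in candidates if all(ok(c) for ok in hard_constraints)]
    return max(permitted, key=learned_score, default=None)

actions = [
    {"name": "divert trolley", "harm": 1.0, "benefit": 5.0},
    {"name": "warn workers",   "harm": 0.0, "benefit": 3.0},
]
no_harm = lambda a: a["harm"] == 0.0                # top-down veto
utility = lambda a: a["benefit"] - 2.0 * a["harm"]  # stand-in for a learned score

choice = hybrid_choice(actions, utility, [no_harm])
print(choice["name"] if choice else "no permissible action")  # -> warn workers
```

Even this toy makes the point visible: the veto layer keeps the learner from optimizing its way into prohibited territory, while the learned score supplies the situational flexibility that fixed rules lack.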

Ben: At what point do you think we have a moral responsibility to the AIs we create? How can we tell when an AI has the properties that mean we have a moral imperative to treat it like a conscious feeling agent rather than a tool?

Wendell: An agent must have sentience and feel pleasure and pain for us to have obligations to it. Given society’s lack of concern for great apes and other creatures with consciousness and emotions, the bar for being morally responsible to (ro)bots is likely to be set very high...
