Emotional Autonomy: Categories and Logics
Christopher Reinert
2013-09-23 00:00:00

This article is part of a series on Emotional Autonomy and Robotics.





Current technologies have made building robots that are autonomous in operation a routine task. Engineers can easily program a service robot to operate in most domestic environments. Autonomous air vehicles can fly and land without the assistance of a human operator. The development and deployment of a robot with this general autonomy is no longer a novel challenge to researchers.



The question becomes: are these systems truly autonomous? While a service robot does not need to understand ethical or emotional situations, would a social robot be considered autonomous if it could not understand emotions? A proposed solution to the problem of ethical and emotional autonomy, and one that forms the basis of my research interests, is the development of robotic emotions. An emotionally autonomous system, capable of feeling and understanding emotions, would theoretically act in a moral manner.



If such a system could be built, what type of emotions would the robot have and what are the ethical limitations of such a system?



Categories



When comparing a robot’s abilities to a human’s or an animal’s, questions of categorical correspondence emerge. The philosophical categories of ‘human’ and ‘animal’ are clearly defined and have well-known exemplars for observers to refer to. We can easily identify traits that are unique to a category, shared across a category, or unique to specific subsets of the larger category. For example, not all animals have fur; fur is a trait unique to specific subsets of the category ‘animal.’



These categories inform our moral reasoning. Animal cruelty is considered morally wrong because, like humans, animals are capable of feeling pain. Specific groups within categories receive special moral consideration. For instance, the elderly are accorded special social privileges and social status based on their age alone.



Where do robots enter into this moral equation? I will give humans the benefit of the doubt and say that, socially, we have yet to define these categories because we rarely encounter social robots. Humans consider animals and other humans to be moral actors. Could this reasoning be extended to a social robot?



I would argue that we could consider a robot moral, but we need to clearly define what it means for a robot to be moral. Humans have a clear conception of what human morality is and what animal morality is. We know that social animals, like dolphins, elephants, whales, and primates, act in ways we consider ethical. They care for the sick and elderly among them, have a concept of fairness, and display social emotions. There is debate about the ability of these animals to feel empathy.



If your friend claimed that their pet dog displayed guilt when confronted with signs of wrongdoing, you would probably believe them. Dogs have a concept of guilt, but whether it is the same as human guilt is debatable. If your friend claimed their dog was a patriotic American, you would find that claim difficult to believe. National pride is a distinctly human behavior.



A priori, we understand that we use different standards of evidence when comparing animals to humans. We do not expect animals to feel guilt or shame exactly as we do, but we still expect to see something that resembles it. When robots enter the picture, we are unsure which existing category to compare them to.



We do not know if a robot, a nonbiological being, can feel emotions in the same way a biological organism could. To resolve this ambiguity, some philosophers have suggested creating a special category for robot emotions and morality. Mark Coeckelbergh has suggested that robots are quasi-moral and quasi-emotional agents [2].



The quasi-emotional status makes no comment on the robot’s potential conscious state. We may never know if the robot is conscious. Coeckelbergh implies we do not have to know: as long as the robot acts in a consistent and ethical manner, humans will treat it as an ethical agent.



Critiques



Some researchers claim that robotic emotions would be inherently deceptive in nature. Noel Sharkey argues that robot designers who give their creations the ability to display emotions are deluding users into believing that the robot is capable of genuine emotional expression when it is not [3]. Fostering such a delusion is seen as inherently immoral.



I take issue with such a claim for two reasons. First, apply this claim to humans. Imagine coming across an individual in distress. Is their emotional expression born of genuine pain or are they trying to deceive you?



When you replace the robot with a human, the argument becomes insulting. While people do sometimes use displays of emotion deceptively, we can generally assume that emotional expressions are honest. If we see someone crying, we assume they are sad.



Now, a robot may never have a subjective self-image, but I would argue that humans would treat a crying robot the same way they would treat a crying baby. In the face of uncertainty, are we really going to waste time debating whether an individual is capable of feeling pain, or are we going to address the cause?



The second issue I take with this argument is that it assumes humans are not willing participants in the deception. We could argue that all emotional exchanges are based on some level of deception. I assume you are conscious and capable of maintaining your self-image; I do not ask you to prove it.



Why then do we apply a different standard of evidence to a robot? Does the robot need to prove it is conscious before receiving help?



While I take issue with Sharkey on this point, I do agree with him on ethical issues regarding robots in the home. A major concern that technologists have to address is the issue of authoritarian technology. An example of an authoritarian device would be a robot that turned off the stove the moment its owner left the room to answer the phone.



In theory, the robot may be acting in a correct manner: it perceives a threat to the owner’s life. In reality, the owner may find the robot’s behavior unhelpful and oppressive; they intended to return to the room upon completing the call.



There is an ethical middle ground on this issue. The robot could turn off the stove only if the user has not returned to the kitchen within a preset time. For example, the robot would enter the room and wait fifteen minutes before turning off the stove, rather than turning it off immediately. This would prevent the robot from interfering with a person’s activities of daily living while still keeping them safe if they were absent for an extended period. A minimal sketch of this timeout rule appears below.
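As a rough illustration only, the following Python sketch implements the timeout rule under assumed interfaces: person_present, stove_is_on, and turn_off_stove are hypothetical callbacks standing in for whatever sensing and actuation a real home robot would expose.

```python
import time

# Grace period from the example above: fifteen minutes of absence
# before the robot intervenes.
GRACE_PERIOD_SECONDS = 15 * 60


def monitor_stove(person_present, stove_is_on, turn_off_stove, poll_interval=5):
    """Turn the stove off only after a prolonged absence,
    not the moment the person leaves the room."""
    last_seen = time.monotonic()
    while stove_is_on():
        if person_present():
            # Reset the absence timer whenever the person is in the kitchen.
            last_seen = time.monotonic()
        elif time.monotonic() - last_seen > GRACE_PERIOD_SECONDS:
            # Only now does the robot override the person's activity.
            turn_off_stove()
            break
        time.sleep(poll_interval)
```

The design choice here is that the robot defaults to non-interference and only acts once the owner’s absence makes the risk concrete.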



In addition, there is the issue of privacy that must be addressed. The elderly owner may not want the robot recording every detail of their life. A designer has to balance an individual’s right to privacy with keeping them safe. A possible solution is to record only at preset times or in preset locations, giving the user some control over the conditions under which the interaction occurs. A sketch of such a policy check follows.
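To make the idea concrete, here is a minimal sketch of such a consent policy, assuming hypothetical example values: the allowed hours, the allowed rooms, and the may_record helper are all illustrative and would in practice be chosen by the owner, not the designer.

```python
from datetime import time as clock_time

# Hypothetical owner-chosen settings: recording is permitted only during
# these hours and in these rooms; everything else defaults to "do not record".
ALLOWED_HOURS = [(clock_time(9, 0), clock_time(17, 0))]
ALLOWED_ROOMS = {"kitchen", "living_room"}


def may_record(current_time, current_room):
    """Return True only if both the time and the location are on the owner's list."""
    in_allowed_hours = any(start <= current_time <= end
                           for start, end in ALLOWED_HOURS)
    return in_allowed_hours and current_room in ALLOWED_ROOMS
```

For example, may_record(clock_time(10, 30), "kitchen") would return True, while any request to record in the bedroom, or at midnight, would be refused.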






Further reading:



Noel Sharkey has written some fascinating pieces on the ethical use of robots in eldercare settings. He raises valid concerns about the loss of dignity and privacy, which roboticists should consider in the context of fundamental human rights.



Mark Coeckelbergh has written extensively on the ethics of technology and robots.




[1] Wikipedia.com/Autonomous Robots.





[2] Mark Coeckelbergh, “Moral Appearances: Emotions, Robots, and Human Morality.”





[3] Noel Sharkey, “The Ethical Frontiers of Robotics.”