The Moral Brain: Day Two Morning (J.'s Notes)
J. Hughes
2012-03-31

“The Moral Life of Babies and Why It Matters” Paul Bloom, Psychology & Cognitive Science, Yale University

Babies make moral judgments, as laboratory experiments demonstrate. Shown puppets or cartoon characters that act in kind or spiteful ways, babies under one year prefer the kind characters over the spiteful ones and favor punishment of meanness. On the other hand, Fehr's group in Switzerland has tested the development of egalitarian preferences in children aged 3-8. Until about seven or eight, children have little preference for egalitarian outcomes: they prefer relative advantage over others, even at a cost to themselves. Babies recognize equality and prefer egalitarian outcomes for themselves, but feel no shame about grabbing all the cookies until they are eight. This suggests that the development of moral sensibilities around kindness is different from fairness, and that both are different from innate sensibilities such as disgust.
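To make the choice structure concrete, here is a minimal sketch of the allocation games in Fehr-style egalitarianism experiments. The specific allocations follow Fehr, Bernhard & Rockenbach's published design; assuming these are the games behind the talk's findings is my inference, not stated in the notes.

```python
# Sketch of Fehr-style allocation games (allocations assumed from
# Fehr, Bernhard & Rockenbach 2008, not taken from the talk itself).
# Each option is (candies for self, candies for another child).
GAMES = {
    "prosocial": [(1, 1), (1, 0)],  # equality costs the chooser nothing
    "sharing":   [(1, 1), (2, 0)],  # equality costs the chooser a candy
}

def egalitarian_choice(options):
    """Return the option that minimizes the payoff gap (the egalitarian pick)."""
    return min(options, key=lambda alloc: abs(alloc[0] - alloc[1]))

for name, options in GAMES.items():
    print(name, "->", egalitarian_choice(options))

# The developmental finding: before ~7-8, most children take the
# advantageous (2, 0) option in "sharing"; by eight, most choose (1, 1).
```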

Disgust sensitivity is a strong predictor of attitudes towards sexual and ethnic minorities, for instance, and triggering disgust with smells and stickiness makes people more conservative on those dimensions. The disgust sensibility is clearly innate for some things - the smell of vomit - but has to be trained to be associated with others. The difference between these "moral" sensibilities is that care/harm and fairness are important to living together in community, and are still functional, while disgust sensibilities are probably rooted in the avoidance of disease and are now largely dysfunctional. Although Leon Kass and Mary Douglas have argued for a more functional origin and contemporary utility to disgust, Martha Nussbaum is correct that it really leads us astray.

“Feeling Good About Feeling Bad: Moral Aliefs and Moral Dilemmas” Tamar Gendler, Professor of Philosophy, Yale University

Philosophers from Plato to contemporary neuroethicists have distinguished between rational ethical control and habit, the passions, and the non-rational sentiments. Likewise some of our beliefs are rational while others are not. Gendler has coined the term "alief" for non-rational, implicit belief-like attitudes: fear of things that you know aren't really dangerous (fear of heights), or belief in the efficacy of actions that you know are ineffective (yelling at the TV). Implicit racial bias, such as the demonstrated bias to shoot or convict black versus white men, is an example of an immoral alief.

For Aristotle the virtuous person - the sophron - has carefully cultivated instinctive responses that are consistent with virtue; the virtuous person is not the one constantly struggling to suppress immoral sentiments and urges. For Kant, on the other hand, the person with a natural inclination to virtue, who feels no inner struggle, is not really virtuous; only the person consciously mastering contrary inclinations - the enkrates - is. The situationists challenge Aristotle's theory of moral character by showing that unconscious virtuous inclinations are not virtuous in all situations, and that there is little evidence of people who are compassionate or patient or courageous across all situations. On the other hand, fluent social interaction requires Aristotelian entrainment so that moral behavior is automatic and unconscious. And there are cases where our habitual moral sentiments and behavior do not give us good guidance and have to be overridden by conscious moral reflection.

Since there are atypical situations that require rapid, fluent, unconscious action - such as killing the enemy while obeying the rules of war in battle - we need to continually cultivate discriminating wisdom about which situations require which kinds of responses. Learning to automatically identify and value the healthiness of food is an example. On the other hand, we dislike and distrust people who make tragic trade-off decisions easily and without any sign of moral distress (e.g. easily killing one to save many), and empirically psychopaths do find utilitarian trolley and lifeboat choices easy. In those cases the conflict is between competing moral demands. But we have no problem with people who feel no distress in suppressing self-interest or feelings of disgust in order to do good.

“Morphing Morals: Neurochemical Modulation of Moral Judgment and Behavior” Molly Crockett, Department of Economics, University of Zurich

People who are more emotionally aroused when they see harms are less likely to endorse harming others in moral dilemmas, such as pushing a fat man onto the trolley track. People with less serotonin are less sensitive to harming others, and increasing people's serotonin decreases their willingness to harm others. This effect interacts with the subject's natural level of empathy: low-empathy people change less with serotonin enhancement than high-empathy people. Serotonin also affects our willingness to punish others for unfair behavior. In the Ultimatum Game one subject gets to cut the pie and make an offer to the responder. If the offer is refused, neither person gets any pie. Rationally the responder should accept any offer, since they will be better off, but people reject unfair offers, accepting a cost in order to punish the unfairness. When responders have lower levels of serotonin they are less tolerant of unfair proposals and more willing to punish the unfairness.
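The payoff logic of the Ultimatum Game is simple enough to sketch in a few lines of Python. The stake size, the rejection threshold, and the function name below are illustrative assumptions, not parameters from Crockett's experiments:

```python
# Minimal Ultimatum Game sketch. Stake, threshold, and names are
# illustrative assumptions, not taken from Crockett's studies.

def ultimatum_round(stake, offer, rejection_threshold):
    """One round: the proposer offers `offer` out of `stake`.

    The responder accepts any offer at or above their fairness
    threshold; rejection leaves both players with nothing, i.e.
    the responder punishes unfairness at a cost to themselves.
    """
    if offer >= rejection_threshold:
        return stake - offer, offer  # (proposer payoff, responder payoff)
    return 0, 0                      # rejection: neither gets any pie

# A purely "rational" responder accepts any positive offer...
print(ultimatum_round(stake=10, offer=1, rejection_threshold=1))  # (9, 1)

# ...but real responders often reject low offers, sacrificing their
# own payoff to punish the unfair split - more so, per Crockett,
# when their serotonin levels are depleted.
print(ultimatum_round(stake=10, offer=1, rejection_threshold=3))  # (0, 0)
```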

Punishment of unfairness, however, can result both from the desire for revenge and from the desire to support pro-social norms. Brain scans show that the revenge-associated parts of the brain light up when subjects reject unfair offers, and serotonin depletion increases these revenge motives while blunting the brain's pleasure at fair offers - lower serotonin increases resentment and revenge, but actually reduces the desire for justice. When subjects are given the opportunity to punish unfairness that doesn't affect them directly, those with more serotonin are more willing to punish it. (BTW serotonin falls when people have less tryptophan in their diet, and tryptophan is mainly found in meat, so vegetarians might have different moral sentiments.) Serotonin levels are also affected by a number of social factors, such as stress and status, and serotonin has a wide range of effects on mood and self-control. Serotonin also increases connectivity between parts of the brain, so part of the effect may be serotonin increasing the ability of the neocortex to regulate emotional responses, like revenge, and to boost abstract moral codes, like justice.

“Are Intuitions Heuristics?” S. Matthew Liao, Professor of Philosophy, New York University

Liao starts by reviewing Josh Greene's argument that automatic "deontological" judgments, driven by the emotive parts of the brain, are less ethical than the manual "utilitarian" judgments produced by the rational, neocortical parts of the brain. Along the same lines, Kahneman and Tversky have shown that the brain is subject to many intuitive biases that lead to statistical and judgment errors: confirmation bias, the availability heuristic, the representativeness heuristic, and so on. Some argue, however, that moral intuitions and their heuristics can be reliable guides to moral choice. Gigerenzer points to cases where use of a heuristic leads to the correct answer, and Malcolm Gladwell points to cases where the first thought is more accurate than a decision reached after deliberation. Moreover, sometimes what is happening unconsciously is deductive reasoning rather than an irrational heuristic, and there are many moral dilemmas that require slow, deliberate thought. Moral intuitions may also reflect unconscious rational principles that the person consciously rejects, such as the doctrine of double effect. All this argues that we may need to unpack the distinction between fast, effortless intuition and slow, effortful reasoning.