The Moral Brain: Day One (J.'s Notes)
J. Hughes
2012-03-30

The conference is being videotaped and will be available online.

“Beyond Point-And-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics” Joshua Greene, Department of Psychology, Harvard University

Josh Greene started the conference off by emphasizing the importance of the Is-Ought distinction, the naturalistic-fallacy problem in moral cognition research: finding out how people think about right and wrong doesn't tell us what right and wrong should be. He turned to a metaphor of camera settings, automatic and manual: automatic is very efficient, and manual is very flexible. Moral willpower requires mental effort, and can get sidetracked by other cognitive demands (e.g. the chocolate cake vs. fruit decision while memorizing number strings). Saying yes to chocolate cake is automatic mode, and it takes the prefrontal cortical effort of manual mode to choose fruit. The same is true of the trolley-experiment difference between rational consequentialist thinking and emotive deontological judgments. People with brain damage to the emotional processing functions have to rely on the manual, rational mode, and find consequentialist judgments a lot easier to arrive at. This is supported by recent research showing that people who are more reflective about moral decision-making in general are also more likely to end up at a consequentialist conclusion.

However, Guy Kahane has proposed that there may be cases where consequentialism is the easy, emotive response, while the deontological response takes more reflection. But so far tests have not found cases where more reflection leads to deontological conclusions. The ventromedial prefrontal cortex (vmPFC) is a hub that mediates the competing signals from the amygdala (fear, disgust, monkey-brain moral sentiments) and the frontal cortical, rational consequentialist signals. If the amygdala signal is weaker, or the vmPFC is damaged, the rational consequentialist brain takes direct control. The origins of these mechanisms may reflect the implicit food-foraging math of our squirrelly ancestors: a nut in hand is valued over two in the tree, yet in the abstract five nuts are preferred to one.
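(An illustrative aside, not from the talk - J.) Greene gave no formula, but the squirrel example reads like textbook delay discounting. Under the standard hyperbolic model, with made-up values k = 1 and a delay/risk factor D = 2 for the nuts in the tree:

$$ V = \frac{A}{1 + kD}: \quad V_{\text{one nut in hand}} = \frac{1}{1+0} = 1 \;>\; V_{\text{two in the tree}} = \frac{2}{1 + 1 \cdot 2} = \frac{2}{3}, $$

while an abstract, delay-free comparison (D = 0) straightforwardly prefers five nuts to one. The automatic setting bakes in the discount; the manual setting can do the delay-free arithmetic.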

Greene then responded to the charge that neuro-moral studies blur Is and Ought by saying, no, all the studies do is show that our moral reasoning is often inconsistent in ways that few of us think are morally defensible. This pushes us to deeper reflection, without fundamentally answering what the correct decisions are. By reflecting on the ways that emotional responses lead us to inconsistency we generally do come to more consequentialist conclusions, but that still doesn't commit the naturalistic fallacy. (Video: Josh Greene speaking at Harvard last year, 10 min.)

“Emotion Versus Cognition in Moral Decision-Making: A Dubious Dichotomy” James Woodward, History and Philosophy of Science, University of Pittsburgh

Emotions have been disparaged in psychology and philosophy as primitive, rigid, and likely to lead decision-making astray. But the vmPFC/OFC processes emotional signaling in ways that make emotion inseparable from "rational" decision-making. Emotional signaling about the valence of various rewards is essential to decision-making, as Damasio's work has shown. Our emotional signaling is built off of "primary reinforcers" – food, shelter, sex – and their association with "secondary reinforcers" – money, prestige, Facebook likes. The frontal cortical structures – OFC/vmPFC – are able to assess the probabilities of various secondary rewards and their value, synthesizing emotional information from the amygdala with more cognitive/rational information, such as predictions about others' feelings from the "theory of mind" parts of the brain.

The same brain functions that mediate these daily and economic decisions are involved in moral decision-making. Differences in moral decision-making are partly determined by different weightings of the moral valence signals from the emotional regions. But the dorsolateral PFC, a rational part of the brain, can affect the value weighting from the emotional parts, such as when education about the health consequences of different foods changes the emotional reactions to those foods. In other words, it isn't that there are emotional thinkers and rational thinkers, but thinkers whose emotions are more or less shaped by their cognition. As Damasio demonstrated, decision-making is impossible without value input.
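(Another illustrative aside, not Woodward's model - J.) His point that cognition reshapes emotional value weighting, rather than merely overriding it, can be caricatured in a few lines of code; every weight and value below is invented for the sketch:

```python
# Toy sketch: decision value as a blend of amygdala-style valence and
# PFC-style appraisal. All numbers are invented for illustration.

def decision_value(emotional_valence, cognitive_value, emotion_weight):
    """Weighted blend of an emotional signal and a cognitive appraisal."""
    return emotion_weight * emotional_valence + (1 - emotion_weight) * cognitive_value

w = 0.8  # emotional signal dominates the weighting
cake_feel, fruit_feel = 1.0, 0.2        # raw emotional pull
cake_health, fruit_health = -0.5, 0.8   # cognitive appraisal of consequences

print(round(decision_value(cake_feel, cake_health, w), 2))    # 0.7  -> cake wins
print(round(decision_value(fruit_feel, fruit_health, w), 2))  # 0.32

# On this caricature of Woodward's point, health education doesn't change w;
# it retrains the emotional reaction itself, so cake simply feels less appealing.
cake_feel = 0.4
print(round(decision_value(cake_feel, cake_health, w), 2))    # 0.22 -> fruit now wins
```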

“When the Mind Matters for Morality” Liane Young, Assistant Professor of Psychology, Boston College

When studying moral psychology it is helpful to find moral judgments that are content-free and that predict moral intuitions widely, regardless of ideology or philosophy. An example is the intuition that someone is more morally blameworthy when they do harm intentionally rather than inadvertently. In order to make a distinction based on intentionality we need a theory of mind, mediated by the right temporoparietal junction (RTPJ). Experimentally disrupting the RTPJ with transcranial magnetic stimulation reduces subjects' ability to distinguish blameworthiness on the basis of intentionality. Dr. Young has done a number of experiments attempting to tease apart intentionality-based blame attributions from a very different set of universal moral reactions: those to eating taboo foods (e.g. dog and horse) and to consensual adult incest. Purity violations are less mediated by the RTPJ, so intentionality matters less in those cases than the act itself. Oedipus was distraught even though he had no idea he was sleeping with his mother, while someone guilty of accidental manslaughter is seen as less blameworthy than a premeditated murderer. People with autism also don't distinguish between intentional and accidental harms.

One more mediating factor is that people see less difference in blameworthiness between their own intentional and accidental harmful actions, and more for others'. Young suggests that this is because we rarely theorize about our own minds, and aren't good at it, while we often theorize about the minds of others. This suggests we just aren't as sure whether our own actions are accidental or intentional.

“The Representation of Reinforcement Values in Care-Based Morality & the Implications of Dysfunction for the Development of Psychopathic Traits” James Blair, Chief of the Unit on Affective Cognitive Neuroscience, National Institute of Mental Health, NIH

Psychopaths have an impaired ability to respond to other people's emotions. Children learn to avoid antisocial behavior in part because they are trained to recognize pain in others and to feel shame if they cause it. Psychopaths do not react to signs of fear, pain, and distress in others. They appear to have impairments in amygdala responsiveness and in the emotional signaling processed by the vmPFC. When psychopaths were tested on Jon Haidt's five moral intuitions, researchers found that the more psychopathic the subject, the less they responded to three of the moral intuitions – the two liberal ones, harm/care and fairness, and one of the conservative ones, purity/sanctity. But psychopathy was not associated with the other two "conservative" moral intuitions, ingroup preference and deference to authority. In other words, psychopaths are less likely to be liberals, or to have strong disgust reactions, but are just as likely to be racist, nationalist, and committed to might-makes-right.

Differences in the processing of emotional cues suggest that people may develop different moralities – care-based, authority-based, and disgust-based – depending on which cues they are more responsive to. Fairness may be correlated with care because we associate injustice with harm, which would explain why liberals are more care/harm- and fairness-oriented.

“Is There One Moral Brain?” Walter Sinnott-Armstrong, Department of Philosophy and the Kenan Institute for Ethics, Duke University

The neuroscientific work on morality has been deconstructing the traditional categories of moral analysis, just as it has done to discussions of memory (semantic versus episodic, working versus long-term). Jon Haidt's work suggests five categories of moral thinking, but each of those can be broken down. The virtues and vices are additional categories of moral thinking. Courage, modesty, moderation, and industry are all virtues for some cultures. Sinnott-Armstrong believes all these areas share nothing that makes them all "moral." Care/harm doesn't work as a unifying factor across moral intuitions or virtues – many have nothing to do with harming others. Haidt has suggested that overcoming egoism is the unifying factor, but it doesn't seem like that works for all morals either (although it seems a lot more inclusive - J.). Different brain mechanisms appear to be related to each moral idea – justice, harm, and so on.