Enhancing Virtues: Fairness (Pt 2)
J. Hughes   Feb 28, 2015   Ethical Technology  

Fairness is a liberal virtue rooted in an instinctive aversion to cheating and inequality, then filtered through prefrontal cognition. Since the spread of Enlightenment values, fairness has grown in importance as a virtue, especially for liberals with stronger prefrontal cortices and less reactive amygdalae. Fairness finds less support among conservatives, for whom respect for authority, ingroup loyalty and disgust/sanctity are more neurologically salient. What impact can social policy and individual practices have on the balance between fairness and our cognitive biases?

In this series:

Enhancing Virtues: Building the Virtues Control Panel

Enhancing Virtues: Positivity

Enhancing Virtues: Self-Control and Mindfulness

Enhancing Virtues: Caring (Part One)  (Part Two)   (Part Three)

Enhancing Virtues: Intelligence (Part One)   (Part Two)   (Part Three)   (Part Four)

Enhancing Virtues: Fairness (Part One)   (Part Two)   (Part Three)


Building a Fairer Society

Education Much of the spread of the liberal virtues of tolerance, anti-authoritarianism, egalitarianism and secularism can be attributed to rising levels of education, which both spreads those norms and strengthens the prefrontal cognitive faculties and habits of reflection that enable them. For instance, educational level is the strongest predictor of Americans’ tolerance of sexual and racial minorities and general liberalism,[1][2] and of Europeans’ acceptance of immigrants.[3] Education is also a predictor of endorsement of the fairness and caring moral intuitions. In an analysis of almost 60,000 people who had taken the Haidt et al. Moral Foundations survey, Van Leeuwen, Koenig, Graham and Park found that people with more education were more likely to endorse the caring and fairness intuitions.[4]

Class and Social Equality The structure of society, and our position within it, also has a powerful effect on the way we view morality and fairness. People become more tolerant not only as they are exposed to higher education, but also as they become more financially secure[5] and live in more equal societies.[6] Citizens of more equal societies are also generally more supportive of redistributive policies; acceptance of social inequality is both a cause and an effect of actual social inequality.[7][8] On the other hand, the affluent, influenced by their vested interest in the status quo, are generally less supportive of egalitarian redistribution than the poor.[9]

So the natural political polarization along class lines is between an egalitarian but racial-nationalist, moralistic and authoritarian working class, and tolerant and cosmopolitan but inegalitarian middle and upper classes.[10] There is less of this moral polarization in more equal countries, however; the relatively equal Finns and Danes have a higher moral consensus around the importance of an equal and tolerant society than the relatively unequal Britons and Swiss.[11] In other words, social inequality and social class distort the impact of liberal virtues on moral cognition, especially by weakening the egalitarian moral intuitions of educated and affluent cosmopolitans, while liberal virtues are expressed more consistently and broadly in more equal societies.


Training to Reduce Implicit Racial Bias

One especially timely application of fairness enhancement is the attempt to reduce implicit racial biases in policing, spurred by the disproportionate killing of black men by American police. But evidence for the ubiquity of unconscious biases about race, gender and many other things has been accumulating for sixty years, since a study of racialized attitudes towards dolls helped convince the US Supreme Court to decide the desegregation case Brown v. Board of Education. The most common tool used to test for implicit racial bias today is the Harvard Implicit Association Test (IAT). The IAT asks subjects to rapidly match positive and negative words on a computer screen with white or black faces. Responding more quickly when positive words are paired with white faces and negative words with black faces than when the pairings are reversed is taken as a measure of unconscious racial bias, which is often at odds with the subject’s professed values. The test finds unconscious negative associations with black faces in both white and black subjects.
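The arithmetic behind IAT scoring can be illustrated with a toy sketch: compare average reaction times between the two pairing conditions, scaled by their variability. This is a simplified version of the D-score idea, not Harvard’s actual scoring algorithm, and all the reaction times below are hypothetical.

```python
from statistics import mean, stdev

def iat_d_score(congruent_rts, incongruent_rts):
    """Toy IAT-style score: the difference in mean response latency
    between the 'incongruent' block (e.g. black faces + positive words)
    and the 'congruent' block, divided by the pooled standard deviation.
    A positive score means faster responses in the congruent block,
    i.e. an implicit association in that direction."""
    pooled_sd = stdev(congruent_rts + incongruent_rts)
    return (mean(incongruent_rts) - mean(congruent_rts)) / pooled_sd

# Hypothetical reaction times in milliseconds
congruent = [620, 650, 590, 610, 640]    # white+positive / black+negative
incongruent = [720, 760, 700, 740, 710]  # white+negative / black+positive

print(round(iat_d_score(congruent, incongruent), 2))  # prints 1.75
```

A score near zero would indicate no difference between the pairings; the real test also trims outlier latencies and penalizes error trials, which this sketch omits.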

Many strategies for reducing biases have been attempted, but only now are we systematically evaluating their efficacy. As with the rethinking of psychotherapeutic approaches to trauma, which has discovered that some forms of talk therapy reinforce rather than dampen trauma, research on anti-racism programs has found that some can actually cause resentment and reinforce racial antagonism.[12] Some of the most effective interventions turn out not to be discussions of racism or the importance of fairness, but rather exercises that build positive associations with stigmatized groups, such as reading about the heroism of black soldiers, using a black avatar in a video game[13] or imagining oneself being rescued by a black firefighter.[14] Loving-kindness meditation, which explicitly works on associating positive emotions with people you don’t like, has also been found to be effective in reducing implicit racial bias. In one controlled trial, whites were randomly assigned to practice loving-kindness meditation, to discuss loving-kindness, or to do nothing; the loving-kindness meditators saw significant declines in implicit racism.[15]

These methods work by changing the emotional valence of the stigmas bubbling up from the amygdala. Another approach, however, is to slow down those reactions and give the prefrontal cortex a chance to intercept and reject the biases. There is evidence, in fact, that people with stronger executive function exhibit less implicit bias.[16][17] By shining a light of awareness on our biased sentiments we can develop our moral muscles.[18]

One study that demonstrated the effects of bias awareness looked at the calls made by National Basketball Association (NBA) referees before and after a major report on referee racial bias was published. The report showed that referees were more likely to call personal fouls against basketball players who were of a different race than the referee.[19] The report was released in May 2007, and received a lot of attention in basketball circles. When the team looked for the same patterns after the report had been published, however, they had disappeared.[20] The referees, along with society, had examined their behavior and overcome their unconscious biases.
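The comparison at the heart of that finding can be sketched in a few lines: tally foul calls separately for same-race and opposite-race referee-player pairings and compare the rates. All the numbers below are made up for illustration; they are not data from the Price and Wolfers study.

```python
# Hypothetical observation counts: (referee_race, player_race,
# fouls_called, player-minutes observed)
calls = [
    ("white", "white", 180, 4000),
    ("white", "black", 230, 4000),
    ("black", "black", 185, 4000),
    ("black", "white", 225, 4000),
]

def foul_rate(same_race: bool) -> float:
    """Fouls called per 48 minutes of play, restricted to pairings
    where referee and player race do (or do not) match."""
    fouls = sum(f for r, p, f, m in calls if (r == p) == same_race)
    minutes = sum(m for r, p, f, m in calls if (r == p) == same_race)
    return fouls / minutes * 48

print(f"same-race rate:     {foul_rate(True):.2f}")   # prints 2.19
print(f"opposite-race rate: {foul_rate(False):.2f}")  # prints 2.73
```

In the toy data the opposite-race rate comes out higher, mirroring the direction of the reported bias; the actual study controlled for player, referee and game characteristics rather than comparing raw rates.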



Practicing Mindfulness of Biases 

Would the same effect have been achieved, however, if only the referees had become aware of their racial biases, and not society as well? One suggestive study found that bilingual people make more utilitarian decisions in the trolley dilemma when they use their less fluent language; having to think harder slows the amygdala’s instinctive rejection of pushing the fat man.[21] There is also accumulating evidence that mindfulness meditation can dampen biases such as ageism and racism,[22] change political cognition, and actually shrink the amygdala.[23]

In 2013 geneticist James Fowler and some colleagues recruited 139 people for an experiment on the effect of mindfulness on political opinion. The participants were told they would be shown some disgusting images, and were assigned to one of three groups. The first group was given this instruction to mindfully re-appraise feelings of disgust:

As you view the images, please try to adopt a detached and unemotional attitude. Or, you could think about the positive aspect of what you are seeing. Please try to think about what you are seeing objectively, watch all images carefully, but please try to think about what you are seeing in such a way that you feel less negative emotion.

A second group was instructed to suppress feelings of disgust:

As you view the images, if you have any feelings, please try your best not to let those feelings show. Watch all images carefully, but try to behave so that someone watching you would not know that you are feeling anything at all.

The third group was given no instruction. Then all three groups were shown images of things like cockroaches and dirty toilets, and asked to fill out the Moral Foundations Questionnaire that Jonathan Haidt developed to test moral intuitions. The mindful re-appraisal group was significantly less disgusted, and was significantly less likely to express moral purity concerns on the Moral Foundations questions.

Next, the researchers recruited 119 people and first asked them to answer political questions. They then wired the participants up to track their heart rates, and asked a series of questions to measure their sensitivity to disgust, such as whether they would touch a dead body. The participants were then randomly assigned to the three groups (re-appraisal, suppression and no instruction) and shown disgusting images. The mindful re-appraisers’ heart rates did not respond to the images, while those of the other two groups did. Then they were tested on moral intuitions and policy views. Disgust-prone subjects remained more conservative in the suppression and no-instruction groups. But for the mindful re-appraisers, disgust sensitivity was no longer related to morally and politically conservative views.


Fairness Reminders and Ethical Assistance Software

In a sense, we have used exocortical aids to improve moral decision-making since the beginning of civilization, in the form of amulets, tattoos, clothing and haircuts designed to remind us and our community of moral commitments. Today the moral exocortex has expanded to include “What Would Jesus Do?” bracelets and electronic Bible and Koran apps. But many secular digital aids are also emerging (Selinger and Seager 2012). The New York State Bar Association, for instance, has created an app that gives users access to more than 900 decisions of its Professional Ethics Committee on issues confronting judges and attorneys (NYSBA 2012). The MoralCompass app provides a flowchart of moral decision-making questions, and the SeeSaw app allows users to query other users about which action they should take in a situation (Statt 2013).

Secular ethics assistants will also likely emerge from the efforts to design “moral machines” (Wallach & Allen 2011) and ethical artificial intelligence (Anderson and Anderson 2007). Some of this work is being done in order to provide onboard rules of engagement for autonomous battlefield robots, but moral decision applications are being considered for robots in many occupations, including industry, transportation, and medicine. Should your autonomous car drive you into the river to avoid killing five others?[24] How should a robotic home caregiver react when a demented patient refuses to bathe, eat or take medication?[25] The effort to codify and balance all the factual and value considerations involved in messy, human moral decision-making will be very complicated, and will result in multiple possible morality settings, since there is wide moral variability in humans. As Wallach and Allen have argued, the full replication of recognizable human moral decision-making in machines will probably require both human-level cognitive abilities, and the program of character development and moral reasoning that produces mature morality in humans.

Eventually, as these morality AIs become more sophisticated, and woven into our environment and exocortices, and then tied directly to our brains, they will become a seamless part of our own cognition, allowing us to choose consciously to achieve levels of moral consistency that are currently impossible for most. [26] 

But what if our inner AI angel reminders aren’t as loud as the persistent voice of our hindbrain devils? Are there ways that we can affect the way our brain works to strengthen the hand of fairness and moral cognition?








[1] Kozloski MJ. Homosexual Moral Acceptance and Social Tolerance: Are the Effects of Education Changing?  Journal of Homosexuality. 2010; 57:1370–1383.

[2] Davis JA. A generation of attitude trends among US householders as measured in the NORC General Social Survey 1972–2010. Social Science Research. 2013; 42: 571–583.

[3] Borgonovi F. The relationship between education and levels of trust and tolerance in Europe. British Journal of Sociology. 2012; 63(1): 146-167.

[4] Van Leeuwen F, Koenig BL, Graham J, Park JH. Moral concerns across the United States: associations with life-history variables, pathogen prevalence, urbanization, cognitive ability, and social class. Evolution and Human Behavior. 2014; 35: 464-471.

[5] Carvacho H, et al. On the relation between social class and prejudice: The roles of education, income, and ideological attitudes. European Journal of Social Psychology. 2013;43: 272–285.

[6] Milligan S. Economic Inequality, Poverty, and Tolerance: Evidence from 22 Countries. Comparative Sociology. 2012; 11(4): 594-619.

[7] Kerr WR. Income inequality and social preferences for redistribution and compensation differentials. Journal of Monetary Economics. 2014; 66: 62-78.

[8] Trump KS. The Status Quo and Perceptions of Fairness: How Income Inequality Influences Public Opinion. Dissertation submitted to the Harvard University Faculty of Arts and Sciences. 2012.

[9] Andersen R, Yaish M. Public Opinion on Income Inequality in 20 Democracies: The Enduring Impact of Social Class and Economic Inequality. AIAS GINI Discussion Paper 48. 2012.

[10] Flavin P. Differences in Income, Policy Preferences, and Priorities in American Public Opinion. Paper presented at the annual meeting of the Midwest Political Science Association 67th Annual National Conference. 2009.

[11] Kulin J, Svallfors S. Class, Values, and Attitudes Towards Redistribution: A European Comparison. Eur Sociol Rev. 2013; 29 (2): 155-167.  doi: 10.1093/esr/jcr046

[12] Moss-Racusin CA, et al. Scientific Diversity Interventions. Science. 2014; 343(7): 615-616.

[13] Peck TC, et al. Putting yourself in the skin of a black avatar reduces implicit racial bias. Consciousness and Cognition. 2013; 22(3): 779–787.

[14] Lai CK, et al. Reducing implicit racial preferences: I. A comparative investigation of 17 interventions.  J Exp Psychol Gen. 2014;143(4):1765-85. doi: 10.1037/a0036260.

[15] Kang Y, Gray JR, Dovidio JF.The nondiscriminating heart: lovingkindness meditation training decreases implicit intergroup bias. J Exp Psychol Gen. 2014;143(3):1306-13. doi: 10.1037/a0034150.

[16] Diamond BJ, et al. Implicit Bias, Executive Control and Information Processing Speed. Journal of Cognition and Culture. 2012; 12(3-4): 183 – 193.

[17] Ito TA, et al. Toward a comprehensive understanding of executive cognitive function in implicit racial bias. Journal of Personality and Social Psychology. 2015; 108(2): 187-218.

[18] Fitzgerald C. A Neglected Aspect of Conscience: Awareness of Implicit Attitudes. Bioethics. 2014; 28(1): 0269-9702.

[19] Price J, Wolfers J. Racial Discrimination Among NBA Referees. The Quarterly Journal of Economics. 2010; 125 (4): 1859-1887.

[20] Pope DG, Price J, Wolfers J. Awareness Reduces Racial Bias. NBER Working Paper. 2013.

[21] Costa A, et al. Your Morals Depend on Language. PLoS ONE. 2014; 9(4): e94842.

[22] Lueke A, Gibson B. Mindfulness Meditation Reduces Implicit Age and Race Bias: The Role of Reduced Automaticity of Responding. Social Psychological and Personality Science. 2014; 1-8.

[23] Holzel BK, et al. Stress reduction correlates with structural changes in the amygdala. SCAN. 2010; 5: 11-17.

[24] Goodall NJ. Machine Ethics and Automated Vehicles. In  Meyer G, Beiker S (eds.), Road Vehicle Automation. Springer. 2014; 93-102.

[25] Lin P, Abney K, Bekey GA. Robot Ethics: The Ethical and Social Implications of Robotics. MIT Press. 2011.

[26] Savulescu J, Maslen H. Moral Enhancement and Artificial Intelligence: Moral AI? Romportl J, et al. (eds.), Beyond Artificial Intelligence. Springer. 2015: 79-95.

James Hughes Ph.D., the Executive Director of the Institute for Ethics and Emerging Technologies, is a bioethicist and sociologist who serves as the Associate Provost for Institutional Research, Assessment and Planning for the University of Massachusetts Boston. He is author of Citizen Cyborg and is working on a second book tentatively titled Cyborg Buddha. From 1999-2011 he produced the syndicated weekly radio program, Changesurfer Radio. (Subscribe to the J. Hughes RSS feed)
