After a breakfast bagel feast on the ground floor Green Room of NYU’s Silver Building, the early-Sunday morning audience trooped up to the 7th floor auditorium to hear the final day’s lecturers. Here’s the better-brain information I gleaned:
“The Benefits and Risks of Virtue Engineering” by James Hughes, IEET’s Executive Director
Dr. J. placed the current bioethics debate in historical context by noting that neurotechnology has always been “emerging,” and that society has always used neuro-techniques (plus enculturation) on our brains to transform us into social animals that cooperate with civilization’s dictates. Two neuro-techniques historically used in “virtue engineering,” he pointed out, are Ayurvedic vegetarianism and fasting. Additionally, Dr. J noted that society limits substances that cause neuro-damage, i.e., moral degeneration, by prohibiting drugs and alcohol. These laws perform the function of “enhancing virtue” by “suppressing vice.” Summarizing this observation, he noted that humanity is generally quick to censor drugs for moral reasons, but not eager to prescribe pharmaceuticals to accomplish identical goals.
Hughes relayed a relevant personal anecdote next. In his childhood, after he was diagnosed with ADD, he regarded his daily ingestion of Ritalin as his “moral responsibility.” Referring to this, he chastised bio-conservatives who insist that the use of medication for moral reasons is a violation of human rights. Instead, he insisted, it should be viewed as allowing individuals to be socially responsible.
Returning to historical precedents, he noted that the intellectual accomplishments of the Enlightenment have frequently been connected to the emergence of coffee (a “neuro-enhancer”) as a popular beverage. Coffee, it was widely recognized, encouraged open-mindedness and “makes the genius quicker.” Perhaps a future Enlightenment awaits humanity if additional neuro-enhancers become available? Hughes briefly mentioned the potential of oxytocin and serotonin. Additionally, he pointed out that the use of “smart drugs” is defended as “Cognitive Liberty”… so why aren’t pharmaceuticals viewed as equally valid for Moral Progress? An example of this incongruence, he noted, is the ACLU’s opposition to chemical castration for rapists; the organization refuses to perceive the medication as a “moral enhancement drug.”
Referring to “A Clockwork Orange,” Hughes noted that the movie is deeply sympathetic to the protagonist, a thoroughly nasty hoodlum in need of significant moral enhancement. Despite this, audiences are cinematically manipulated to deplore neuro-enhancement as a way to better his behavior. A final observation from Dr. J: he hopes the treatment for transgender people becomes quicker and less expensive than the present one-year, $50,000 burden.
“Perhaps It Would Help to Distinguish Between ‘Engineering’ and ‘Cultivating’ Virtue” Erik Parens, The Hastings Center
The bioethics debate about enhancement has moved beyond the “First Wave” of the 2000s, when enthusiasts (such as John Harris and Julian Savulescu) squared off against critics (Leon Kass, etc.), to a more nuanced, present-day “Second Wave.” In the new debate, Parens noted, enthusiasts have accepted that there are concerns about unintended consequences, coercion, and authenticity, while many critics have conceded that true enhancement, if it were possible, is desirable, and that biochemical treatments are not intrinsically bad. The conversation is now advancing through different debates. One debate he mentioned is whether it is ethically different to use drugs to Maintain Love than to use drugs to Create Love. Why are they different? Because, Parens claims, we want to engage in activities as we really are. There is also general agreement that the use of drugs to create new love is more likely to create false love, and, he claims, “we want to love others who deserve love, and we want to be loved because we deserve it.” He also noted that even David Levy has said, “we don’t want to be in love with robots.” A final observation from Parens: “Nobody wants soma,” because all are concerned about the loss of freedom.
“Seeing a Person as a Body” Joshua Knobe, Cognitive Science & Philosophy, Yale University
Knobe approached moral enhancement from a cognitive-science angle, focusing on the difference in moral cognition when we think of people as embodied versus as minds (using Noam Chomsky and Megan Fox as excellent examples). Thinking of people as bodies - identifying people with their physical attributes rather than their personality attributes - is often seen as less moral. When people form a mental model of others, they may focus on the phenomenology (the body and its feelings) or the intentionality (the mind). People attribute intentionality but not phenomenology (a mind but no feelings) to corporations and robots. He noted that brain studies in both the US and Hong Kong have found that the same parts of the brain activate when thinking about the intentional states of persons and of corporations. In one set of studies, people were shown head shots versus torso shots and asked to speculate on the person’s intentionality and phenomenology. When subjects saw the torso, they thought less about the person’s intentional states and more about their feelings. The more “pornographic” an image is, the less people are seen as rational actors and the more they are seen as feeling persons, more capable of experiencing pleasure and pain. The research suggests there are two distinct processes involved in “theory of mind,” processes that trade off between a focus on the intentions of others and a focus on their feelings.
The next speaker, Pacholczyk, noted that there is ambiguity in the concept of pro-sociality: do we want people to be “nice,” empathetic, cooperative, etc.? What is the role of righteous anger, if nice people don’t get angry? Pacholczyk also discussed how important context is when we discuss morality, and whether we even agree about what “morality” actually means. She posed crucial questions that scrutinized the value of morality itself, such as “why is it rational for us to trust others?” Similarly, she noted that supposedly antisocial emotions like anger are not morally problematic in themselves, because they can be used to express discontent with injustice. Even violent behavior can be justified, if it leads to an outcome that improves the morality of the greater society.
The next speaker, Shook, suggested that moral enhancement will eventually arrive, and in numerous different modalities. Shook isn’t skeptical on technical grounds, but on the grounds that what counts as morality is constantly changing and contested. Ethical systems contain models of how people reason and how they become more moral, models which could be empirically disconfirmed. For instance, Kantian theory bears little relationship to actual moral psychology, while the cognitive demands of utilitarian calculation may be impossible to meet. One response would be to make ethical theory more closely reflect actual moral cognition, although this is not very attractive, since actual moral cognition isn’t very pretty, consistent, or defensible. Another would be to adapt moral theory to the neuroscientific evidence, judging otherwise consistent and defensible ethical systems as more or less consistent with neuroscience; but it seems unlikely that neuroscience will really validate one moral theory over another - they all have neurological evidence to support them - leading instead to more neuro-ethical pluralism. A third option is the creation of truly novel ethical theory that reflects the best behavioral and brain science.
“Enhancing for Virtue? Towards Holistic Moral Enhancement” William Kabasenche, Philosophy, Washington State University
To what extent are virtues constitutive of human flourishing, and if they are, can we use biomedical means to engineer virtues into people? On an Aristotelian account, simply acting morally isn’t enough; we have to actually feel morally, so that our action is an untroubled reflection of our character. Actions without the appropriate emotional state aren’t as moral. Others emphasize that there is also a rational component to Aristotelian virtue: we also need to be acting for the right reasons. None of the proposed modalities of chemical moral enhancement enhances this fully construed model of virtue. Increasing trust with oxytocin, for example, doesn’t provide the discriminating intelligence not to trust when we shouldn’t. But the relationship of blood sugar to self-control illustrates that biological enhancement could complement a more holistic program of moral formation. Authenticity doesn’t help us here, since none of us becomes virtuous “authentically,” that is, through completely autonomous self-determination; we all develop morality in a context of social and parental nurturance and pressure. Specific deficits could be addressed by moral therapies, and everybody might benefit morally from some enhancements. However, we would still need to engage in traditional moral formation.
“Moral Enhancement? Evidence and Challenges” Molly Crockett, Economics, University of Zurich
Crockett is skeptical of the prospect of a “morality pill,” given the complexity of the brain and of neurochemistry. Oxytocin, for instance, increases empathy and generosity, but it also increases feelings of ethnocentrism, envy, gloating, and schadenfreude. Oxytocin is also involved in brain plasticity, memory, stress, arousal, etc. Similarly, serotonin does bias moral decision-making toward being less willing to harm individuals, but it’s not clear whether that bias is more or less moral; utilitarians think we need to be willing to harm some individuals when doing so leads to less harm overall. Serotonin is also involved in many other moods, behaviors, and brain systems. Therapies wouldn’t really be “moral enhancement” if they made people more trusting or empathic regardless of context. We would also need interventions far more targeted to specific cognitions and emotions than the ones we’ve been studying. On the other hand, if pharmacological enhancement were combined with non-pharmacological enhancement, such as meditation or more effective ways of educating people to change their beliefs, it might be more plausible.
“The Illusion of a Technological Moral Fix” Wendell Wallach, Scholar & Lecturer, Interdisciplinary Center for Bioethics, Yale University, and IEET Fellow
Wallach wants to know how we’re going to navigate emerging neuro-technology. He points out that enthusiasts see it as a natural evolution in human progress, whereas opponents regard it as a “devolution.” Amusingly, he began his lecture with a film clip from 2001: A Space Odyssey of a Homo habilis ecstatic with his discovery that he can violently smash things apart by wielding a thigh-bone as a weapon. The point Wallach makes with this image is that technology is always emerging for human use, but we need to determine whether it’s used for creation or destruction. He then noted the tremendous technology that has emerged in the last 200 years, with daily life for human beings radically changed by the industrial and then sanitation revolutions.
Defining himself as “Your Friendly Skeptic,” Wallach decried the fact that specialists occasionally view humans as “flawed machines,” out of frustration with our weak wills and our hard-to-change nature. He questioned the notion that we have a true “moral compass,” and he introduced a drug that hadn’t yet been discussed at the conference: propranolol. This pharmaceutical, he informed us, can affect memory encoding, and studies suggest it can reduce racial bias. There is also the possibility that it can reduce guilt, but Wallach wryly wonders, “is that positive or negative?”
Special thanks to James Hughes, who provided me with many notes and contributed entire sections - perhaps 30% of the content - to this essay.