Moral Enhancement and Superficiality: Compassion-Pills (pt2)
John Danaher   Jul 15, 2013   Ethical Technology  

As you may have observed, I’m repeatedly drawn to the enhancement debate. I can’t exactly say why. Prima facie, it doesn’t seem particularly interesting (from an intellectual perspective): after all, who could object to “enhancement”? But, of course, it’s more complicated than that. Indeed, one of the alluring aspects of the debate has to do with the terminology in which it is couched.

(Part 1)

This terminology is quite intricate, requiring a good deal of precision and care to master, but also quite deceptive, requiring a good deal of scepticism and critical reflection to overcome. One of the ways in which it is deceptive is that it employs novel, sometimes misleading, vocabulary for different sets of arguments. These distinct vocabularies can mask commonalities between the arguments and thereby hinder sound critical engagement.

The superficiality concern about moral enhancement, which is the subject matter of this series of posts, provides a good example of this phenomenon. Ostensibly, the concern is that improving our moral behaviour -- or rather increasing our moral "conformity" -- through the use of enhancement technologies leads to a superficial, less morally worthy, kind of behaviour when compared to other methods of moral improvement. While this argument comes with its own vocabulary (outlined in detail in part one), the vocabulary is put to work to describe and defend a classic, all-too-familiar type of moral argument.

I speak here of the "means-end" critique. This critique forms the basis of the consequentialism-deontology debate, with the former view holding that consequences are what matter when it comes to the moral evaluation of conduct (i.e. ends matter most), and the latter holding that intrinsic properties of the conduct are what matter (i.e. means matter most). It also forms the basis of the authenticity objections to enhancement in other domains. Thus, one often hears the charge that achievements or victories won through the use of performance enhancing drugs are "inauthentic", and thereby worthy of little, if any, of our respect. The commonalities between these kinds of critiques fascinate me because they crop up so often.

The means-end critique at the heart of the moral worth objection is conspicuous in the argument outlined at the end of the last post. As we saw, that argument (the “moral worth” argument) holds that: (a) if moral conformity is achieved through certain means (namely: if it bypasses our deliberative and reflective cognitive processes) then it is less worthy of moral praise than if it is achieved through the active use of our deliberative and reflective processes; and (b) moral enhancement technologies do indeed bypass these processes. In other words, it holds that the means matter when it comes to the moral worth of our conduct.

What we need to find out now is whether (a) and (b) are true. We can do that by formalising and evaluating four specific arguments, all of which are addressed by Douglas in his article. Today, we start things off by considering the motive of duty argument, and the causal history argument.

1. The Motive of Duty Argument
The classic Kantian view is that moral action must proceed from the right motives. In order for an agent to act "morally" he must act out of a motive of duty. In other words, when he donates money to charity, or when he donates blood to the sick, he must do so out of the belief that such conduct is morally required. As Kant himself might put it, the act cannot just conform to the moral law, it must be done for the sake of the moral law too.

These Kantian thoughts provide us with the basis for our first argument, which runs something like this (the numbering follows on from part one):
 

  • (5) In order for action to warrant moral praise, it must be done from the motive of duty (i.e. the agent must conform with the demands of morality for the sake of morality).
  • (6) Morally conforming actions produced through moral enhancement technologies are not performed from the motive of duty.
  • (7) Therefore, morally conforming actions produced through moral enhancement technologies do not warrant moral praise.


The argument is valid, and we can probably grant the Kantian the first premise (5). The problem rests with premise (6). Why should we think that moral enhancement technologies prevent us from acting out of a motive of duty?

The reason goes back to our discussion of brute moral conformity in part one. As you'll recall, an agent achieves brute moral conformity whenever their actions conform with the available moral reasons, but they do not deliberate or reflect upon those reasons. The claim is that moral enhancement technologies will work largely by achieving brute moral conformity. Imagine the following case (from Douglas's article):
 

Andrew the Racist: Andrew is a doctor working in a multi-racial community. Unfortunately, Andrew was raised in a racist household and has a deep-seated emotional aversion to treating non-white patients. Andrew knows that this aversion is immoral, but can't seem to rid himself of it. That is, until he learns of a revolutionary new technology. By undergoing brain-scanning, doctors can isolate the "racist circuit" in his brain, and disrupt it through the use of transcranial magnetic stimulation. This will rid him of his racist prejudice, and allow him to conform with the demands of morality. He uses the technology and overcomes his aversion.


The suggestion could be that disrupting the “racist circuit” is a direct manipulation of Andrew’s affective states, not a change in his moral reasoning; that removing the emotional prejudice may make him more willing to perform the moral act, but it doesn’t change the reasons from which he acts. In other words, it doesn’t suddenly make him act from the motive of duty; it just removes an affective barrier he once had.

But why should we think that this wouldn’t allow him to act for morality’s sake? True, if he just wants to treat non-white patients for self-serving reasons (e.g. more money), then removing the affective barrier will not suddenly make him act from the motive of duty. But if he already knows what morality demands, and would really like to act from the motive of duty, then removing the affective barrier could be exactly what he needs to meet the Kantian conditions for morally praiseworthy action. Indeed, we stipulated this in our description of the case. We said that Andrew knew his emotional aversion to non-white patients was immoral, and that he wanted to overcome it. So removing the emotional barrier actually freed him to act from the motive of duty.

More generally, moral enhancement technologies that directly target the emotions need not lead to automatic, unreflective or unreasoned behaviour. They may simply remove the barriers that otherwise prevent deliberative, reflective and reasoned behaviour. Depending on the moral character of the agent, this could in turn allow them to act from a motive of duty. This rebuts premise (6):

  • (8) Moral enhancement technologies could work by eliminating the barriers that prevent people from acting out of the motive of duty, not by prompting unreasoned morally conforming behaviour.

 

To be sure, this isn’t a ringing endorsement of moral enhancement. But it does two important things. First, it defuses the motive of duty argument to at least some extent. And second, it gives us a reason to prefer enhancement technologies that work in this way over ones that do prompt unreasoned moral conformity. That could be a guiding principle for the enhancement project.

2. The Causal History Argument
The motive of duty argument failed because the proximate cause of an agent’s behaviour could be the motive of duty, even if the distal cause is something else. This was its fatal weakness. A stronger argument would put further restrictions on the causal history of an agent’s behaviour. It would claim that the more distal stretches of that causal history must exemplify properties that are disrupted by the use of moral enhancement technologies.

How might this work? Two suggestions here. First, one could argue that an action warrants less moral praise if the motivating reasons do not originate in the agent. Second, one could argue that an action warrants less moral praise if it is not deliberative “all the way down”. The second condition makes more sense once we critically evaluate the first. Before we do that, however, let’s formalise the argument:
 

  • (9) In order for an action to warrant moral praise (or, in order for it to warrant a greater degree of moral praise) it must either: (a) be guided by motivating reasons that originate in the agent; or (b) be guided by deliberative processes all the way down.
  • (10) Morally conforming actions produced through the use of enhancement technologies are neither: (a) guided by motivating reasons that originate in the agent; nor (b) guided by processes that are deliberative all the way down.
  • (11) Therefore, morally conforming actions produced through the use of enhancement technologies warrant less or no moral praise.


Let’s take both suggested conditions in turn. (Note: this argument, and its analysis, has several affinities with the tracing and manipulation arguments found in the philosophy of responsibility — as I said at the outset, always bear in mind the possible commonalities between different sets of arguments.)

The first problem with condition (a) is that it is not clear that motivating reasons ever truly originate in the agent. It all depends on what we mean by “origination”. Certainly, if determinism is true, then ultimate origination is not a possibility: the agent cannot be a prime mover of his or her actions. But if we assume a more relaxed, compatibilist account of origination, then it’s not clear that enhancement is that much of a problem. Take Andrew the Racist once again. He decides that he wants to rid himself of his emotional aversion to non-white people; he undergoes a course of TMS; this rids him of his aversion; and this allows him to conform with the demands of morality. Why can’t we say that the motivating reasons originated within him? After all, he made the original decision to make use of the enhancement. At best, this argument gives us reason to avoid the coerced use of moral enhancement, but we have reason to avoid coercion anyway.
 

  • (12) If an agent makes a decision to use moral enhancement technology in order to increase their moral conformity, then the motivating reasons for their conforming actions do originate in them.


That brings us to condition (b). This condition avoids the objection just mounted by stipulating that moral decision-making must be deliberative all the way down. In other words, it says that you shouldn’t have technological interventions in the midst of the causal-psychological history of an act. Such interventions would bypass deliberative processes for at least one link in the causal chain and thereby attract less moral praise.

But is this a plausible condition of moral praise? Is it plausible to say that Andrew’s conduct attracts less moral praise than it might otherwise have done because he did not use purely deliberative methods of moral improvement all the way down? Maybe. Maybe we could say that Andrew’s behaviour is less praiseworthy because he uses a quick fix for achieving moral conformity. The deliberative pathway may be more arduous, but it is all the more praiseworthy as a result. This brings us to the moral effort argument, which is worth discussing at some length. We’ll do that in the next post, and wrap up the analysis of the moral worth argument by considering a fourth argument as well: the unreliability argument. Stay tuned.

(This post is part of series on Thomas Douglas’s article “Enhancing Moral Conformity and Enhancing Moral Worth”)

John Danaher holds a PhD from University College Cork (Ireland) and is currently a lecturer in law at NUI Galway (Ireland). His research interests are eclectic, ranging broadly from philosophy of religion to legal theory, with particular interests in human enhancement and neuroethics. John blogs at http://philosophicaldisquisitions.blogspot.com/. You can follow him on twitter @JohnDanaher.


