New papers on Moral Enhancement and Brain-Based Lie Detection

I have a couple of new papers available online. The first looks at the moral freedom objection to moral enhancement. The second tries to rebut an interesting philosophical objection to the use of brain-based lie detection. Both papers are set to appear in edited books in 2018. Details and links to pre-publication versions below (just click on the paper title):

1. Moral Enhancement and Moral Freedom: A Critique of the Little Alex Problem

Abstract: A common objection to moral enhancement is that it would undermine our moral freedom and that this is a bad thing because moral freedom is a great good. Michael Hauskeller has defended this view on a couple of occasions using an arresting thought experiment called the 'Little Alex' problem. In this paper, I reconstruct the argument Hauskeller derives from this thought experiment and subject it to critical scrutiny. I claim that the argument ultimately fails because (a) it assumes that moral freedom is an intrinsic good when, in fact, it is more likely to be an axiological catalyst; and (b) there are reasons to think that moral enhancement does not undermine moral freedom.

2. Brain-based Lie Detection and the Mereological Fallacy: Reasons for Optimism

Abstract: There has been much hype about the implications of contemporary developments in neuroscience for the law. Pardo and Patterson are skeptical of this hype. They argue that a good deal of it stems from simple philosophical errors and conceptual confusions. In the course of this critique, they offer particular objections to the forensic use of brain-based lie detection methods. While agreeing with the authors about the need for skepticism and conceptual clarity, this chapter argues that their skepticism about brain-based lie detection is misplaced, for three reasons. First, their critique focuses too heavily on the problems associated with the more speculative and less empirically grounded fMRI-based methods, and not enough on the more robustly grounded EEG-based methods. Second, when the focus is switched to EEG-based methods, their main philosophical critique of the use of neuroscience in law – the neurolaw mereological fallacy – has much less bite. And third, they neglect to address the merits of brain-based lie detection methods relative to existing methods for inferring what a witness does or does not believe. When these three critiques are factored in, the future looks brighter for this particular use of neuroscience in law.

John Danaher holds a PhD from University College Cork (Ireland) and is currently a lecturer in law at NUI Galway (Ireland). His research interests are eclectic, ranging broadly from philosophy of religion to legal theory, with particular interests in human enhancement and neuroethics. John blogs at http://philosophicaldisquisitions.blogspot.com/. You can follow him on Twitter @JohnDanaher.



