Is human enhancement disenchanting? (Part Two)


By John Danaher
Ethical Technology

Posted: Feb 14, 2013

This is the second part in a brief series looking at whether human enhancement — understood as the use of scientific knowledge and technology to improve the human condition — would rob our lives of meaning and value. The focus is on David Owens’s article “Disenchantment”. The goal is to clarify the arguments presented by Owens, and to subject them to some critical scrutiny.

(Part One)

I started the previous post by developing a basic argument in favour of enhancement. This was the Desire Fulfillment Argument (DFA). In essence, it held that enhancement is welcome because it helps us to get what we want. It does so by removing certain obstacles to desire fulfillment. After formalising this argument, I presented Owens’s main critique of it. Owens presented his critique in the form of a thought experiment — called the Pharmacy of the Future thought experiment — which depicted a hypothetical future in which every human desire, emotion and mood is manipulable through psychopharmacological drugs.

The suggestion at the end of this thought experiment was that the future being depicted was not one to be welcomed; indeed, that it would deprive us of the very things needed in order to live a fulfilling life. The question before us now is whether this is the right conclusion. I try to answer that question in the remainder of this post.

I do so in two main parts. First, I formalise the argument underlying the Pharmacy of the Future thought experiment. As you shall see, I present two different versions of the argument, the first focusing on the “confusing” nature of enhancement, the second on its ability to eliminate “fixed points of reference”. Though they are closely related in Owens’s analysis, they are also importantly different. Second, following this, I will offer some critical comments about the two versions of the argument.

1. The Disenchantment Argument
Since Owens’s article is officially titled “Disenchantment”, I have decided to call the formal version of his anti-enhancement argument “The Disenchantment Argument”. This title is apt to raise questions, however, because there is no serious attempt to offer a formal definition of “disenchantment” in Owens’s article, nor is there an attempt to identify the necessary and sufficient conditions for experiencing disenchantment. Clearly, the term is intended to refer to the reduction or elimination of valuableness, worthwhileness or meaningfulness in life, but beyond that further detail is lacking. Nevertheless, the title seems appropriate (despite its vagueness), and we can get some sense of what disenchantment (in the specific context of Owens’s article) means by reading between the lines.

This process begins by trying to reconstruct the logic of the argument underlying the Pharmacy of the Future thought experiment. Owens says certain things by way of summing up the significance of his thought experiment, and I reproduce some of the key passages here (page references are to the online version of the article, not the published version). From these passages we should be able to reconstruct the argument.

We start with this comment:
 

You thought you had made a decision and that science would enable you to implement that decision. Instead, science seems to be putting obstacles in the way of your taking any decision at all. The pharmacist is giving you too much choice, more choice than you can think of any grounds for making. By insisting that you take nothing as given, that you regard every aspect of your character as mutable, as subject to your will, the pharmacist puts you in an irresolvable quandary. You can’t handle such total control. (p. 13)


In this passage, Owens is decrying the confusion stoked by enhancement technologies: if we can adjust every aspect of our characters, we will become confused and unsure about what to do. The effect will be disorienting, not desire-fulfilling. Underlying this is the notion that the removal of fixed reference points is, in some sense, a bad thing. This idea is more clearly expressed in a later passage:
 

My worry is not that a successful science of the human mind will deprive us of the ability to take decisions by subjecting us to the immutable facts of our nature and situation [determinism], but rather that it threatens to remove the fixed points that are needed to make decision making possible at all. I feel not constrained but vertiginous. In a purely scientific picture of man, there is no obstacle to indefinite transformation of both self and environment. (p. 14)


Here the concern is not primarily with the possibility of confusion, but with the removal of standards or fixed points of reference. Earlier in the article (p. 13 again), Owens links this directly to the fixity of character. In other words, he says the problem with the science of enhancement is that by making everything manipulable, it will gnaw away at, and gradually erode, your sense of self. And since your sense of self is a fixed point of reference, one which you refer to in making choices about your life, anything that undermines it can’t be good for you. This is for the very simple reason that it removes “you” from the picture.

Although the concepts and concerns present in both of the quoted passages are very similar, they do suggest to me two distinct versions of the disenchantment argument. The first works from the claim that enhancement sows the seeds of confusion, offering us a bewildering array of choice. The second works from the claim that enhancement removes the fixed points that are necessary for meaningful decision-making. Hopefully, my reasons for treating these as distinct arguments will become clear in due course. For now, let me present more formal versions of the arguments, starting with the confusion version:

  • (6) In order for us to derive meaning and value from our decisions, we must actually know what it is we want to achieve from those decisions; we must not be confused about what we want.
  • (7) Enhancement renders every aspect of our character (our desires, moods, abilities, etc.) contingent and manipulable.
  • (8) If every aspect of our character is contingent and manipulable, then we will no longer know what we want to achieve from our decisions, i.e. we will be confused.
  • (9) Therefore, enhancement confuses us. (7, 8)
  • (10) Therefore, enhancement robs us of meaning and value. (6, 9)

The second version of the argument follows this exact pattern, merely replacing premises (6) and (8) with the following analogues (and then modifying (9) accordingly):

  • (6*) In order for us to derive meaning and value from our decisions, we must have some fixed points of reference, i.e. standards that we hold constant in deciding what to do.
  • (8*) If every aspect of our character is contingent and manipulable, then we will no longer have fixed points of reference which act as standards to guide our decision making.
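
As a sanity check on validity (not soundness), the confusion version can be written out as a tiny derivation in Lean. The proposition names are my own labels, not Owens’s, and premise (6) appears in contrapositive form; the starred version has exactly the same logical shape:

```lean
-- Propositional sketch of the confusion version of the
-- Disenchantment Argument. Labels are my own, not Owens's.
variable (Manipulable Confused Meaningful : Prop)

example
    (p6 : Confused → ¬Meaningful)   -- (6), contrapositive form
    (p7 : Manipulable)              -- (7)
    (p8 : Manipulable → Confused)   -- (8)
    : ¬Meaningful :=                -- (10)
  p6 (p8 p7)                       -- via (9): p8 p7 : Confused
```

The derivation shows only that the conclusion follows if premises (6)–(8) are granted; the critical work below concerns whether they should be granted.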

These formalisations are my own interpolation into the text of Owens’s article, and those who have read the article may think I’m being unfair to him. If so, I would be happy to hear suggestions for changing the argument. Nevertheless, I think I’ve been reasonably generous to what he has said. Contrariwise, those who have not read the article, and who look now at my formalisations, may think the arguments are pretty odd (or weak). Certainly, I can imagine a flood of objections occurring to someone encountering them for the first time. In the final section of this post, I want to sketch out some of the possible objections one could mount. My discussion is a preliminary exploration of the counter-arguments, part of an ongoing project analysing the arguments of those who think enhancement would make life less worthwhile, and so I would welcome suggestions on other possible responses to these arguments (or, indeed, defences of these arguments).

2. General Comments and Criticisms
Allow me to start by considering the one premise that is common to both versions of the argument. This is premise (7). You may think it a vast overstatement to claim that “every” aspect of our characters will become manipulable, but it does seem to fairly capture what Owens actually says about the prospects for a completed science and technology of human enhancement. You can see this in the above-quoted passages when he talks about “every aspect of your character” being regarded as mutable, and when he says that “in a purely scientific picture of man, there is no obstacle to indefinite transformation of both self and environment.”

Fair though it may be, it offers an obvious point of contestation. Certainly, if our concern is with current enhancement technologies, or even with enhancement technologies that are likely within the next 10-20 years, I think the premise is probably false. Not every aspect of our characters is mutable with current technologies — far from it — and may not be anytime soon. There could well be hard limits to manipulability that we will run into in the future.

Interestingly, though premise (7) is a purely factual claim, I don’t think objecting to it only undermines the factual basis of Owens’s argument. I think it strikes at its principled and normative core. The thing is, Owens himself concedes that some manipulability of character and desire is acceptable, perhaps even welcome. For instance, in his article he discusses cases of people trying to manipulate their desires in order to overcome weakness of the will or to change habits. In these cases, Owens seems to suggest that the manipulation doesn’t have a disenchanting effect. But if that is true, it seems to set up an easy analogy for the defender of enhancement: if manipulation is desirable in those cases, why isn’t it desirable through the use of more advanced enhancement technologies? In order to block this analogy, Owens needs to claim that the kind of manipulability that would be involved with such technologies is radically distinct from the kind currently used in overcoming akrasia. If he can’t make that claim, then his argument loses a good deal of its attraction.

But let’s set objections to premise (7) to one side for now, and focus on the two different versions of the argument, starting with the confusion version. First, we need to ask ourselves if the motivating premise is true. In other words, is it true that being confused somehow deprives us of meaning and value? Well, once again, we run into the question of how radical or extreme the confusion is likely to be. I think it’s pretty clear that occasional confusion is tolerable — unless we’re claiming that life as it is currently lived is meaningless and devoid of value, which doesn’t seem to be Owens’s claim, and which would get us into a different set of arguments. So only if there is radical or extreme confusion will this motivating premise hold true.

But even then there is the bigger issue of whether premise (8) is true. I have to confess, the causal connection between enhancement and radical confusion is, to me at any rate, somewhat nebulous. In the Pharmacy of the Future case, the seeds of confusion are sown by the doctor presenting the patient with more and more choices about how to resolve their romantic difficulties. But this seems psychologically implausible to me. If one entered the doctor’s clinic with the desire to maintain one’s relationship, the mere fact that there were other drugs that could resolve the problem in a different way wouldn’t necessarily lead to confusion. The primary motivating desire would allow you to automatically filter out or ignore certain options. A different example might make this objection more compelling. If someone told me that there was a drug that would give me a tremendous desire for chocolate milk (or any other substance), I wouldn’t suddenly become confused. To create confusion, the choices would have to work with some conflicting set of underlying desires, but those won’t always (or even very often) be present. Many potential manipulations of our character will simply seem undesirable or uninteresting, and so we won’t be confused by them.

Responding to the other version of the disenchantment argument would follow along similar lines. Again, the causal connection between radical enhancement and the actual elimination of fixed reference points would be questionable on grounds of human psychology. But there is a stronger point to be made in relation to that argument as well. For even if enhancement did remove the fixed reference points of character, there may remain other fixed reference points that compensate for this effect. I think in particular of ethical or moral reference points, which would seem to be largely unaffected by enhancement. Indeed, one could make an argument in favour of enhancement on this very ground. Oftentimes, what prevents us from achieving the moral ideal in our behaviour are the fixed vices within our characters. If enhancement could help us to overcome those fixed vices, then where is the problem?

Now, I concede I may understate the likely effect of radical enhancement. It could be that enhancement would have an effect on ethical standards too (e.g. by making people instantly forget painful experiences). But I still think this is a promising line of response.

3. Conclusion
To sum up, many people are ambivalent about the prospect of human enhancement. In his article, Owens traces his own ambivalence to the potentially disenchanting effects of enhancement, which he conceives of in terms of confusion and the removal of the fixed points of reference that are necessary for decision-making. Owens presents his case through an imaginative thought experiment involving a hypothetical “Pharmacy of the Future”, which allows us to manipulate our desires and moods through a variety of drugs.

Interesting though it is, I have suggested in this post that there are certain problems with Owens’s underlying argument. For one thing, he may vastly overstate the likely effect of human enhancement. And for another, whatever the merits of his motivating premises, the alleged causal links between radical enhancement and disenchantment can be called into question.


John Danaher holds a PhD from University College Cork (Ireland) and is currently a lecturer in law at NUI Galway (Ireland). His research interests are eclectic, ranging broadly from philosophy of religion to legal theory, with particular interests in human enhancement and neuroethics. John blogs at http://philosophicaldisquisitions.blogspot.com/. You can follow him on twitter @JohnDanaher.

The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States.

Contact: Executive Director, Dr. James J. Hughes,
56 Daleville School Rd., Willington CT 06279 USA 
Email: director @ ieet.org     phone: 860-297-2376