Institute for Ethics and Emerging Technologies

The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States.

Mood Manipulation is not Mind Control

Kyle Munkittrick
By Kyle Munkittrick
Discover: Science Not Fiction

Posted: Apr 8, 2011

Do Androids Dream of Electric Sheep? (Blade Runner’s dead-tree forebear) opens with Deckard arguing with his wife about whether or not to alter her crummy attitude with the “mood organ.” She could, if she so desired, dial her mood so that she was happy and content.

Philip K. Dick worried that the ability to alter our mood would remove the authenticity and immediacy of our emotions. Annalee Newitz at io9 seems to be worried that mood manipulations will enable a form of social control.

The worry comes from recent developments in neuro-pharmaceuticals. Drugs are already on the market that allow for mood manipulation. The Guardian’s Amelia Hill notes that drugs like Prozac and chemicals like oxytocin have the ability to make some people calmer, more empathetic, and more altruistic. Calm, empathetic, and altruistic people are far more likely to act morally than anxious, callous, and selfish people. But does that mean mood manipulation is going to let us force people to be moral? And if it does, is that a good thing? Is it moral to force people to be moral?

The question is a strange one. Force people to be moral – what does that even mean? Let’s cast some clarity onto the issue of moral enhancement:

The field is in its infancy, but “it’s very far from being science fiction”, said Dr Guy Kahane, deputy director of the Oxford Centre for Neuroethics and a Wellcome Trust biomedical ethics award winner.

“Science has ignored the question of moral improvement so far, but it is now becoming a big debate,” he said. “There is already a growing body of research you can describe in these terms. Studies show that certain drugs affect the ways people respond to moral dilemmas by increasing their sense of empathy, group affiliation and by reducing aggression.”

That last sentence is a critical one, so I’m going to disassemble it…


Kyle Munkittrick, IEET Program Director: Envisioning the Future, is a recent graduate of New York University, where he received his Master's in bioethics and critical theory.


I understand that you’re making the distinction between ‘determining’ and ‘enabling’, but I think it is too binary a way to look at it.

A drug that strongly ‘enables’ may as well ‘determine’, because at that point the outcome is more or less the same. Yes there is still the option of acting differently, but the likelihood of that occurring diminishes as strength increases. The ‘likelihood’ here is an important aspect. Personality is best understood (at least crudely) in probabilistic terms, I think, so we should think about mood enhancers this way as well.

It’s still not mind control, but ‘strength’ should play a role in determining moral analysis here.

“Is it moral to force people to be moral?”

In short, no. It violates the non-aggression principle.

I wanted to clarify and expand on my point.

Let’s imagine the following scenario:

Group A develops a “morality pill”. They then decide that it would be prudent to use it on Group B in order to improve their morality (let’s say Group B is any commonly accepted group of immoral people - these could be murderers, politicians, bankers, whatever).

But first, Group A decides to administer this “morality pill” to themselves, in order to ensure that the choice they are making is in fact a moral one.

What happens if in increasing their own morality through the treatment, they discover that the decision to “force” another group to undergo this treatment is in fact immoral?

In other words, I am not in principle opposed to the creation of these drugs, I think in fact they have a lot of promise, what I am opposed to is the use of them in any way that violates another person’s autonomy.

Perhaps I am someone who doesn’t need these pills.

On the other hand, there is another interesting alternative:

What if Group A treats themselves first, then discovers that it is in fact moral to force Group B to undergo treatment?

If we accept that the treatment works (setting aside the issue of whether or not other people even believe that it is in fact effective), and Group A determines that it is in fact moral to make the choice to require others to do it, then where do we stand as onlookers who are part of neither Group A nor Group B?


Do you have a response to Nikki Olson and iPan? I’d love to read your responses if you do. I’m sure the other readers would as well. Engaging the commenters in dialogue is an important part of defending one’s own work and keeping it interesting!


@Nikki: There really is no binary – none of the drugs I suggest determine a person’s actions at all. They just create conditions for a person to act in a certain way, and those conditions are created and enabled to various degrees. Think about things you consume for wakefulness: sugar gives you a quick pep, caffeine lasts longer but causes jitters, an energy drink is just more of the same, and then chemicals like some amphetamines and Provigil use a different chemical pathway to suppress the need to sleep. Yet none of those drugs force you to be productive or active; they simply allow you to overcome fatigue if you want to be productive. Wakefulness is a condition of productivity, but wakefulness does not necessarily cause productivity. That’s the difference between determine and enable.

@iPan: An interesting question, and it relies on the implicit assumption that the “morality pill” improves one’s ability to use moral reasoning. That is not what is argued with mood manipulation. Mood manipulators make it more likely that you will be altruistic, non-violent, and/or beneficent to those around you, because you are in a happy, warm, and empathetic mood. Mood manipulators do not improve your ability to determine, via logic and rational contemplation, whether or not something is actually moral. So, individuals in Group A, Group B, and the onlooker all have (all other things being equal) the same ability to determine whether or not it is moral to force mood enhancement on to Group B.

Your scenario is a guise for the question: “But what if the pill makes morality something we don’t like, who are we to critique that new morality if we aren’t enhanced?” The answer is that the pill doesn’t bring a person closer to moral truth, but makes it easier to take moral action. A person feeling empathetic and beneficent is much less likely to force anyone else to do anything.
