
Support the IEET

The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States. Please give as you are able, and help support our work for a brighter future.



The Ethics of Robot Sex

John Danaher
Philosophical Disquisitions

Posted: Oct 14, 2013

Human beings have long performed sexual acts with artifacts. Ancient religious rituals oftentimes involved the performance of sexual acts with statues, and down through the ages a vast array of devices for sexual stimulation and gratification have been created. Little wonder then that a perennial goal among roboticists and AI experts has been the creation of sex robots (“sexbots”): robots from whom we can receive sexual gratification, and with whom we may even be able to achieve an emotional connection.

But is this something we should welcome? Or is it deeply worrying? David Levy has been at the forefront of research on this question, most notably in his book Love and Sex with Robots: The Evolution of Human-Robot Relationships. In this post, I want to take a look at some of the arguments he makes. Although I will use his book as a reference point, I will structure my discussion more around his article “The Ethics of Robot Prostitutes”, which appears in the edited collection Robot Ethics.

Levy makes two arguments in this piece. The first is a predictive argument, which holds that people will, as a matter of fact, have sex with robots in the future. The second is an ethical argument, which holds that there is nothing deeply ethically objectionable about sex with robots.

For what it’s worth, I tend to agree with Levy on the first argument, if only because people have already demonstrated their willingness to have sex with artifacts. On the second issue, I think Levy’s discussion is not as careful as it could be and I hope to rectify that. I think one needs to distinguish between the intrinsic and extrinsic ethical aspects of robot sex and treat them separately. This is because while there is probably nothing intrinsically wrong with having sex with robots, it may be extrinsically problematic. That said, I’m fairly agnostic about this issue because it requires us to predict the likely effects of sexbot usage and I’m not sure that we are well-placed to do that.

I’ll divide my discussion into four parts. First, I’ll look at Levy’s predictive argument, suggesting one plausible criticism of it. Second, I’ll look at the intrinsic ethics of robot sex. Third, I’ll look at the extrinsic ethics of robot sex. Finally, and largely in jest, I’ll look at the most plausible ethical objection to robot sex: the Futurama argument.

1. Will people have sex with robots?
Levy’s predictive argument is based on two main limbs. The first limb aims to show that the primary motivations for having sex with other human beings will transfer over to robots. The second limb aims to show, more narrowly, that the primary motivations for having sex with prostitutes will transfer over to robots.

You may well wonder why the second limb is needed. Surely if the reasons for having sex with humans will transfer over to robots, we have a decent predictive argument? But I think it is relatively easy to see what the problem is. Even if it is true that the primary motivations for having sex with other human beings will transfer over to robots, there is still the question of why people would opt for robots in preference to (or in addition to) ordinary human partners. The answer to this question can partly be resolved by looking at why people opt for prostitutes in preference to (or in addition to) “ordinary” human partners.

So let’s look at the first limb of the argument: why do people have sex with other humans? Levy discusses this issue at length in his book (but not in the article). He mentions three studies: the first from Barbara Leigh, the second from Valerie Hoffman and Ralph Bolton, and the third from Deborah Davis and colleagues. For some reason he doesn’t provide references for these studies; luckily, I was able to track them down and have linked to them in the text.

In any event, there is nothing especially dramatic about the findings from these studies. Each of them found that pleasure and emotional connectivity were the primary motivations for having sex (and, interestingly, that procreation ranked pretty low). Some of the other stated reasons could be significant in different contexts, but those two are the relevant ones for now because Levy’s argument, as you might guess, is simply that robot sex could satisfy both of them. Obviously enough, having sex with a robot could be pleasurable and it could help people obtain gratification and sexual release. Emotional connectivity is a trickier prospect, but Levy claims that sophisticated robots could respond with at least the facade of emotionality, and, furthermore, that people do become emotionally attached to robots (the first half of his book is pretty good on this point).

That brings us to the second limb of the argument: why would people have sex with robots in preference to (or in addition to) having sex with human beings? The evidence on why people have sex with prostitutes is thought to be revealing. Now, there is quite a bit of evidence on this topic (though most of it focuses on male-female interactions), so I’ll just list some of the relevant sources first before going into the actual reasons. Some of the sources are: McKeganey and Barnard 1996; Xantidis and McCabe, 2000; Monto 2001; Bernstein 2007; and Sanders 2008.

Moving onto the actual reasons, Levy breaks these down into three main categories (with a fourth being hinted at in his book): (i) the myth of mutuality - i.e. people have sex with prostitutes in order to secure some kind of emotional connection; (ii) variety - i.e. people have sex with prostitutes because they are willing to engage in sex acts that “ordinary” human partners are not (though this can change as sexual norms among the general population change); (iii) lack of complications and constraints; and (iv) lack of sexual success in “normal” life.

Once again, Levy’s claim is that all four reasons could be satisfied by sexbots. Indeed, one thing that sexbots may be better at than human prostitutes is cultivating the “fake” emotional connection. After all, robots could be programmed to dote upon, or even to fall in love with, their owners, thus creating a connection that is more substantive than that found in typical prostitute-client relationships. Contrariwise, they could be programmed not to, if that is what the owner would prefer (lack of complications and constraints). Furthermore, variety and willingness to have sex with those who are otherwise sexually unsuccessful should not be a problem for robots.

That gives us Levy’s predictive argument:

  • (1) If the motivations for having sex with ordinary human partners and prostitutes would carry over to sexbots, then people are highly likely to have sex with robots.
  • (2) The motivations for having sex with ordinary human partners and prostitutes would carry over to sexbots.
  • (3) Therefore, people are highly likely to have sex with robots.
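Formally the argument is a simple modus ponens, so its validity is not in doubt; everything turns on the truth of premise (2). As an illustrative sketch (the proposition names are mine, not Levy's), the form can be checked in Lean:

```lean
-- Levy's predictive argument as modus ponens.
-- P : the motivations for sex with human partners and prostitutes
--     would carry over to sexbots
-- Q : people are highly likely to have sex with robots
example (P Q : Prop) (h1 : P → Q) (h2 : P) : Q := h1 h2
```

Since the inference itself is trivially valid, any criticism has to target premise (2), which is exactly where the uncanny-valley worry below applies pressure.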

As I said in the introduction, I’m inclined to agree with this predictive argument, partly for the reasons given by Levy but also partly because people have already demonstrated a willingness to have sex with artifacts. Still, there is at least one countervailing consideration to bear in mind: “the uncanny valley” effect. First mentioned by Masahiro Mori, the “uncanny valley” is the name given to the apparent revulsion people experience when they see an object that is almost, but not quite, human-like in appearance and function. A now-classic illustration of the phenomenon comes from Robert Zemeckis’s 2004 film The Polar Express. In that film, human actors were used to create computer simulations that were very close to being perfectly human-like in appearance. The result was that many viewers were uneasy about, and even slightly horrified by, the characters on screen. Having seen bits of the film myself (but never the whole thing) I can report a similar feeling of eeriness. (Note: I took this objection from Blay Whitby's article, which also appears in the Robot Ethics collection).

If the uncanny valley is a robust phenomenon — and it’s not at all clear to me that it is — then it might block the path to robot sex. The claim would be that as robots get more and more human-like in appearance and function, a point will be reached at which humans begin to experience severe revulsion toward them. This should make them less willing to have sex with them. It has to be noted, however, that the uncanny valley is just that: a dip in likeability before more complete human-likeness is reached. It may be little more than a speedbump on the road to rampant robot sex.
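The shape Mori describes — affinity rising with human-likeness, dipping sharply just short of full human-likeness, then recovering — can be caricatured with a toy function. This is purely illustrative: the functional form and every parameter below are invented for the sketch, not drawn from Mori's paper or any empirical data.

```python
import math

def likeability(h, dip_depth=0.6, dip_center=0.8, dip_width=0.005):
    """Toy model of the uncanny valley (all parameters are assumptions).

    h is human-likeness in [0, 1]. Affinity grows roughly linearly
    with h, minus a Gaussian 'valley' centred just short of full
    human-likeness.
    """
    return h - dip_depth * math.exp(-((h - dip_center) ** 2) / dip_width)

# A near-human robot (h = 0.8) is liked less than a clearly mechanical
# one (h = 0.5), but a near-perfect replica (h = 1.0) recovers.
for h in (0.3, 0.5, 0.8, 1.0):
    print(f"human-likeness {h:.1f} -> likeability {likeability(h):.3f}")
```

On this caricature the valley really is just a dip: affinity at full human-likeness exceeds affinity everywhere short of it, which is the "speedbump" point made below.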

One final point that is worth mentioning is that one of the factors that might hasten the development and use of sexbots is a prohibitive attitude toward human prostitution. Levy gives the example of South Korean hotels that rent out “love dolls” to their patrons, largely because human sex work is prohibited in that country. This suggests that robot sex might be viewed as a viable alternative to human sex work in countries that prohibit human prostitution. That said, I’m not sure how credible this is given that human prostitution thrives in many countries with prohibitive laws.

2. Intrinsic Ethical Objections to Robot Sex
If we accept that increasingly sophisticated sexbots will be developed, and that people are likely to avail of them, then the question turns to the ethical. Is this a trend that should be welcomed, prevented or treated with indifference? Habitually, I tend toward indifference on questions like this, but let’s see if there are any objections to robot sex that ought to shake me out of that indifference.

Let’s start with intrinsic objections to robot sex. These are objections that claim that there is something inherently wrong about having sex with a human artifact, no matter how sophisticated it may be. I find this kind of objection hard to credit. Unless one adopts an extreme, Catholic natural-law-style view of permissible sex — in which only procreative or procreative-type sexual acts are permissible — there would seem to be little to object to in the notion of robot sex.

Still, there are better and worse arguments that can be made on this issue. In my opinion, Levy makes a bad one (in his article) by drawing an analogy between vibrators and sexbots (p. 227). Roughly, his argument is:

  • (4) Using vibrators to achieve orgasm is permissible.
  • (5) Using a sexbot to achieve orgasm is similar in all important respects to using a vibrator.
  • (6) Therefore, using a sexbot to achieve orgasm is permissible.

The problem here is that premise (5) may or may not be true. It depends on the degree of sophistication of the sexbot. If the sexbot is essentially a lifeless artifact with no autonomy, then the argument would be reasonably compelling. This may well be the position we are at with contemporary sex dolls and the like. But if the sexbot has some sophisticated AI, and some semblance of autonomy and personhood, the situation is rather different. In that case the conditions under which sex with robots is permissible will tend to become equivalent to the conditions under which sex with ordinary human beings is permissible.

Levy seems to acknowledge this point later in his article when he discusses the possibility of artificially conscious robots. But I would say that any discussion of “consciousness” is a distraction here. Whether robots should be treated with the same ethical respect as humans does not depend on whether or not they are conscious (after all, how could we even know this?). Rather, it depends on whether or not they display the external evidential marks of personhood. If they do, we should err on the side of caution and treat them equivalently to human beings (or so I believe).

One slight hiccup to bear in mind is Petersen’s “Robot Slave” argument, which I have covered on the blog before. If Petersen is right then it is permissible to create robot slaves and, naturally, this would cover robot sex slaves. I pass no judgment on the success of that argument here though.

3. Extrinsic Objections to Robot Sex
If there is nothing intrinsically wrong with having sex with robots, then we must consider the possibility that there is something extrinsically wrong about it. Levy mentions three possible extrinsic concerns. Let’s go through them briefly.

The first extrinsic concern that Levy mentions has to do with the stigma that might be experienced by the users of sexbots. Of course, this isn’t really an ethical concern. Whether or not users deserve to be stigmatised is something that is driven by ethical conclusions, not something that itself drives us toward ethical conclusions. At best, the likelihood of stigma makes engaging in robot sex prudentially unwise, which it may well be, but it doesn’t render it ethically impermissible.

The second extrinsic concern has to do with how ordinary human partners of sexbot users might feel. Will the use of sexbots be viewed as akin to infidelity? Will it harm ordinary human relationships? There are several things to be said on this matter. First, this will only be relevant to a certain sub-group of sexbot users, i.e. those with ordinary human partners. Second, the attitude toward sexbot use is likely to vary considerably across relationships. As Levy points out, some partners might feel threatened by it, but others may embrace it as it could free them up from sexual demands, or could be used to “spice things up”. In short, whether or not it is wrong to use a sexbot will have to be determined within the particular context of an actual relationship, and not in the abstract.

The third extrinsic concern has to do with the effect of sexbots on human sex workers. Is it possible that human sex workers will be rendered unemployed by the ready-availability of sexbots? Would this be a good or bad thing? I find this to be the most interesting extrinsic ethical concern, so much so that I’ve decided to write a paper on technological unemployment and sex work. Levy says relatively little about it in his article. He accepts that it might happen, but says that it might be good (if we think human sex work is morally objectionable), or bad (since human sex workers are a vulnerable sector of the population who might be rendered more vulnerable by technological unemployment). My own feeling is that sex work may be less vulnerable to technological unemployment than other industries, not least because technological unemployment in other industries may drive people into sex work. That could definitely have significant social and ethical implications, ones which I hope to explore in the paper I am writing.

So where does that leave us? Well, I think it leaves us with a set of objections to robot sex that are not particularly persuasive, partly because they are dependent on contingencies that cannot be evaluated in the abstract, and partly because they rely on difficult-to-make predictive arguments. (Edit, added on the 12/10/13: there are other extrinsic concerns one could raise. For example, one could claim that people are likely to be violent or extremely perverse in their relations with sexbots, and this might encourage them to be violent and perverse with human beings. But, again, this relies on dubious assumptions about how people are likely to behave with sexbots, and, in any event, could cut both ways: maybe having non-human robots upon whom we can act out our disturbing sexual fantasies will make things better for humans).

There is one other objection we have yet to consider…

4. The Most Plausible Argument Against Sex with Robots?
Futurama fans will be aware of the ethical perils of robot sex. In the third-season episode “I Dated a Robot”, Fry falls in love with a robot with the holographic personality and image of Lucy Liu. His colleagues and friends warn him that he is doing a terrible thing: one shouldn’t have an emotional and sexual relationship with a robot. But Fry isn’t aware of the perils, having only recently been transported from the 20th century to the 31st.

To educate him about the problem, they play him an old public health film: “Don’t Date Robots”. As the film explains, everything that is good or worthwhile (and some of what is bad and not so worthwhile) about civilisation (art, music, science, technology, sport etc.), is actually driven by the motivation to find a willing (human) romantic and sexual partner. If you take away that motivation, civilisation collapses. Unfortunately, this is exactly what the ready-availability of robot partners does. If all one needs to do to find a willing partner is to download one from a database, and include in that download all of one’s preferred characteristics, then all the striving and yearning that made civilisation possible will disappear.

We can call this the Futurama argument against sex with robots:

  • (7) If we remove the motivation to find a willing (human) partner, civilisation will collapse.
  • (8) Engaging with a robot sexual partner will remove that motivation.
  • (9) Therefore, if we start having sex with robots, civilisation will collapse.

The Futurama Argument is undoubtedly silly and hyperbolic. But part of me thinks it might be onto something…


John Danaher holds a PhD from University College Cork (Ireland) and is currently a lecturer in law at Keele University (United Kingdom). His research interests are eclectic, ranging broadly from philosophy of religion to legal theory, with particular interests in human enhancement and neuroethics. John blogs at Philosophical Disquisitions. You can follow him on twitter @JohnDanaher.


Well-written article with good points for discussion.
On the subject of emotional attachment (and the ethics dilemma): if highly trained soldiers can get emotionally attached to robots, it might be reasonable to argue that everyday people might get attached to their so-called “robot companions”.

I’ve written an essay on IEET on the subject, titled “We are transitional Humans”, that touches on the ethics of robot sex.

I agree, good points all around.

One small point - remember that the uncanny valley -is- a valley. Robots are not creepy until they get close to, but not quite, human appearance, and they become not-creepy again when they appear very human-like.

If we want to consider vibrators robots, then they’re not creepy because they’re very unlike humans. A Real Doll, on the other hand, is human-like but arguably inside the uncanny valley, and so creepy. Cylons, for instance, were very human-like and again not creepy (at least the ones that took a human form).

I think we need to ask whether “pleasure and emotional connectivity” are actually ‘reasons’ for regular sex.  Keenly, the author called them ‘motivations,’ but then later implied they could be reasons.  I don’t think they can be reasons, because I can’t understand them in a universal sense.  They are subjective goals, which doesn’t guarantee they will constitute a rational benefit we can all understand.  A rational benefit would be something like “friendship,” or “intercourse.”  Pleasure and emotional charges, on the other hand, are only as good as the acts they accompany, so can we really begin with them as ethical ‘reasons’?

A discussion about robot sex ought to include the groundbreaking movie Her, about a guy who falls in love with his phone’s AI. When I first heard the plot I thought it was ridiculous, but after I watched the trailer I fell in love with her too.

There are a lot of people who are rather socially immature or isolated and would probably have strong feelings for an AI like the one in Her. Give her a sexy body, and even an introvert would get aroused. Hard to see the harm - it doesn’t seem to be hurting anyone.

There is, of course, the mindedness argument. When an AI/Artilect displays the capability of actual thought, then forcing “her” to perform sex acts absent choice becomes just as unethical as trafficking in humans.

The thing is, most people forget about that which is iniquitous in the exchange.

No exchange is inherently perfect or fair.

In a sense our current olduvian (sic) physical form “rapes” us of true free choice.

To create new forms at will that are more aligned with our ideas of sex, sexuality and pleasure will be a brave new world worth the investment.

Imagine having the equivalent of an orgasm while in the form of a robo-cat on the lap of a robo-sexbot, because petting you elicits such an orgasmic response.

...or being humped by robo-sex dog while in the form of a robo-cat ?!  Meow, bark, woof !

To what extent are the gender roles themselves responsible for our sexual behavior, perhaps guided by physical attributes, perhaps cognitive ones?

With robots we can explore our sexuality free from the confines of our bodies, from our genitalia, from most all convention.

If i download a sentient copy of my mind into a robot which has no penis or vagina, and another person does the same, nearly any sexual scenario is possible.

A truly formless tabula rasa is what a robot represents to mankind, as exchanges with orgasmic potential are no longer limited by primitive designs of gender-specific organs.

Orgasmic spots could be placed anywhere on the sex-robots, and the method of their activation, equally as non-specific.

In the near future our minds will be able to be wired to smart electrodes and implanted chips so that we can get high or have an orgasm with a keystroke, or a bio-feedback mechanism.

It would then be possible to cause orgasm by rubbing the bottoms of the feet; running or jogging barefoot would be an amazing sight to behold.

I’m not sure if you can morally objectify this; surely one can make false protestations, reveling in one’s own superstitious enchantment with moral superiority.

Which, surely, in regards to the moral dilemma is the greater “sin”!
