I am currently editing a book with Neil McArthur on the social, legal and ethical implications of sex robots. As part of that effort, I’m trying to develop a clearer understanding of the typical objections to the creation of sex robots. I have something of a history on this topic. I’ve developed objections to (certain types of) sex robots in my own previous work; and critiqued the objections of others, such as the Campaign Against Sex Robots, on this blog. But I have yet to step back and consider the structural properties these objections might share.
So that’s what I’m going to try to do in this post. I was inspired to do this by my recent re-reading of Sinziana Gutiu’s paper ‘Sex Robots and the Roboticization of Consent’. In the paper, Gutiu objects to the creation of sex robots on several grounds. As I read through her objections I began to spot some obvious structural similarities between what she had to say and what I and others have said. I think identifying these structural similarities allows one to see more clearly the strengths and weaknesses of these objections.
So here’s my plan of action. I’ll start by outlining what I take to be the core logical structure of these objections to sex robots. Then I’ll consider how Gutiu fleshes out this logical structure in her paper, and close with some general reflections on the value of this style of objection. Bear in mind, my goal here is not to critique or defend any particular set of views but rather to achieve greater analytical clarity. The hope is that this clarity could, in turn, be used to craft better critiques and defences. So, if you are looking for a very clear take on the merits or demerits of sex robots, you won’t find that in this post.
1. The Basic Logical Structure: Symbols and their Consequences
Assuming one does not adopt a natural law-type attitude toward sex — according to which any non-procreative sexual act would be ethically questionable — the main concern with the creation of sex robots seems to be with the symbolism and consequences of their creation and use. This dual concern is shared by the objections in Gutiu’s paper, my previous paper on robotic rape and robotic child sexual abuse, and the arguments put forward by the Campaign Against Sex Robots. As a result, I believe the following schematic argument can capture these concerns:
(1) Sex robots do/will symbolically represent ethically problematic sexual norms. (Symbolic Claim)
(2) If sex robots have ethically problematic symbolic properties, then their development and/or use will have negative consequences. (Consequential Claim)
(3) Therefore, the development and/or use of sex robots will have negative consequences and we should probably do something about this. (Warning Call Conclusion)
Some comments about this abstract formulation are in order.
First, the ethically problematic symbolism of sex robots could take many forms. It could be that the physical representation of the robots embodies negative sexual stereotypes. People are particularly concerned about this since the sex robots that are currently in development seem to be targeted primarily towards heterosexual men and tend to represent a certain style of woman (some liken it to a ‘pornstar’-esque style). The behaviour or movement of these sex robots may be problematic as well, e.g. they may behave in an overly deferential, coquettish manner. It could also be that the act of having sex with a robot is symbolically problematic: perhaps they are designed to resist the user’s advances, thereby facilitating a rape fantasy; or perhaps they are designed to be completely passive, ever-willing participants in sexual acts (something Gutiu worries about in her analysis). Perhaps even more symbolically worrying is the possibility of having sex robots that are designed to look and act like children, something I discuss in my article on robotic rape and robotic child sexual abuse, and something that has been mooted by others. Whatever the problematic symbolism may be, it is deemed important in this debate because most people presume that sex robots themselves will not be persons and so will not be harmed by interactions with human users. If the robots cannot be moral victims, their symbolism is all that is left.
Second, the negative consequences of the symbolism could also take many forms, some more immediate and direct than others. It could be that the user is directly and immediately harmed by the interaction with the robot. This is something I raised in my article on the topic, suggesting that anyone who had sex with a child sex robot or a rape fantasy robot may demonstrate a disturbing insensitivity to the social meaning of their act. It could be that the development and use of the robots sends a negative signal to the rest of society, perhaps reinforcing a culture of sexism, misogyny and/or sexual objectification. The interaction with the robot could also have downstream effects on the user, changing his/her interactions with other human beings and thereby having a harmful impact on them as well. All of these possibilities have been mooted in the literature to date. The negative consequences need not be a dead cert; they could have varying degrees of probability attached to them. This is normal enough in a debate about a nascent, emerging technology (heck, it’s normal enough in any debate about the consequences of technological usage). But the uncertainties may make it difficult to draw firm normative conclusions.
Third, the conclusion is something of a non-sequitur in its current form. The first part does follow logically from the premises; the second part does not. Nevertheless, I have tacked on this ‘warning call’ because I think it is common in the debate: most purveyors of these arguments think we ought to do something to minimise the potential negative consequences. What this ‘something’ is is another matter. Some people favour organised campaigns against the development of such devices; others favour strong to weak forms of regulation.
Anyway, that’s what I think the common abstract structure of these objections looks like. Let’s now consider a concrete version of this objection.
2. Gutiu’s Version of the Objection
The version I am going to consider comes, of course, from Gutiu’s paper. I’ll start with her discussion of the symbolism of sex robots. The guiding assumption in her article is that the majority of sex robots will be targeted at heterosexual males and will depict a stereotypical ‘ideal’ woman. She defends this assumption by reference to literature (e.g. the long-standing trope of male protagonists constructing ideal female partners, present for instance in the Adam and Eve myth) and current examples of robotic technology. Some of these examples do not involve actual sexbots (i.e. robots designed for sexual use) but do involve gynoid robots (robots designed to look and act like women) that are highly sexualised:
Aiko, Actroid DER and F, as well as Repliee Q2 are representations of young, thin, attractive oriental women, with high-pitched, feminine voices and movements. Actroid DER has been demoed wearing either a tight hello kitty shirt with a short jean skirt, and Repliee Q2 has been displayed wearing blue and white short leather dress and high-heeled boots.
There are many other examples of this too. Thus, the physical structure of female robots alone serves to replicate arguably problematic norms of body shape, dress, and movement. If you add to this the idea that the robots are designed for sexual use, you compound the problematic symbolism. As Gutiu puts it:
To the user, the sex robot looks and feels like a real woman who is programmed into submission and which functions as a tool for sexual purposes. The sex robot is an ever-consenting sexual partner and the user has full control of the robot and the sexual interaction. By circumventing any need for consent, sex robots eliminate the need for communication, mutual respect and compromise in the sexual relationship. The use of sex robots results in the dehumanization of sex and intimacy by allowing users to physically act out rape fantasies and confirm rape myths.
She repeats this concern several times during the paper.
It seems, then, that Gutiu fleshes out the first premise of the argument in the following manner:
(1*) Sex robots will symbolically represent ethically problematic sexual norms because (a) the majority will adopt gendered norms of body shape, dress, voice and movement (e.g. they will be thin, large-breasted, provocatively clad, coquettish in behaviour and so on - this could vary from society to society); and (b) they will function as ever-consenting sexual tools, allowing users to act out rape fantasies and confirm rape myths.
Some people might find this symbolism disturbing by itself, but consequences are important in this debate. It is, after all, possible that symbolically problematic practices have beneficial consequences. One could argue that allowing a user to act out a rape fantasy with a sex robot is better than having them actually rape a real human being. The robot could, thus, have a beneficial preventative effect. I’m not sure how likely that is, but Gutiu is clear in her paper that the creation and use of sex robots will have negative consequences.
First, there are the obvious social harms, and harms to others, arising from the symbolism. If the robots replicate gendered norms of sexualised appearance and sexual compliance, they will contribute to and reinforce a patriarchal social order that is harmful to women. In particular, Gutiu worries that the symbolism will further distort our understanding of sexual consent. Campaigners have been fighting hard to make changes to the law surrounding rape and sexual assault. The changes made to date try to combat rape myths by clarifying the nature of sexual consent and assigning appropriate weight to the testimony of victims. Sex robots would represent a step back in this fight because:
They embed the idea that women are passive, ever-consenting sex objects, and teach users that when getting consent from a woman, “only no means no”.
In other words, they would go against the recent demand for positive, affirmative signals of sexual consent. This could obviously have an impact on real women, who could become victims of actual sexual assault and rape if users act out these attitudes in the real world.
Second, in addition to the social harms and harms to others, there are the harms to the users themselves. For one thing, they could internalise the problematic sexual norms through repeated use of the robots, which could alter their moral character and the nature of their interactions with real people. Also, and somewhat in tension with this idea, the robots could reinforce antisocial tendencies among users, encouraging them to withdraw more from social interactions, and avoid the need for mutuality and compromise in their sexual lives.
This latter notion is challenged by the film Lars and the Real Girl. There, the use of a sex doll was therapeutic and enabled an introverted man to reintegrate with society. But Gutiu dismisses this:
Although it was an effective approach to a Hollywood film, sex robots are unlikely to help antisocial users better interact with women. It is doubtful that an individual who does not feel accepted in society, and who finds an alternative way to meet their exact needs for companionship will, for some reason, want to integrate back into society, where they can risk rejection and face social discomfort.
This suggests to me that Gutiu fleshes out the second premise of the argument in the following manner:
(2*) If sex robots adopt gendered norms of body shape, dress, behaviour (etc), and function as ever-consenting sexual tools, their creation and use will: (a) reinforce patriarchal social norms and distort our understanding of sexual consent, which will ultimately harm women; and (b) will harm the users by encouraging them to internalise problematic sexual norms and, for some, exacerbate their antisocial tendencies.
This, in turn, leads to the ‘warning call’ conclusion. Gutiu thinks that something should be done to combat the problematic symbolism and likely negative consequences. She does not favour prohibition of sex robots. Instead, she favours various regulatory interventions. These could include, in particular, the demand that creators design robots in a certain way. They could also include the creative use of legal mechanisms to allow potential victims of harm arising from the use of sex robots to sue for damages. As an example, she suggests that a person whose marriage dissolves after their partner starts using a sex robot be allowed to sue the manufacturer. This might seem unusual, but there are legal mechanisms (so-called ‘heart balm torts’) that allow people to sue others for interfering with a legally protected relationship.
3. Concluding Thoughts
Hopefully, you can now see how the abstract argument scheme can be developed into something more concrete. I think there are several ways in which to challenge and support the argument developed by Gutiu. But I won’t say too much about them in this post. That wasn’t my intention. I’ll just close with three general comments. These flag up issues that I think are important or worthy of further consideration.
First, on the symbolic claim, I think it is generally true that sex robots appeal to stereotypical gendered norms of appearance and behaviour. You see this all the time in fictional depictions of sex robots (I think, in particular, of the robots in the TV series Humans and the movie Ex Machina, which were used for sexual purposes, though not limited to sexual functionality). You also see it in Roxxxy, the sex robot developed by TrueCompanion, and the prototypes being developed by RealDoll (LINKs). But I also think that the problematic symbolism could be addressed. The robots don’t have to adopt stereotypical appearances and behaviours. You could, for instance, design robots to give active, affirmative signals of consent. This may be an appropriate target for regulatory intervention or mass social pressure.
Second, despite what I just said, there is an interesting idea in Gutiu’s paper which suggests that there may be something inherent (or, at least, very strongly embedded) in the idea of a sex robot that makes it symbolically problematic. When you think about it, people are probably drawn to the creation and use of such devices because they want an ultimately compliant and ever-willing sexual outlet. There wouldn’t be much point in creating a sex robot that acted exactly like a human being (and could, therefore, avoid, resist or otherwise not reciprocate their sexual desires), since there are plenty of those around anyway. But the very thing that makes sex robots an attractive proposition is, in and of itself, symbolically problematic. It represents sexual interactions as devoid of mutuality. Now, of course, people already engage in many solo sexual acts that are devoid of mutuality. And most would agree that there is nothing problematic (symbolic or otherwise) about those acts. But they are symbolically different: they do not involve embodied sexual contact with something that looks and acts (sorta) like a real human being. I don’t know what to make of this right now, but I think the notion that problematic symbolism is strongly embedded in sex robots is interesting. It means it may not be easy to address the symbolism through regulatory intervention or reform.
Third, and finally, the consequential claims that permeate this debate always strike me as being problematic. In many cases, the consequences appealed to are speculative (since the technology is not in widespread use) and indirect. As Anders Sandberg has argued elsewhere, it may indeed be true that the use of sex robots contributes to more harmful social environments and interactions with real human beings, but how tight is that causal connection likely to be? Is intervention into the development and use of sex robots likely to be the most effective way to combat these problems? Or could other policy levers be pulled to the same or better effect? These are all important questions when it comes to assessing the consequential claims and the warning calls that are issued in this debate.