Right now it’s Sunday afternoon. There is a large pile of washed but as yet un-ironed clothes on a seat in my living room. I know the ironing needs to be done, and I’ve tried to motivate myself to do it. Honestly. The ironing board is out, as is the iron; I have lots of interesting things I could watch or listen to while I do the ironing, and I have plenty of free time in which to do it. But instead I’m in my office writing this blog post. Why?
Hard determinism aside, what would be the point? Is there some benefit to be realized by person-robot-slaves that isn’t available with non-person-robot-slaves, where “person” is understood in the morally thick sense of the word?
The second point is more of a meta-point. Assuming that we created other person-robots, it’s a good bet that they would frown upon a class of their people being created for slavery, even if that slavery made the slaves happy. I suspect that humans would do the same if, say, we genetically engineered a slave class of humans that responded to their bondage in the way you suggest.
The act of creation itself says something about us, I think, that may be even more relevant than the desires of the slave-person-robots. It also strikes me as a good way to edge closer to those existential risks some people are concerned about with AI.
What would be the point? Maybe none, but maybe a person-robot would have the intelligence and flexibility to respond to human needs in a way that non-person-robot slaves could not. Peterson alludes to this in his article by discussing the advantages of programming robot slaves with the general desire to please and satisfy humans, not (say) the specific desire to do the ironing. Of course, this could indeed edge us closer to AI risk concerns.