The idea of artificial slaves, and questions about their tractability, is present not only in modern literature but extends all the way back to ancient Greek sources, and it runs through the literature and oral history of the early modern period as well. Aristotle is the first to discuss the uses and advantages of the artificial slave, in his Politics.
Great post! I look forward to reading your fascinating book.
I was curious whether anything you reviewed in the premodern literature, or were aware of from the same period, dealt with what I’ll call “the Creator’s dilemma”: the idea that if you build a machine that crosses a threshold of intelligence, it would require free will, which thereby entails that you can no longer ethically treat it as an instrument?
Posted by kevinlagrandeur on 01/23 at 09:31 PM
Rick, first, thanks for your kind words. As for your question, in the Western cultures of the past there is little evidence of the dilemma you mention, because of the way they rationalized slavery. The majority took Aristotle’s view that some intelligent beings were naturally subservient to others (i.e., Aristotle’s idea of the natural slave). Interestingly, they mostly used the power of language to make this distinction: the Greeks, for example, thought anyone who didn’t speak Greek was subhuman; Jewish doctrine on the golem specified that its subhuman status rested on the facts that, first, it wasn’t very intelligent, second, it was man-made, and, most important, it was mute. This view of artificially intelligent things was common. And yes, I discuss this in my book!
Posted by CygnusX1 on 01/24 at 11:43 AM
Given the growing trend of automation displacing the Human workforce and its inefficiencies, with growing mass unemployment, and yet also contemplating the vast benefits of employing Humanoid robots to cater for our social needs, social care, and petty errands and duties, perhaps we should be more worried about becoming too reliant?
And I guess this inherent Human laziness is apparent even now in the lack of any real forward thinking about the near-future consequences of robots utilised at large in society?
The master/slave relationship sounds ugly; just when you thought Human ethics had risen above this regressive thinking.. whoops! Here we go again?
The rational view would seem to be that, as long as robots remain machines and are not Self-reflexive to the level of achieving Human-like consciousness, they will have no feelings to hurt and no rights to worry about?
Yet Humans are fated to create a near-perfect duplicate of Human-like levels of intelligence and learning, and the manifestation of some “ghost in the machine” is not an entire impossibility (in the purely non-mystical sense, a Self-reflexive intelligent system may well simulate an entity so well as to easily pass the Turing test, and go much farther beyond?)
I would also say that Free will, or “will to action”, can easily be programmed into a robot or AI machine without any reference to “freedom” at all? Such is the age-old argument and debate concerning Humans anyhow. Meaning, I would say that Self-reflexivity is a higher priority for declaring “Robot rights” than any illusions of Free will that may appear as a consequence of this Self-reflexivity?
I, Robot contemplates all of these ethical issues and prejudices.
Scenario 1. Robots enter society in a means-tested, economically affordable and efficient manner, permitting Humans to acclimatise to relinquishing responsibilities and duties, and still deal with their own prejudices?
Scenario 2. Robots hit the consumer market with full force, and, as your article highlights, Humans are not yet ready to deal with the lack of responsibilities or to overcome their own laziness and boredom, and the result is an increase in prejudice against robots?
I hope Robots remain machines.. at least for the greater part of this earlier era, for the sake of all parties?