The ethical nonsense (and plutocratic convenience) of AI rights
Marcelo Rinesi
2020-06-19

Other robots, in the future? I very much hope so. I can almost quote by heart that speech from Captain Picard, and don't get me started on how the Federation treats holographic AIs.

But no current or near-term software, whether or not it's labelled as an "AI," is any closer to a claim on human-equivalent personhood than (perhaps, and fascinatingly) an insect. My argument for this isn't a philosophical a priori but an invitation: read any recent technical paper or, even better, read the code of a contemporary AI framework and use it to train a new model. The response is, universally, an awed recognition of how irrelevant anything human-like is to performing as well as we do, or significantly better, at traditionally "intrinsically human" activities like playing complex games, doing rough translations, or solving problems that seem to require intuitive leaps. That a conceptually simple program can out-play us at Go is a profound discovery in empirical philosophy, yes, but a discovery not of increasingly anthropomorphic software; rather, of increasingly less anthropocentric cognition.
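
To see what I mean, here is a minimal sketch (plain numpy, deliberately not any particular framework's real code) of the kind of loop that sits at the core of contemporary machine learning: a tiny network learning XOR by gradient descent. Everything in it is matrix arithmetic plus a repeated error-correction step; there is nothing resembling a mind anywhere in it.

```python
import numpy as np

rng = np.random.default_rng(0)

# The entire "training set": the four cases of XOR.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Randomly initialized weights and biases: the network starts knowing nothing.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10_000):
    # Forward pass: two matrix products and a squashing function.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: measure the error, nudge every weight against it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # typically settles near [[0], [1], [1], [0]]
```

Scale the matrices up by a few orders of magnitude and swap XOR for Go positions or sentence pairs, and you have, structurally, the systems whose personhood is being debated.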

Decades of popular sci-fi, however, and the obvious human parallels, have primed us to see robot revolutions, peaceful or not, first as a fictional trope and then as a plausible scenario, and "robot ethics" occupies a far larger space in public consciousness than "the ethical use of robots." It makes for a much more interesting story, novel but still carrying easy emotional resonances, one that combines some of our most interesting and increasingly ubiquitous technologies and companies with familiar scenarios everybody can engage with.

It is also, let us say... convenient for those who build and own AIs. A bulldozer with faulty brakes that kills a kid is an ethical lapse by company management; a robot sentry that shoots a kid is preemptively framed as the robot's ethical lapse, even if, in practice, the bulldozer's software might be every bit as complex as the sentry's. There's something in many cultures, and perhaps most strongly in American culture, that makes many people subconsciously feel that the ability to use a weapon is somehow a sufficient qualification for personhood.

This ethical shell game is by now a familiar one. Much like the private equity con of siphoning away assets and leaving the liabilities behind in the abandoned ruins of a company, billionaire-targeted counter-cyclical financial support, or the almost-medieval ontological monstrosity of "corporations are people," the discussion of AI personhood, which is in fact a discussion of the personhood of industrial equipment and consumer products, serves mostly to transfer ethical and PR liabilities away from the people who build and deploy them: the engineers, managers, military leaders, and politicians.

It would also lead to a botification of the work environment: not in the technical sense of increased technological intensity, but in the political sense of a degradation of its usefulness as a social arena. Imagine the bot- and troll-infested nightmares that are Twitter and Facebook, but as your workplace. (What can you say of a technological "automation" paradigm that still requires a significant number of low-paid workers, and leaves them exposed to a near-certainty of PTSD?) A world of business software that has to behave ethically is a world where managers have one more layer of indirection so they don't have to. For a company executive, "Company AI turns out to be racist" is a much better headline than "Racist company executives build a product with predictably racist outcomes." No matter when you happen to be reading this, I'm sure there'll be an example or two fresh in your mind.

Robot rights is a fascinating topic intellectually, narratively, and philosophically, and it holds genuine value in all of those areas. It might resonate even more strongly nowadays, as in many societies the nature and extent of human rights are being pushed and fought over in both directions, in what might be an epochal threshold of still-uncertain outcome. But robot rights and obligations are a political and regulatory red herring: a relatively minor one today, but potentially a significant one.