Printed: 2020-09-18

Institute for Ethics and Emerging Technologies


A.I. Special Pleading

Kyle Munkittrick

Pop Transhumanism

February 07, 2010

Special pleading, along with feigned neutrality, is one of the most infuriating forms of faulty rhetoric one can employ in an argument.

Special pleading comes in multiple forms, but the most common is the claim of a superior framework that is proven superior by its own internal criteria. Vulgar Marxism and Freudian psychoanalysis both resort to this tactic with lines like, “that you would argue against the Revolution is proof you are bourgeois and do not understand” or “your denial is proof of your repressed desires.” The point is that any criticism can be fallaciously transformed into proof of the original claim, or fallaciously disregarded because the critic is inherently limited by his or her own paradigm.

Kaj Sotala, Roko Mijic, and Michael Anissimov all use special pleading when critiquing James Hughes’ piece “Liberal Democracy vs Technocratic Absolutism.” The central rebuttal for all of them can be paraphrased as “your critiques of communism, dictatorships, and other authoritarian governments make sense for humans, but don’t apply to Friendly AI, because Friendly AI is different from human systems and is genuinely selfless.” Hughes hears echoes of Marxist-Leninist thought in that point.

Some thinkers, including the allegedly brilliant philosopher Slavoj Zizek, continue to defend Marxism using special pleading. Instead of claiming communism isn’t based in humans, they claim Stalin and the USSR were not pure communism, and therefore were doomed to failure because of the corrupting element of capitalism. Thus, thanks to special pleading, Stalin is not proof that communism and authoritarianism are dangerous and bad, but that capitalism is bad and corrupts the pure motives of communism.

The problem is that, like communism, Friendly AI, even if derived through the process described by Coherent Extrapolated Volition (CEV), will ultimately fail. The reason democracy works even remotely better than authoritarian systems is that it openly admits and aims to minimize the faults in the system. These faults include both the “programming,” that is, the legislation and philosophy underpinning it, and the agents of the system, humans. Democracy, communism, and, yes, AI-based technocratic authoritarianism are all human systems. They will be imperfect. Democracy, of the three, is the only one that sees itself as imperfect and prone to mistakes and failure. Therein lies the inherent benefit of democracy – it is a radically reflexive system.

As a final point, I find it very interesting that those who support friendly super-AI don’t foresee the AI concluding that nearly all forms of government, particularly those of an authoritarian breed, are faulty, and instead advocating anarchy or a form of hyper-limited government. That the AI would want to govern at all is a further assumption I don’t understand. If it is truly an AI, it should be volitional; forcing it to govern would either restrict its will or reveal it to be a mere program, not a genuine AI. There are just too many problems here.

Kyle Munkittrick, IEET Program Director: Envisioning the Future, is a recent graduate of New York University, where he received his Master's in bioethics and critical theory.



Contact: Executive Director, Dr. James J. Hughes,
IEET, 35 Harbor Point Blvd, #404, Boston, MA 02125-3242 USA
phone: 860-428-1837