LessWrong on morality and logic
Massimo Pigliucci
2013-01-24

Yudkowsky has written a long (and somewhat rambling, but interesting nonetheless) essay entitled “By Which It May Be Judged” in which he explores the relationship between morality, logic and physics. A few days later, someone named Wei Dai wrote a brief response with the unambiguously declarative title “Morality Isn’t Logical,” in which the commenter presents what he takes to be decisive arguments against Yudkowsky’s thesis. I think that this time Yudkowsky got it largely right, though not entirely so (and the part he got wrong is, I think, interestingly indicative), while Dai makes some recurring mistakes in reasoning about morality that should be highlighted for future reference.

Let’s start with Yudkowsky’s argument then. He presents a thought experiment, a simple situation leading to a fundamental question in ethical reasoning: “Suppose three people find a pie — that is, three people exactly simultaneously spot a pie which has been exogenously generated in unclaimed territory. Zaire wants the entire pie; Yancy thinks that 1/3 each is fair; and Xannon thinks that fair would be taking into equal account everyone’s ideas about what is ‘fair’.” He continues: “Assuming no relevant conditions other than those already stated, ‘fairness’ simplifies to the mathematical procedure of splitting the pie into equal parts; and when this logical function is run over physical reality, it outputs ‘1/3 for Zaire, 1/3 for Yancy, 1/3 for Xannon.’”
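
To make the “logical function” talk concrete, here is a minimal sketch (my illustration, not code from Yudkowsky’s essay; the function name and the use of Python are my own choices) of the fairness function the pie example describes: feed it the physical facts, a pie and a list of claimants with no other relevant conditions, and it outputs the equal split.

```python
from fractions import Fraction

def fair_split(pie: Fraction, claimants: list[str]) -> dict[str, Fraction]:
    """The 'fairness' function of the pie example: absent any other
    relevant conditions, fairness reduces to equal division."""
    share = pie / len(claimants)
    return {person: share for person in claimants}

# "Running the logical function over physical reality":
print(fair_split(Fraction(1), ["Zaire", "Yancy", "Xannon"]))
# -> {'Zaire': Fraction(1, 3), 'Yancy': Fraction(1, 3), 'Xannon': Fraction(1, 3)}
```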

Setting aside fancy talk of logical functions being run over physical reality, this seems to me exactly right, and for precisely the reasons Yudkowsky goes on to explain. (I will leave it to the unconvinced reader to check his original essay for the details.) He then tackles the broader question of skepticism about morality — a surprisingly fashionable attitude in certain quarters of the skeptic/atheist (but not humanist) community. Yudkowsky of course acknowledges that we shouldn’t expect to find any cosmic Stone Tablet onto which right and wrong are somehow written, and even mentions Plato’s Euthyphro as the twenty-four-century-old source of that insight. Nonetheless, he thinks that “if we confess that ‘right’ lives in a world of physics and logic — because everything lives in a world of physics and logic — then we have to translate ‘right’ into those terms somehow.” I don’t know why that would be a “confession” rather than a reasonable assumption, but I’m not going to nitpick [1].

Yudkowsky proceeds by arguing that there is no “tweaking” of the physical universe that would make it right to slaughter babies (I know, his assumption here could be questioned, but I actually agree with it — more on that later). He then delivers his punchline: “But if you can’t make it good to slaughter babies by tweaking the physical state of anything — if we can’t imagine a world where some great Stone Tablet of Morality has been physically rewritten, and what is right has changed — then this is telling us that what’s ‘right’ is a logical thingy rather than a physical thingy, that’s all. The mark of a logical validity is that we can’t concretely visualize a coherent possible world where the proposition is false.”

But wait! Doesn’t Yudkowsky run a bit too fast here? What about Moore’s open question, which Yudkowsky rephrases (again, unnecessarily) as “I can see that this event is high-rated by logical function X, but is X really right?” His answer to Moore comes in the form of another thought experiment, in which we are invited to imagine an alternative to the logical function that says it isn’t right to slaughter babies: a function according to which the best possible action is to turn everything into paperclips. Yudkowsky argues that “as soon as you start trying to cash out the logical function that gives betterness its truth-value, it will output ‘life, consciousness, etc. and paperclips,’” finally concluding that “where moral judgment is concerned, it’s logic all the way down. ALL the way down.”

And that’s where he goes wrong. As far as I can tell, he simply sneaked in the assumption that life and consciousness are better than paperclips, but that assumption is entirely unjustified, either by logic or by physics. It is, of course, perfectly justified by something else: biology, and in particular the biology of conscious social animals such as ourselves (and relevantly similar beings in the rest of the universe; let’s not be unnecessarily parochial). [2]

Even though he mentions the word “axiom,” Yudkowsky seems to have forgotten that logic itself needs axioms (or assumptions) to get started. There has to be an anchor somewhere, and when it comes to reasoning about the physical world those axioms come in the form of brute facts about how the universe is (unless you think that all logically possible universes exist in a strong sense of the term “exist,” a position actually taken by some philosophers, but one we will not pursue here). Specifically, morality makes sense — as Aristotle pointed out — for beings of a certain kind, with certain goals and needs in life. The axioms are provided by human nature (or, again, a relevantly similar nature). Indeed, Yudkowsky grants that an intuitive moral sense likely evolved as an emotional response to certain actions performed by other members of our in-group. That’s the “gut feeling” we still have today when we hear of slaughtered children but not of paperclip factories. Moral reasoning, then, aims at reflecting and expanding on our moral instincts, to bring them up to date with the complexity of post-Pleistocene environments.

So morality has a lot to do with logic — indeed I have argued that moral reasoning is a type of applied logical reasoning — but it is not logic “all the way down”; it is anchored by certain contingent facts about humanity, bonoboness, and so forth.

Which brings me to Dai’s response to Yudkowsky. Dai’s perspective is that morality is not a matter of logic “in the same sense that mathematics is logical but literary criticism isn’t: the ‘reasoning’ we use to think about morality doesn’t resemble logical reasoning. All systems of logic, that I’m aware of, have a concept of proof and a method of verifying with high degree of certainty whether an argument constitutes a proof.”

Maybe Dai would do well to consult an introductory book on logic. Logic is not limited to deductive reasoning; it also includes inductive and probabilistic reasoning, areas where the concept of math-like proof doesn’t apply. And yet logicians have been able to establish whether, and to what degree, different types of inductive inferences are sound. I agree that literary criticism isn’t about logic, but it doesn’t follow that philosophical reasoning — and particularly ethical reasoning — isn’t either. (When something doesn’t logically follow from something else, and yet one insists that it does, the person in question is said to be committing an informal logical fallacy, in this specific case a non sequitur.)
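
As a toy illustration of that point (the example and the numbers are mine, not from either post): a probabilistic inference confers a degree of support on its conclusion rather than a proof, yet that degree is computed by perfectly rigorous rules, in this case Bayes’ theorem.

```python
def bayes_update(prior: float, likelihood: float, evidence_prob: float) -> float:
    """Posterior probability of hypothesis H given evidence E:
    P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Hypothesis H: "all ravens are black." Seeing one more black raven does
# not prove H, but it can raise its probability -- the hallmark of
# inductive (as opposed to deductive) support:
prior = 0.50          # P(H) before the observation (stipulated for the example)
p_e_given_h = 0.99    # P(observing a black raven | H)
p_e = 0.90            # P(observing a black raven) overall
print(bayes_update(prior, p_e_given_h, p_e))  # ~0.55: more support, no proof
```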

Dai does present some sort of positive argument for why ethical reasoning isn’t logical: “people all the time make moral arguments that can be reversed or called into question by other moral arguments.” But that’s exceedingly weak. People also deny empirical facts all the time (climate change? evolution? vaccines and autism?) without this constituting a good argument for rejecting those facts.

Of course if by “people” Dai means professional moral philosophers, then that’s a different story. And yes, professional moral philosophers do indeed disagree, but at very high levels of discourse and for quite technical reasons, as is to be expected from specialized professionals. I am not trying to argue that moral philosophy is on par with mathematics (not even Yudkowsky is going that far, I think), I’m simply trying to establish that on a range from math to literary criticism ethical reasoning is closer to the former than to the latter. And that’s because it is a form of applied logic.

Dai is worried about a possible implication of Yudkowsky’s approach: that “a person’s cognition about morality can be described as an algorithm, and that algorithm can be studied using logical reasoning.” I don’t know why people at LessWrong are so fixated on algorithms [3], but no serious philosopher would think in terms of formal algorithms when considering (informal) ethical reasoning. Moreover, since we know from Gödel’s incompleteness results that it is not possible to program a computer to produce all and only the truths of number theory (which means that mathematical truths are not all logical truths), clearly algorithmic approaches run into severe limitations even with straight math, let alone with moral philosophy.
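
For readers who want the formal result behind that claim, here is a standard statement of Gödel’s first incompleteness theorem (supplied as background; the formulation is ordinary textbook material, not drawn from either post):

```latex
% Standard textbook statement, included as background; not part of
% either of the original posts.
\textbf{Theorem (G\"odel, 1931).} If $T$ is a consistent, recursively
axiomatizable theory that includes elementary arithmetic, then there is
a sentence $G_T$ in the language of arithmetic such that
$T \not\vdash G_T$ and $T \not\vdash \lnot G_T$.

% Consequence used above: the set of true sentences of arithmetic,
% $\mathrm{Th}(\mathbb{N})$, is not recursively enumerable, so no
% program can output all and only the truths of number theory.
```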

So here is the take-home message from the LW exchange between Yudkowsky and Dai: contra the latter, and in agreement with the former, ethical reasoning does have a lot to do with logic, and it should be considered an exercise in applied logic. But, despite Yudkowsky’s confident claim, morality isn’t a matter of logic “all the way down,” because it has to start with some axioms, some brute facts about the type of organisms that engage in moral reasoning to begin with. Those facts don’t come from physics (though, like everything else, they had better be compatible with all the laws of physics); they come from biology. A reasonable theory of ethics, then, can emerge only from a combination of biology (by which I mean not just evolutionary biology, but also cultural evolution) and logic. Just as Aristotle would have predicted, had he lived after Darwin.

———

[1] If I were to nitpick, then I would have to register my annoyance with a paragraph in the essay where Yudkowsky seems not to get the distinction between philosophy of language and logic. I will leave the reader to look into it as an exercise; it’s the bit where he complains about “rigid designators.”

[2] Yes, yes, biological organisms are made of the same stuff that physics talks about, so aren’t we still talking about physics? Not in any interesting way. If we are talking metaphysically, that sort of ultra-reductionism skips over the possibility and nature of emergent properties, which is very much an open question. If we are talking epistemically, there is no way Yudkowsky or anyone else can produce a viable quantum theory of social interactions (computationally prohibitively complex, even if possible in principle), so we are back to biology. At the very least, this is the right (as in most informative) level of analysis for the problem at hand.

[3] Actually, I lied. I think I know why LW contributors are fixated on algorithms: because they also tend to embrace the idea that human minds might one day be “uploaded” into computers, which in turn is based on the idea that human consciousness is a type of computation that can be described by an algorithm. Of course, they have no particularly good reason to think so, and as I’ve argued in the past, that sort of thinking amounts to a type of crypto-dualism that simply doesn’t take consciousness seriously as a biological phenomenon. But that’s another story.