


LessWrong on morality and logic


By Massimo Pigliucci
Rationally Speaking

Posted: Jan 24, 2013

There has been a debate on morality brewing of late over at LessWrong. As readers of this blog know, I am not particularly sympathetic to that outlet (despite the fact that two of my collaborators here are either fans of it or involved in major ways with it — see how open minded I am?). Largely, this is because I think of the Singularity and related ideas as borderline pseudoscience, and have a hard time taking too seriously a number of other positions and claims made at LW. Still, in this case my friend Michael DeDora, who also writes here [Rationally Speaking], pointed me to two pieces — one by Eliezer Yudkowsky and one by another LW author — that I’d like to comment on.

Yudkowsky has written a long (and somewhat rambling, but interesting nonetheless) essay entitled “By Which It May Be Judged” in which he explores the relationship between morality, logic and physics. A few days later, someone named Wei Dai wrote a brief response with the unambiguously declarative title “Morality Isn’t Logical,” in which the commenter presents what he takes to be decisive arguments against Yudkowsky’s thesis. I think that this time Yudkowsky got it largely right, though not entirely so (and the part he got wrong is, I think, interestingly indicative), while Dai makes some recurring mistakes in reasoning about morality that should be highlighted for future reference.

Let’s start with Yudkowsky’s argument then. He presents a thought experiment, a simple situation leading to a fundamental question in ethical reasoning: “Suppose three people find a pie — that is, three people exactly simultaneously spot a pie which has been exogenously generated in unclaimed territory. Zaire wants the entire pie; Yancy thinks that 1/3 each is fair; and Xannon thinks that fair would be taking into equal account everyone’s ideas about what is ‘fair’.” He continues: “Assuming no relevant conditions other than those already stated, ‘fairness’ simplifies to the mathematical procedure of splitting the pie into equal parts; and when this logical function is run over physical reality, it outputs ‘1/3 for Zaire, 1/3 for Yancy, 1/3 for Xannon.’”
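Yudkowsky’s talk of a “logical function run over physical reality” can be caricatured in a few lines of code. This is purely an illustrative sketch of the pie scenario — the function name and representation are mine, not anything from his essay:

```python
from fractions import Fraction

def fair_split(claimants):
    """With no relevant conditions beyond simultaneous discovery,
    'fairness' reduces to equal division: each claimant gets 1/n."""
    share = Fraction(1, len(claimants))
    return {name: share for name in claimants}

# Running the "logical function" over the pie scenario:
print(fair_split(["Zaire", "Yancy", "Xannon"]))
# {'Zaire': Fraction(1, 3), 'Yancy': Fraction(1, 3), 'Xannon': Fraction(1, 3)}
```

The point of the caricature is just that, given the stated conditions, the output does not depend on what Zaire, Yancy, or Xannon happen to want — the function returns 1/3 each regardless.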

Setting aside fancy talk of logical functions being run over physical reality, this seems to me exactly right, and for precisely the reasons Yudkowsky goes on to explain. (I will leave it to the unconvinced reader to check his original essay for the details.) He then tackles the broader question of skepticism about morality — a surprisingly fashionable attitude in certain quarters of the skeptic/atheist (but not humanist) community. Yudkowsky of course acknowledges that we shouldn’t expect to find any cosmic Stone Tablet onto which right and wrong are somehow written, and even mentions Plato’s Euthyphro as the 24-century-old source of that insight. Nonetheless, he thinks that “if we confess that ‘right’ lives in a world of physics and logic — because everything lives in a world of physics and logic — then we have to translate ‘right’ into those terms somehow.” I don’t know why that would be a “confession” rather than a reasonable assumption, but I’m not going to nitpick [1].

Yudkowsky proceeds by arguing that there is no “tweaking” of the physical universe one can try to make it become right to slaughter babies (I know, his assumption here could be questioned, but I actually agree with it — more later). And continues with his punchline: “But if you can’t make it good to slaughter babies by tweaking the physical state of anything — if we can’t imagine a world where some great Stone Tablet of Morality has been physically rewritten, and what is right has changed — then this is telling us that what’s ‘right’ is a logical thingy rather than a physical thingy, that’s all. The mark of a logical validity is that we can’t concretely visualize a coherent possible world where the proposition is false.”

But wait! Doesn’t Yudkowsky here run a bit too fast? What about Moore’s open question, which Yudkowsky rephrases (again, unnecessarily) as “I can see that this event is high-rated by logical function X, but is X really right?” His answer to Moore comes in the form of another thought experiment, one in which we are invited to imagine an alternative logical function (to the one that says that it isn’t right to slaughter babies), one in which the best possible action is to turn everything into paperclips. Yudkowsky argues that “as soon as you start trying to cash out the logical function that gives betterness its truth-value, it will output ‘life, consciousness, etc. and paperclips,’” finally concluding that “where moral judgment is concerned, it’s logic all the way down. ALL the way down.”

And that’s where he goes wrong. As far as I can tell he simply sneaked in the assumption that life and consciousness are better than paperclips, but that assumption is entirely unjustified, either by logic or by physics. It is, of course, perfectly justified by something else: biology, and in particular the biology of conscious social animals such as ourselves (and relevantly similar beings in the rest of the universe, let’s not be unnecessarily parochial). [2]

Even though he mentions the word, Yudkowsky seems to have forgotten that logic itself needs axioms (or assumptions) to get started. There has to be an anchor somewhere, and when it comes to reasoning about the physical world those axioms come in the form of brute facts about how the universe is (unless you think that all logically possible universes exist in a strong sense of the term “exist,” a position actually taken by some philosophers, but which we will not pursue here). Specifically, morality makes sense — as Aristotle pointed out — for beings of a certain kind, with certain goals and needs in life. The axioms are provided by human nature (or, again, a relevantly similar nature). Indeed, Yudkowsky grants that an intuitive moral sense likely evolved as an emotion in response to certain actions performed by other members of our in-group. That’s the “gut feeling” we still have today when we hear of slaughtered children but not of paperclip factories. Moral reasoning, then, aims at reflecting and expanding on our moral instincts, to bring them up to date with the complexity of post-Pleistocene environments.

So morality has a lot to do with logic — indeed I have argued that moral reasoning is a type of applied logical reasoning — but it is not logic “all the way down,” it is anchored by certain contingent facts about humanity, bonoboness and so forth.

Which brings me to Dai’s response to Yudkowsky. Dai’s perspective is that morality is not a matter of logic “in the same sense that mathematics is logical but literary criticism isn’t: the ‘reasoning’ we use to think about morality doesn’t resemble logical reasoning. All systems of logic, that I’m aware of, have a concept of proof and a method of verifying with high degree of certainty whether an argument constitutes a proof.”

Maybe Dai would do well to consult an introductory book on logic (this is a particularly accessible one). Logic is not limited to deductive reasoning; it also includes inductive and probabilistic reasoning, situations where the concept of a math-like proof doesn’t apply. And yet logicians have been able to establish whether and to what degree different types of inductive inferences are sound. I agree that literary criticism isn’t about logic, but it doesn’t follow that philosophical reasoning — and particularly ethical reasoning — isn’t either. (When something doesn’t logically follow from something else, and yet one insists that it does, the person in question is said to be committing an informal logical fallacy, in this specific case a non sequitur.)

Dai does present some sort of positive argument for why ethical reasoning isn’t logical: “people all the time make moral arguments that can be reversed or called into question by other moral arguments.” But that’s exceedingly weak. People also deny empirical facts all the time (climate change? evolution? vaccines and autism?) without this being a good argument for rejecting those empirical facts.

Of course if by “people” Dai means professional moral philosophers, then that’s a different story. And yes, professional moral philosophers do indeed disagree, but at very high levels of discourse and for quite technical reasons, as is to be expected from specialized professionals. I am not trying to argue that moral philosophy is on par with mathematics (not even Yudkowsky is going that far, I think), I’m simply trying to establish that on a range from math to literary criticism ethical reasoning is closer to the former than to the latter. And that’s because it is a form of applied logic.

Dai is worried about a possible implication of Yudkowsky’s approach: that “a person’s cognition about morality can be described as an algorithm, and that algorithm can be studied using logical reasoning.” I don’t know why people at LessWrong are so fixated on algorithms [3], but no serious philosopher would think in terms of formal algorithms when considering (informal) ethical reasoning. Moreover, since we know that it is not possible to program a computer to produce all and only the truths of number theory (which means that mathematical truths are not all logical truths), clearly algorithmic approaches run into severe limitations even with straight math, let alone with moral philosophy.

So here is the take-home stuff from the LW exchange between Yudkowsky and Dai: contra the latter, and similar to the former, ethical reasoning does have a lot to do with logic, and it should be considered an exercise in applied logic. But, despite Yudkowsky’s confident claim, morality isn’t a matter of logic “all the way down,” because it has to start with some axioms, some brute facts about the type of organisms that engage in moral reasoning to begin with. Those facts don’t come from physics (though, like everything else, they better be compatible with all the laws of physics), they come from biology. A reasonable theory of ethics, then, can emerge only from a combination of biology (by which I mean not just evolutionary biology, but also cultural evolution) and logic. Just like Aristotle would have predicted, had he lived after Darwin.

———

[1] If I were to nitpick, then I would have to register my annoyance with a paragraph in the essay where Yudkowsky seems not to get the distinction between philosophy of language and logic. I will leave the reader to look into it as an exercise: it’s the bit where he complains about “rigid designators.”

[2] Yes, yes, biological organisms are made of the same stuff that physics talks about, so aren’t we still talking about physics? Not in any interesting way. If we are talking metaphysically, that sort of ultra-reductionism skips over the possibility and nature of emergent properties, which is very much an open question. If we are talking epistemically, there is no way Yudkowsky or anyone else can produce a viable quantum theory of social interactions (computationally prohibitively complex, even if possible in principle), so we are back to biology. At the very least, this is the right (as in most informative) level of analysis for the problem at hand.

[3] Actually, I lied. I think I know why LW contributors are fixated on algorithms: because they also tend to embrace the idea that human minds might one day be “uploaded” into computers, in turn based on the idea that human consciousness is a type of computation that can be described by an algorithm. Of course, they have no particularly good reason to think so, and as I’ve argued in the past, that sort of thinking amounts to a type of crypto-dualism that simply doesn’t take seriously consciousness as a biological phenomenon. But that’s another story.


Massimo Pigliucci has a Doctorate in Genetics from the University of Ferrara (Italy), a PhD in Evolutionary Biology from the University of Connecticut, and a PhD in Philosophy from the University of Tennessee. He has done post-doctoral research in evolutionary ecology at Brown University and is currently Chair of the Philosophy Department at Lehman College and Professor of Philosophy at the Graduate Center of the City University of New York. His research interests include the philosophy of biology, in particular the structure and foundations of evolutionary theory, the relationship between science and philosophy, the relationship between science and religion, and the nature of pseudoscience.


COMMENTS


Massimo, so far as you went, I think I agree, although I’m not sure I was always following your selection of vocabulary. I’d say morality must account for actual anatomical desires and the tensions and conflicts between and among them. This seems to me to be what you’re calling “biology”. You say this isn’t physics, although I suspect Eliezer would say that biology is a manifestation of physics, which you seem to acknowledge by saying biology must be compatible with physics. I’d say mortality must account for actual environmental laws quite as much as it must account for desires, so I think I’m on board with you there.

Here’s where I think perhaps you’ve not gone far enough. I’m not persuaded that the reductionist worldview should be privileged above (or below) the holistic worldview. I see chicken-and-egg style feedback loops between the two, between us as individuals/communities and our substrate as anatomies/environments. As our morality should account for desires and laws, so it should account for actual individual wills and actual communal rules. It seems to make as much sense to say it’s esthetics all the way up as to say it’s logic all the way down. I see ethics as the meeting place of esthetics and epistemics, with both art and logic (feelings and reasons) playing out in our moral reasoning. Maybe you’re suggesting the same when you state that ethical reasoning is somewhere between math and literary criticism?





“But, despite Yudkowsky’s confident claim, morality isn’t a matter of logic “all the way down,” because it has to start with some axioms, some brute facts about the type of organisms that engage in moral reasoning to begin with”
But logic itself also starts with axioms. Emotions are logic all the way down, just a different kind of logic. Whereas conventional logic’s axioms are discrete, concise little rules for symbolic transformation, an axiom of emotional logic (“exiom”?) is something resembling a point along various spectra, something resembling the Big Five personality traits/factors. This point mingles with the other points to bias the direction the train of thought goes in.

So yes, while philosophers discussing ethical situations or whatever else conduct their discussion using highly technical concepts resembling the sort of symbolic manipulation at play in formal logic, if you interrogate for long enough how they got to that highly abstracted technical concept, you’ll find an exiom that’s discrepant from that/those of their opponent.





“Here’s where I think perhaps you’ve not gone far enough. I’m not persuaded that the reductionist worldview should be privileged above (or below) the holistic worldview.”

Right. What I’m obsessed with discovering is can Christians (not to single them out for critiques), say, continue to change the physical world in a dislocative quantum way yet retain their holistic Newtonian worldview? can they continue to perceive their minds/souls/spirits as holistic, but the physical world to be chopped up and rearranged?—industrialism has done just that.
Most do not think on these matters, they are do-ers, not thinkers; Christians are pragmatic as to the physical world, idealistic in moral questions. However there has always been that pragmatic-idealistic dialectic: my query concerns the future.

“I’d say mortality must account for actual environmental laws quite as much as it must account for desires, so I think I’m on board with you there.”

Lincoln,
you meant to write morality, not mortality, correct?





Wow, Massimo,

You have almost perfectly explained Sam Harris’ The Moral Landscape.

Well, at least a portion of it, where he explains that morality, in our case, is pretty much biologically constrained.




