Oh my, I thought I was done for a while chastising skeptics like Sam Harris on the relationship between philosophy, science and morality, and I just found out that my friend Michael Shermer has indulged in a similar (though not quite as egregious as Harris’) bit of questionable thinking. As I explained in my review of Harris’ book for Skeptic, one learns precisely nothing about morality by reading The Moral Landscape. Indeed, one’s time on that topic is much better spent leafing through Michael Sandel’s Justice: What’s the Right Thing to Do?, for example.
Anyway, apologies for the repetition, but here we go again. (For a fuller explanation of how I think moral philosophy works, see here; on science and philosophy here and here. For how the whole philosophy-science-morality shebang hangs together, take a look at chapters 2, 3, 4 and 5 of Answers for Aristotle.)
Michael begins his piece by complaining that scientists have “conceded the high ground of determining human values, morals, and ethics to philosophers,” and arguing that this was a mistake. Indeed, Shermer says that such concession (when did it happen, by the way? Did the National Academy of Sciences pass a resolution under pressure from the American Philosophical Association?) comes at the worst time because new scientific tools and discoveries are gonna finally tell us where we ought to get our values.
What are these tools? They include evolutionary ethics and neuroethics, among other fields. Now imagine that Michael had been talking about math instead of ethics. The idea would run something like this: “Scientists have conceded the high ground of resolving mathematical problems to mathematicians, just when the new disciplines of evolutionary mathematics and neuro-mathematics are coming on line.” My point is, I hasten to say, not that ethics is like math, but rather that evolutionary math and neuro-math would be giving us answers to different questions. An evolutionary approach to understanding our ability to reason mathematically could give us clues as to why we are capable of abstract thinking to begin with, which is interesting. “Neuro-mathematics” could then provide answers to the question of how the brain works when it engages in mathematical (and other types of abstract) thinking. But if you want to know how to prove Pythagoras' theorem, neither evolutionary biologists nor neurobiologists are the right kind of experts. You need a mathematician.
Similarly with ethics: we need an evolutionary understanding of where a strong sense of right and wrong comes from as an instinct, and a neurobiological account of how our brains function (or malfunction) when they engage in ethical reasoning. But it is the moral philosopher, not the evolutionary biologist or the neurobiologist, we should check with if we want to know whether a particular piece of ethical reasoning is logically sound or not.
This is not at all to say that science is irrelevant to ethical reasoning. No philosopher I know of holds to that absurd position (except perhaps a dwindling band of stubborn theologians). But more on that point later.
Shermer proceeds immediately to blame the is/ought problem as the main culprit for scientists’ misguided concession to philosophers (even though I bet dollars to donuts that the overwhelming majority of scientists have never heard of the is/ought problem). Indeed, Michael claims that the problem is a fallacy (I take it he is using the term colloquially, since I don’t see that entry listed in the vast catalogue of fallacies that professional philosophers and logicians have accumulated).
Why is the is/ought problem a fallacy, according to Shermer? Because “morals and values must be based on the way things are in order to establish the best conditions for human flourishing.” Let’s unpack (as philosophers are fond of saying) that loaded phrase. First off, there is a prescriptive claim (“must”) that is not actually argued for. Sounds like Michael is engaging in some a priori philosophizing of his own. Why exactly must we base morals and values on the way things are (as opposed to, say, the way we would like them to be)?
Second, “the way things are” has, of course, changed dramatically across centuries and cultures (science tells us this!). Which point in the space-time continuum are we going to pick as our reference to ground our scientific study of morality? We had better not just assume that our own current time and place represent the best of all possible worlds.
Third, “human flourishing” is a surprisingly slippery (and philosophically loaded!) concept, not at all easy to handle by straightforward quantitative analyses. (If you want an idea of the sort of complications I have in mind, take a look here and here.) And of course it should go without saying that the goal of increasing human flourishing is itself the result of a value choice that cannot possibly be grounded in empirical evidence. Nothing wrong with that, unless you insist on a scientistic take on the study of morality.
Shermer then gives his readers a list of things that science can help us understand (presumably, as opposed to philosophy): these are facts about the amount of variation in psychological traits that is genetically determined, basic information about reciprocal altruism, facts about punishment in human societies, something about behavioral game theory, and the conclusion from behavioral economics that trade establishes trust.
I will not pick on any of these specific claims (we could start with the bit about 40-50% heritability of human behavioral traits, just for fun, but that would be a whole different post), because I simply do not see Michael’s point. Nobody has argued, to my knowledge, that philosophy can do a better job than science at finding out facts about the world, at least not since Descartes (who thought of himself as a scientist, by the way). That would be like arguing that chemistry is better than history at figuring out things about the Roman empire. Moreover, the list seems aimed at establishing the idea that morality better be built on an understanding of human nature. Indeed. Aristotle was certainly convinced of that, the utilitarians better agree too, and even a stern rationalist moralist like Kant had to concede that “ought implies can,” which means that in order to talk about morality we need at least to see what is actually possible for us to do (i.e., facts about human affairs matter). So Shermer’s list is entirely irrelevant to the question at hand, quite apart from the fact that some of the “facts” he lists are questionable on scientific grounds.
Next we are treated to the following q&a exercise: Shermer poses the question “What is the best form of governance for large modern human societies?” To which he immediately answers: “a liberal democracy with a market economy.” What is the evidence for such an answer? “Liberal democracies with market economies are more prosperous, more peaceful, and fairer than any other form of governance tried.”
Again, let’s stop and unpack. Assuming once more that the facts are correct, one could begin by inquiring about the concept of prosperity. How is this measured? Economically? By degree of access to health care and education? By measuring subjective happiness? Because all these criteria yield different answers and rank societies differently. And are we extending this analysis back in time? How far, and based on what data?
Liberal democracies (which, incidentally, is a term that has very different meanings when applied to, say, the United States or Sweden) very likely are more peaceful than countries with different systems of government. But “fairness” is a complicated — philosophical! — idea, which requires a sophisticated conceptual analysis before we can even begin to measure anything at all. And incidentally, why is fairness (presumably, across the board) a better criterion for human flourishing than, say, a guarantee of universal health care and education that comes out of a very skewed system of distribution of other goods? There are many subtle discussions to be had about fairness (for recent examples on this blog see here and here), and it is hard to see how those discussions could be carried out without philosophical engagement. More broadly, no philosopher since Heidegger has indulged in writings aimed at showing that tyranny is better than democracy (and even Heidegger was not exactly a representative sample). The interesting debate is about how to deal with contrasting values within the context of a liberal, multi-ethnic, democratic society. Again, it is not clear who exactly is the target of Michael’s criticism.
Shermer then goes on to add a market economy to the mix of his favorite ideologies, claiming that “it decreases violence and increases peace significantly” (hardly surprising, coming from a well known libertarian). Once more, without even questioning the empirical assertion, shouldn’t we at least admit that “market economy” is a highly heterogeneous category (think US vs China), and that some market economies decrease fairness, do not provide universal access to health care and education, lower workers’ wages, and overall negatively affect human flourishing? How should we rank our values in order to make sense of the data? How do the data by themselves establish a guide to which values we should hold? And why should we follow whatever the current science says, as opposed to having discussions about where we would like science and technology (and economics) themselves to go?
But in fact Michael and I very likely don’t disagree that much about this whole thing after all. At the end of his essay he says: “in addition to philosophical arguments, we can make a scientific case for liberal democracy and market economies ... in addition to philosophers, scientists should have a voice in determining human values and morals” [my italics]. Well, if the contribution of science to human well-being is in addition to that of philosophy, how exactly is the current state of affairs problematic? As an idea it goes back to Aristotle’s inquiries into human nature, and very few professional philosophers would argue that science (or “facts”) is irrelevant to what they do.
The problem when scientists and skeptics write about philosophy is that quite often they are simply not familiar with the pertinent literature. Somehow philosophy — probably because it is a broad “meta-discipline” whose purpose is to reflect on what other disciplines do — encourages this idea that one doesn’t need to read it before dismissing it (I’m looking at you, Sam Harris, Lawrence Krauss and Stephen Hawking). 
Let’s start with Shermer’s allegation that the is/ought problem is a “fallacy.” To begin with, here is exactly what Hume wrote about it, in the aptly titled A Treatise of Human Nature (1739):
In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, 'tis necessary that it should be observed and explained; and at the same time that a reason should be given; for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention would subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceived by reason.
Nowhere does Hume say that there is an in principle unbridgeable gap between is and ought. He is simply, very reasonably, pointing out that one cannot “imperceptibly” slide from one type of consideration to the other without providing explanations and reasons.
Second, Willard van Orman Quine, one of the most prominent philosophers of the 20th century, did much work to argue that there cannot be a sharp distinction between matters of fact (what philosophers call synthetic truths) and statements that are independent of empirical findings (so-called analytic truths, like those of math and logic). Which implies that many modern philosophers wholeheartedly agree that of course science has something to say about values.
However, it also follows from Quine’s work that making the fact/value distinction permeable can result in a surprisingly uncomfortable outcome for scientistically minded individuals, biting them in the ass, so to speak. You see, negating a sharp distinction between facts and values does not mean that the latter reduce to the former; it means that there is a complex interplay between the two. From there it’s only a short step to realizing that facts themselves are not immune from value judgments and filtering, which in turn means that a “just the facts, ma’am” sort of attitude is naive. It appears that science needs philosophy just as much as philosophy needs science, especially when it comes to clearly non-value-neutral issues such as justice, fairness, and human well-being.
Finally, Quine also pointed out a reason why science by itself is never going to be enough. All theories about the world are going to be underdetermined by the available data, meaning that there will always be more than one way to understand the meaning of “facts.” If this is the case, then we need extra-empirical considerations to make sense of those very facts (i.e., they don’t “speak for themselves”). Which is why careful reflection on meaning and logical implications (i.e., good philosophizing) will always be required.
Quine advocated for a strong “naturalistic” turn in philosophy, a stronger one than I would recommend, in fact (I'm writing a book about this...). But even his embracing of empiricism (and therefore science) still yielded a view of human knowledge as a complex web where facts and interpretations, provided by the natural and social sciences, are going to be in reflective equilibrium with contributions from non-empirical investigations, be they from math, logic or, yes, philosophy.
The emerging picture, then, is much more nuanced and interesting than any simplistic dismissal of philosophy a la Harris-Krauss-Hawking. Shermer, to his credit, comes closer to this nuanced call for an inclusion of science — together with philosophy — in our quest to make sense of the world and to make it a better place for ourselves. But there is no reason to be worried about scientists conceding any high (or low) ground to philosophers. It has never happened, it never will, and it isn’t what philosophers are asking for anyway.
Fortunately, there are skeptics out there who take a more nuanced approach to the philosophy-science-morality issue. A good example is represented by two essays keyboarded by Steve Novella at the NeuroLogica blog.