Institute for Ethics and Emerging Technologies


Effective Altruism: A Taxonomy of Objections

By John Danaher
Philosophical Disquisitions

Posted: Jan 17, 2016

The effective altruism (EA) movement has been attracting a lot of attention recently. Although EA ideas have been common in academic circles for years, two major books presenting the central tenets of the movement to the wider public were published in the past year. The first, from the movement’s godfather, Peter Singer, was called The Most Good You Can Do. The second, from the movement’s precocious young figurehead Will MacAskill, was called Doing Good Better. MacAskill’s book in particular received widespread media coverage, no doubt fueled in part by the impressive résumé of its young author.

I consider myself favourably disposed towards the EA movement. I have followed GiveWell’s donation recommendations in the past and will probably do so again in the future. But I am not uncritical of the movement. I find some of the advice unappealing (e.g. the ‘earning to give’ model); I worry about the evidential basis for some of the favoured causes; and I am sensitive to those who challenge its ideological presuppositions (even though I share many of those presuppositions).

That said, I am not well-educated on some of the ethical, methodological and practical aspects of the debate about EA. That’s why I welcome Iason Gabriel’s recent paper ‘What’s Wrong with Effective Altruism?’. It provides a detailed overview of, and critical engagement with, some of the leading objections to EA. It is remarkably fair-minded. After having read it, I have no idea whether Gabriel supports the movement or not. That’s always a good sign in my books.

Anyway, as part of my general effort at self-education, I want to follow Gabriel by looking at each of these objections in some detail. This will take me several posts. Today, I simply lay the groundwork for this more detailed analysis by doing two things: first, looking at the defining characteristics of effective altruism; and second, providing a taxonomy of the leading objections to EA.

1. Thick and Thin Effective Altruism

As with all growing social movements, the meanings attached to the term ‘EA’ are becoming slightly more nebulous over time. Gabriel thinks it is important to distinguish between thin and thick definitions of EA. This is a common category-distinction technique in philosophy. The descriptor ‘thin’ is used to denote an austere and bare form of a particular doctrine. The descriptor ‘thick’ is used to denote a more contentful and abundant version (i.e. one that comes with a lot more commitments and conceptual baggage).

The thin and thick forms of EA can be defined in the following manner:

Thin Effective Altruism: The bare notion that one ought to do the most good that one can do through charitable donation.

Thick Effective Altruism: Thin effective altruism plus some additional commitments. Most commonly, these include a commitment to welfarism (i.e. the idea that individual well-being is the ultimate good), consequentialism (i.e. the idea that optimising good outcomes is the appropriate measure of effectiveness), and evidentialism (i.e. the idea that charitable interventions should be assessed using robust and reliable forms of evidence).

The thin form is quite vague and general. It is non-committal about the underlying moral theory one uses to determine what is good. It is, in principle, acceptable to many different schools of thought. But vagueness can be a vice when you are trying to develop a social movement. If too many people have different notions of what is good and what is effective, then they may work at cross purposes. In this respect, thin EA reminds me of Thomas Aquinas’s famous first principle of practical reason: do good and avoid evil — a platitude if ever there was one. (Note: I appreciate that Thomist philosophy adds more flesh to these bare bones.)

The thick form is more common in the literature. Certainly, if you read MacAskill’s book you will find a pretty clear endorsement of the three additional commitments mentioned in the definition of thick EA: welfarism, consequentialism and evidentialism. And, as I just suggested, it makes sense for EA to be articulated in this thicker form, not just for moral reasons, but also if the goal is to develop EA into a coherent mass movement.

It’s for this reason that Gabriel, in his treatment of the objections to EA, focuses solely on the thick version. I’ll be doing the same.

2. A Taxonomy of Objections

Okay, now that we have a clearer sense of what EA is all about we can proceed to consider the objections. These have proliferated over the years, but some have been particularly prominent in the less favourable reviews of MacAskill’s book. Gabriel suggests that they fall into three main branches: justice objections; methodological objections; and efficiency objections. Within each of these three branches there are three distinct forms of the objection, making for nine in total. This is illustrated in the diagram below.

I’ll go through each branch of this taxonomy in detail in future posts. For now, I’ll give a general overview, starting with the justice branch:

Justice Objections: The welfarist/consequentialist philosophy favoured by EAs renders the movement blind to important, justice-related values (i.e. values that affect how the benefits of charitable interventions get distributed and how they intersect with other moral concerns). There are three more specific forms of this objection:

Equality: EA only values equality in instrumental terms, i.e. it is not sensitive to the intrinsic good of equal distributions of welfare.

Priority: Although EAs purport to give priority to the poorest of the poor, they are unlikely to do this in practice. The desirability of a charitable intervention is, for EAs, always determined by its effectiveness (i.e. the marginal gain in welfare) and policies focusing on the worst off are unlikely to be the most effective according to this metric.

Rights: EAs are consequentialists and so do not give adequate weight to individual rights in their moral assessments. They may view the protection of rights as a convenient heuristic for achieving welfare gains, but they are willing to override rights when the consequences justify this. In short, EAs do not treat rights as trumps.

This brings us to the methodological branch. This may be the one I find most interesting:

Methodological Objections: The tools EAs use to assess and evaluate charities end up biasing them in unfavourable directions. Three of these biases are apparent in the work of EAs:

Materialism: EAs favour causes with tangible, quantifiable and easily measurable goals. This biases them away from causes with less tangible benefits. It may also bias them in favour of questionable, but evidentially tractable, metrics of evaluation (e.g. DALYs).

Individualism: EAs favour causes with benefits for individuals, leading them to ignore or undervalue community-level goods. But these should not be ignored as they form an important part of the good.

Instrumentalism: EAs are focused on achieving certain outcomes, not so much on the procedures that lead to those outcomes. This often leads them to favour technocratic as opposed to democratic interventions, on the grounds that the former are cleaner and more efficient than the latter. This ignores the value of democratic procedures and, possibly, contributes to bad governance in donee countries.

And then, finally, we have the efficiency branch, which is also pretty interesting to me as I find myself intuitively at odds with some of the advice given by EAs but unable to fully articulate my intuitive concerns:

Efficiency Objections: The advice given by prominent EA organisations is less robust and less efficient than EAs suppose. This is important because the EA movement has taken on the social responsibility of providing such advice to its adherents. Again, this objection comes in three main forms:

Counterfactuals: EAs often rely on counterfactuals when giving advice. For example, their earning to give advice is premised on questions like ‘what difference would it make if I worked with a charity and helped people directly versus working in finance and donating the money I make?’. But they are inconsistent in their treatment of counterfactuals, sometimes neglecting similar questions like ‘what difference would it make if I donated to one of GiveWell’s top charities versus making another donation?’

Motivation/Psychology: EAs have great faith in the power of reason and rationality to change how people behave, but they may be wrong to do so when the movement is so young and the existing adherents so self-selecting. This may affect their ambitions of creating a truly mass social movement.

Systemic Change: EAs are often quite reactive in their advice: favouring neglected causes because of their marginal utility, but moving on to other interventions when a sufficient mass of people address the previously neglected cause. This leads them to ignore or misvalue the importance of systemic change in making the world a better place.

For what it’s worth, the last of these criticisms seems to have been the most prominent in the past year or so. Many left-leaning critics of EA challenge the way in which the movement acquiesces in the global capitalist status quo. There has been some interesting fightback on this from EAs.

I’m sure this brief overview of the leading criticisms is frustratingly vague. I have no doubt that some readers are longing for more detail. Never fear, I will provide it in later posts. If you cannot wait that long, I recommend reading Gabriel’s full paper.

John Danaher holds a PhD from University College Cork (Ireland) and is currently a lecturer in law at NUI Galway (Ireland). His research interests are eclectic, ranging broadly from philosophy of religion to legal theory, with particular interests in human enhancement and neuroethics. John blogs at Philosophical Disquisitions. You can follow him on twitter @JohnDanaher.


The most obvious (for me) objection to EA does not seem to be present in the list.

This objection is that methods of making lots of money (or trying to) often exploit or even oppress others. Examples include the “financial industry” (i.e., fraud, banksterism and tax-dodging), digital technology (foisting on society systems that subjugate users), and even Big Pharma (corruption of doctors, research, laws).

The harm Gates has done with Windows, and the harm Zuckerberg has done with Facebook, cannot be compensated by the donation of part of what they gained.

This is a fair point. Without commenting on particular individuals, I’d be worried about the earning to give advice and how the outcomes are being assessed.

But I think that criticism could be factored into the third category of objections, particularly under the counterfactual wing, since it has to do with how EAs weigh the alternatives when giving advice: the value of working in such an industry and donating to good causes versus working in a more benign industry and donating less money. It would probably be difficult to weigh the competing harms and benefits though, which gets at the methodological problems with EA (and consequentialist philosophies more generally).
