Effective Altruism has Five Serious Flaws - Avoid It - Be a DIY Philanthropist Instead
Hank Pellissier   Jul 13, 2015   Ethical Technology  

In an earlier essay I recommended the Effective Altruism (EA) movement, the humanitarian crusade spearheaded by philosopher Peter Singer.

Today, I retract my support. Although EA’s core intention is morally commendable - donating “expendable income” to world-improving causes - there are multiple details in its strategy and organization that are sloppy, simplistic, ethically dubious and downright foolish.

It pains me to reach this conclusion. Here’s how it happened:

I happily devoured Singer’s initial book on the topic, The Life You Can Save: How to Do Your Part to End World Poverty. Enthusiastically, I dug into his follow-up, The Most Good You Can Do: How Effective Altruism Is Changing Ideas About Living Ethically. About three chapters in, I had a few quibbles, but I was feeling so ethically enamored that I enrolled in Singer’s free Princeton online course on Effective Altruism, and I joined the Effective Altruism Bay Area Meetup Group - to deepen my EA education and commitment.

Diving into Princeton’s online coursework, I quickly read all the Week One assignments. I was zooming along the straight-and-narrow path to becoming a full-fledged EA disciple, but then… I meandered over to the Effective Altruism Forum introductory page, which displayed 22 short essays; I read them all immediately, with increasing dismay.

The “minor quibbles” I originally had with EA flared into major disappointment. I backtracked; I reversed; I rebelled. Now I am not going to the Meetup, I’m not advancing to Week Two in the online course, I’m tossing The Most Good You Can Do into my Kindle archive. I’m finished with EA.

Here are the five faults I find with Effective Altruism:

1. EA’s Over-Reliance on Charity Navigator, Give Well, and other NPO Evaluators
2. EA’s Stance that “Earning High and Giving Big” is Superior
3. EA’s Too-High Consideration for Animal Rights
4. EA’s Weird, Wrong Alliance with MIRI (Machine Intelligence Research Institute)
5. There’s an Alternative to EA that’s Far Superior: I call it “DIY Philanthropy”

I need to disclose my own activities now: I’m the founder/director of a small nonprofit that raised $40,000 last year for orphanages, schools, and clinics in Uganda, the Philippines, and the Congo.

Let’s examine EA’s problems, one at a time:

FLAW #1: Over-Reliance on Charity Navigator, Give Well, and other NPO Evaluators

Singer and EA advise giving money only to organizations that spend 75–100% of their budget on services. Worthy groups - like the Against Malaria Foundation - are identified by “evaluators” like Charity Navigator and GiveWell, who spend millions conducting their research.

There are two humungous flaws in this simplistic and elitist procedure:

1. The effectiveness of a charity’s service depends on more than the service percentage of its budget. If 60% of one NPO’s funds provides a greater good - dollar for dollar - than 90% of another NPO’s budget serving the same community, it’s more effective and ethical to fund the higher-quality service.

2. GiveWell and Charity Navigator both have a “Bigger is Better” bias that cripples grassroots charity startups. Well-deserving groups like mine that spend 100% of our income on services are off the radar of the evaluators because we’re too new and too small. Charity Navigator only recommends long-lived humanitarian behemoths with total annual revenue of more than $1,000,000 that have been in existence for at least seven years.

EA’s recommendation that philanthropists fund only the Big Old Charities cripples small newbie groups like mine. My NPO has lost several potential donors who backed out when they couldn’t find our name listed on GiveWell or Charity Navigator. What EA has done is “corporatize” the humanitarian field; it has handed large-budget NPOs a massive advantage over startup charities. EA is “centralizing” the giving field, enabling a select few groups to monopolize it.

Do I believe tiny humanitarian groups provide better services than the massive bureaucracies? YES. For real “effectiveness” in altruism, I suggest seeking out the solitary individuals or small contingents who are working on your favorite cause.

Hannah Smith, for example, is a blonde UK do-gooder who helps destitute war orphans with PTSD in the Congo via her Congo Orphans Trust. She trucks life-saving supplies into the dangerous region, to mud-and-wattle orphanages packed with hungry kids with haunted eyes.

Thousands of “DIY Humanitarians” like Hannah exist. They will answer your emails, they will thank you profusely, and they will deliver 100% of whatever you give them, because they do it out of care, not for a salary. These people run small Indiegogo and GoFundMe campaigns, or they simply solicit their friends, communities, or churches. Peter Singer and the EA movement ignore and injure their potential to contribute by promoting Big NPOs only.

To learn more about “start-up” humanitarianism, I suggest reading A Path Appears by Nicholas Kristof and Sheryl WuDunn.

FLAW #2: EA’s Stance that “Earning High and Giving Big” is Morally Superior

EA crudely seems to regard CASH DONATED as carrying the highest moral value. The EA-promoted essay “To save the world, don’t get a job at a charity; go work on Wall Street” by William MacAskill says, “while researching ethical career(s)… I concluded that it’s in fact better to earn a lot of money and donate a good chunk of it…you’ll have made a much bigger difference.” Doing good deeds in your vocation, claims MacAskill, is probably inferior: “if you decide to work in the charity sector, you’re rather limited.”

His reasoning - endorsed by EA - holds that it is ethically superior, for example, to take a $200,000-a-year job on Wall Street and donate 50% to charity than it is to teach high-school math in the inner city for $50,000 and donate $5,000 to charity. He’s wrong in this inhumane assessment, for two reasons:

1. The happiness of the individual funder is disregarded. Of course it is wonderful that the additional $95,000 gained might be curing malaria, but it’s callous to suggest that everyone in the developed world is ethically required to devote themselves to high-salaried occupations that they might hate. The giver’s life and need for happiness also have value. Mandating that developed-world people labor for others in occupations that might make them miserable is self-righteous and unethical.

2. EA disregards the “human value” of an occupation. The math teacher is unable to donate $95,000 annually to charitable causes, but he is, every school day, conveying information on an important topic and serving as a role model and support for young adults. He is in a position to touch, change, and improve lives. Maybe he will inspire his students to quit drugs, leave gangs, go to college. Wall Street sharks aren’t doing that; they’re usually just helping the 1% maintain their privileged status. Is the math teacher’s contribution to helping humanity less than the Wall Streeter’s, as MacAskill asserts? No. I believe the math teacher’s contribution is greater.

FLAW #3: EA’s Too-High Consideration for Animal Rights

Yes, I’m an omnivore, and yes, I’m a “Speciesist.” I’m a Humanitarian, not an Animalitarian. My priority is helping humans first; I think it’s the ethical and sensible stance. I deplore factory farming, but it ranks below human slavery and genocide on my list of concerns.

Singer/EA, not surprisingly, puts Animal Causes on the list of most-ethical concerns. Animal Rights NPOs even have their own evaluator. I don’t support this position; I find it misguided. Hundreds of millions of dollars are already donated to beast-centric concerns that pamper orphan tigers, for example, so they can return to the wild and slay ungulates. I feel a wee bit of compassion for these creatures, but the vast majority of my empathy is reserved for Homo sapiens who are hungry, diseased, and uneducated.

Truth is, I don’t think it’s ethical to donate thousands of dollars to furry-friendly organizations like Maddie’s Fund in San Francisco, where waiting-for-adoption felines recline on comfy furniture, licking their lips while watching songbird videos on their own television set. Meanwhile, right outside, homeless humans dig through dumpsters looking for slabs of cardboard to use as a mattress for the night.

Helping Humans First is central to my moral code. Singer elevates animals to a level that is unacceptable to me; this promotion detours money away from needy people. I find that crazy and shameful.

FLAW #4: EA’s Weird, Wrong Alliance with MIRI (Machine Intelligence Research Institute)

MIRI is a Berkeley-based research team previously titled SIAI (the Singularity Institute for Artificial Intelligence). MIRI has a history of arrogance and aggressiveness, justified in their minds, I suppose, by their belief that the future of the world depends on their ability to help create Friendly AI. MIRI has the financial support of Peter Thiel, who is worth $2.2 billion on Forbes’ Midas List. MIRI isn’t curing disease or helping the poor; its budget pays the salaries of its aloof, we’re-more-rational-than-you researchers. I’m dismayed that MIRI has infiltrated EA.

Two of the recommended introductory essays on the Effective Altruism organization’s site are written by MIRI members. Posted second, right under Singer’s preface article, is a math-wonky article by SIAI/MIRI founder Eliezer Yudkowsky. Luke Muehlhauser, MIRI’s recent Executive Director (who left last month to join GiveWell), wrote a let’s-set-the-agenda article further down the list, titled “Four focus areas of effective altruism.” He places MIRI in the third focus area.

MIRI/SIAI tried to “take over” the transhumanist group HumanityPlus 3.5 years ago, when four SIAI members ran for H+’s Board. SIAI ran a sordid, pushy, insulting campaign: bribing voters, accusing opponents of “racism,” and deriding Board members as “freaky… bat-shit crazy [with] broken reasoning abilities.” MIRI failed in its attempt to colonize H+, but it has successfully wormed its way into the heart of EA.

A colleague of mine (who asked me not to disclose their identity) attended the 2014 EA Summit in San Francisco and afterwards was of the impression that “MIRI and CFAR (the Center for Applied Rationality) are essentially the ‘owners’ of EA. EA as a movement has already sold itself in deals to devils.” This is surely an exaggeration for international EA, but in the SF Bay Area, MIRI’s presence within EA is uncomfortably strong.

FLAW #5: There’s an Alternative to EA that’s Far Superior: I call it “DIY Philanthropy”

Effective Altruism provides too much advice and too many judgmental opinions on whom, how, and why to fund. This renders us passive, because EA insists that it has already done the research and ethical thinking for us.

Compassionate people don’t need Big Brother informing them what is right or wrong, or how to help others. EA is just an obstacle in the path of a far better activity: DIY Philanthropy.

I won’t provide you with lengthy instructions detailing how to accomplish this. Being a DIY Humanitarian means figuring it out yourself. My only hint is: be a Hannah Smith. She wants to help war orphans in the Congo, so she helps them.

You don’t need Peter Singer and EA telling you how to be charitable.

Let your own brain and heart be your guide.


A Vox article that supports my POV can be found HERE

Another IEET essay on Effective Altruism can be found HERE

An essay on DIY Philanthropy can be found HERE

Hank Pellissier serves as IEET Managing Director and is an IEET Affiliate Scholar.


Hey Hank,

Thanks for the criticism of effective altruism, and thanks also for organising the Transhuman Visions conferences last year, at which I saw a lot of excellent speakers and generally had a great time.

I think some of your criticism is fair, but let me try to explain one part of it that I think is misguided: Peter Singer is not a singular authority on how to do good effectively, nor is Eliezer Yudkowsky, nor are the essays that I compiled at the Effective Altruism Forum. If you don’t like what some groups are saying, then the charitable thing to do is to focus your feedback on those individuals, rather than on the broader tent of people who, for all the criticism of their excessive earnestness or sanctimoniousness, are fundamentally trying to do good things. Even if effective altruists have erred on some occasions so far, building a large and ever-more-informed network of philanthropic and ambitious people seems like a worthwhile project.

Onto your specific criticisms:
1. Evaluators:
The criticism is that effective altruists apply procedures that are elitist - with a “Bigger is Better” bias - and overly simple, using Charity Navigator-style admin-to-program ratios. Both of these criticisms land quite wide of the mark.

In the case of admin-to-program ratios, this is something that effective altruists have themselves most harshly criticised. One essay from my collection that they frequently cite is Dan Pallotta’s, and he does not mince words:
> ‘Imagine coming out of a shoe store with a brand new pair of shoes full of holes, and whispering to your friends, “You wouldn’t believe how low the overhead was on these shoes.” That’s exactly what Americans are doing with hundreds of billions of annual charitable donations. We take huge pride in giving to charities with low overhead without knowing a damned thing about whether they’re any good at what they do.’
This was a success story, in the sense that Charity Navigator initially responded with hostility to the effective altruist movement (a far cry from effective altruists being overly reliant on it, as you suggest!), but eventually came around to a position more sympathetic to Pallotta and effective altruists, co-authoring an open letter on the importance of effectiveness (a).

In the case of a ‘bigger is better’ bias, it’s true that GiveWell has focussed on projects that have a large funding gap. This makes some sense: their research is costly, and if there are only so many charities they can evaluate, they might as well target some of the bigger ones, although as their resources grow they can become more comprehensive. They have also begun to perform research to help with the creation of small but scalable charities, in consultation with potential founders (b) - a sign that ‘bigger is better’ is not a belief GiveWell currently holds, whether or not it ever was.

2. EA’s stance is that earning high and giving big is superior.
The criticism here is roughly that effective altruists think that big-earning big-donors are better people. This would be unsavoury if it were true, but happily it’s not. Earning money in finance to give it away is not going to be for everyone. For those who genuinely do give away a large fraction of their earnings, many lives are saved - more than one imagines ordinary corporate finance work could possibly achieve.
> “If someone joined Goldman and donated half of his earnings to Against Malaria Foundation, that would be about enough to save 100 lives per year (or more accurately, saving 4000 QALYs), plus likely have substantial positive flow-through effects1. For Earning to Give at Goldman to be net harmful, the marginal employee would need to be causing the death of a hundred people each year. This would mean that Goldman Sachs employees are several orders of magnitude more deadly than American service people in Iraq.” (c)
This means that working in finance to give can be genuinely praiseworthy. We don’t have to feel threatened by that. Fair play to them: these kinds of people are making a seriously big difference in the world. But neither is it the only way for people to do good. Will MacAskill has said that the big-earning route is a good fit for maybe 15% of people. Personal preferences and comparative advantages come into play, painting a much more complex picture.
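The arithmetic behind the quoted “100 lives / 4000 QALYs” figure can be reconstructed as a back-of-envelope sketch. All three inputs below are illustrative assumptions chosen to reproduce the quoted totals (roughly in line with GiveWell-style estimates of the period), not figures from the quote’s source:

```python
# Back-of-envelope reconstruction of the "100 lives / 4000 QALYs" claim.
# All inputs are illustrative assumptions, not official GiveWell figures.
cost_per_life = 3_400      # assumed $ to save one life via bednet distribution
qalys_per_life = 40        # assumed quality-adjusted life-years per life saved
annual_donation = 340_000  # assumed half of a high finance salary

lives_saved = annual_donation / cost_per_life
qalys_gained = lives_saved * qalys_per_life
print(f"{lives_saved:.0f} lives, {qalys_gained:.0f} QALYs per year")
```

Under these assumed inputs the donation works out to 100 lives (about 4000 QALYs) per year, matching the quoted estimate; the real numbers depend entirely on the cost-per-life figure one accepts.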

3. EA cares too much about animal rights
It’s true that Peter Singer cares a lot about animal rights, and more than me. Maybe you don’t, and that’s ok. It’s up to you to decide what good you want to do in the world, or what counts for you as effectiveness.

4. EA is too allied with MIRI
MIRI is not formally affiliated with effective altruism or represented on the board of GiveWell (d) (mostly investors and businesspeople) or the Centre for Effective Altruism (mostly philosophers from Oxford). I can’t possibly comment on past politics of MIRI and futurism, but what you describe has clearly not occurred with the effective altruist community. Beyond that, you report that MIRI believe that they own effective altruism in some way. Clearly, they do think that their altruistic work is highly effective, but this is not news. There’s also some kind of Bay-Area-centrism going on here. MIRI’s cultural reach is strongest in the Bay Area, whereas effective altruism has a stronger reach in Europe, Canada, the East Coast and Australia, which should be reassuring. It’s also not news that Eliezer is a charismatic, though polarising essayist, who has said some interesting things in his time—although many aspiring effective altruists would take a more adverse view than mine! MIRI/SIAI has met harsh criticism from Holden of GiveWell before. Basically, you can be seeking effective philanthropic opportunities, but not think that MIRI is one of them, and that’s simply fine.

5. It’s better to figure out philanthropy yourself.
You say:
> “My only hint is: be a Hannah Smith. She wants to help war orphans in the Congo, so she helps them.”

I would wish Hannah luck, but I also think it would be useful for her to be linked in with informed people who share her goals, so they could bounce ideas off one another about how each could do their jobs more effectively.

You say “let your own brain and heart be your guide”. I’d say: don’t go it alone. Figuring out how to do good philanthropy is an enormous problem, so whether or not it’s from within this particular community, get support so that your goals are effectively realised.


Hi Ryan—

thanks for your response to my article

First of all I want to commend you on the great work that you do. Congratulations!

I have some tips so that your group doesn’t get criticized by people like me.

First of all, be careful what goes up on your website - especially on the Introductory Page.

People will perceive, sensibly, that this is how you want to represent your values to the public.

That Luke Muehlhauser article that defines 4 Focuses and lists the evaluators as #2, MIRI as #3, and Animal Rights as #4 - that looks like a statement of EA’s positions. I have heard repeatedly that it is not, and that most EAs disregard items #2, #3, and #4 - but if that is the case, I suggest you remove Luke’s article and put up something else that more precisely expresses your goals.

Positioning Eliezer’s article directly beneath Peter Singer’s leadoff comments conveys the message that MIRI is extremely important to EA’s philosophy. If you don’t want to convey that message, move other articles up and move Eliezer’s down, down, down.

Same with all the other controversial articles. I strongly suggest that an introductory page include only the articles on which there’s strong agreement within EA - a near-consensus.

And finally, I of course totally disagree with your apparent stance that little charities have too-much-to-learn and can’t-really-help and really have-to-work-with-bigger-smarter-groups, etc., otherwise they cannot be effective - so you don’t recommend them, or evaluate them, and so on.

I learned how to help very easily, so did Hannah Smith.

You are sending out a message that it is near-impossible to be a humanitarian on one’s own, so “don’t even try.”

That is a disservice and it is inaccurate.

What I suggest is this:

Instead of continuing to promote the idea that DIY charitable giving is inefficient and impossible to do successfully, why don’t you post some Do-It-Yourself Advice? I’d be happy to write it.

That way, instead of disempowering would-be solo humanitarians, you could empower them.

thanks again


Ryan - continued…

We now have a world with crowdfunding platforms like Indiegogo and GoFundMe, home 3D printers, and online tutorials that can teach anybody anything, anywhere.

Wonderful ways to decentralize power and information.

Effective Altruism can do this too.

EA could enable individuals to launch charity start-ups by providing them with information - EA should do this, but you’re not -

Instead you’re sending out the message that giving well is very difficult to do - too difficult for one person or a small group - so what you should really do is check in with our friends the evaluators, who will tell you who to give your money to.

This approach that EA is taking is centralizing power and disempowering individuals.

That is my main gripe with EA.

I’d be very happy if you altered your strategy in the ways I suggest.

Hey Hank,

Thanks for your comment.

Useful feedback about the introduction page. A lot of the articles are pretty polarising, and maybe too much so. I’ll think about this when I position them in the next version.

Re DIY charities, I think some people are going to start charities alone no matter what I tell them, and those charities will vary in how useful they are. My question is how we can better equip them to be better. If you knew that $1000 could save one life through surgery for an AIDS-associated illness, or save 10-20 lives through distributing condoms or educating at-risk groups, I know which charity I’d rather start. It seems better to do or read some research than to go on instinct alone, which won’t always tell you which of those things is better.
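The $1000 comparison above can be made concrete as a toy cost-effectiveness calculation. The per-intervention rates below are just the illustrative numbers from that paragraph (using the midpoint of the 10-20 range), not real charity data:

```python
# Toy cost-effectiveness comparison using the illustrative figures above
# (assumed numbers for the sake of the example, not real charity data).
cost_per_life = {
    "surgery for AIDS-associated illness": 1000.0,   # $1000 saves 1 life
    "condom distribution / education": 1000.0 / 15,  # midpoint of 10-20 lives per $1000
}

def lives_saved(intervention: str, dollars: float) -> float:
    """Expected lives saved by putting `dollars` into the given intervention."""
    return dollars / cost_per_life[intervention]

budget = 1000
for name in cost_per_life:
    print(f"${budget:,} on {name}: {lives_saved(name, budget):.0f} lives")
```

The point of the sketch is only that the same dollars can differ by an order of magnitude in expected impact, which is exactly the kind of comparison instinct alone won’t surface.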

So my DIY advice would be for people to decide what they want to achieve, then analyse the options to consider which intervention will take donors’ dollars the furthest. And I think it would be foolish to throw away work that others have already done to answer that question!

Good luck.

Interesting article, thank you. Here are a few criticisms:

1) The effective altruism movement does not rely upon Charity Navigator, and, as Ryan Carey pointed out, certainly views admin-to-program ratios as a simplistic measure, just as you do. Also, I’d say that the charities recommended by meta-charities such as GiveWell and Giving What We Can are comparatively small.

2) As for earning to give, I think that the effective altruism movement generally recognises that personal circumstances massively influence which career one goes into, and I wouldn’t say that people think that people who earn to give are “better people”. As the title of Prof. Singer’s book suggests, it’s about the most good that *you* can do. If going into finance isn’t for you, for instance, then you’re probably not going to enjoy it and are probably going to give less as a result.

3) On nonhuman animals, I’d say that the focus on animal suffering of many effective altruists is entirely justified. Speciesism is, in my view, unjustifiable, and many effective altruists, guided by reason and evidence, see no reason to discount the suffering of other animals simply because they’re a different species. You might disagree, and some effective altruists disagree as well. Again, it’s about the most good you can do with your moral beliefs.

As you noted, Animal Charity Evaluators is the main effective altruist meta-charity for animal suffering. But they mainly endorse charities which predominantly focus on ending factory farming, promoting veganism, or at least improving the conditions of animals in factory farms and the meat industry in general. So effective altruists probably wouldn’t donate to charities that pamper tigers or felines.

Moreover, donating to charities that promote vegetarianism and veganism may actually help humans in the long run, given that the meat industry is a major greenhouse gas emitter, that it is at least in part responsible for the spread of antibiotic resistant bacteria, and that 40% of the world’s grain, which could be fed to starving humans, is instead fed to the animals in the meat industry.

4) Again, it’s about the most good you can do with your beliefs. Personally, I don’t donate to MIRI, and many effective altruists don’t. There’s no necessity to donate to animal welfare organisations or MIRI.

5) It would be nice if we all had the time to evaluate which charities and causes are the most cost-effective, but we don’t. In science, we often rely on experts in other fields, and studies conducted by others, in order to make judgements. The massive field of philanthropy, I think, is no different.

Then again, one can, if one wishes, use evidence-based methods to evaluate charities, as GiveWell have acknowledged. They even have a “do-it-yourself” advice section on their website - which is what you’ve recommended in your reply to Ryan’s comment. (1) In the end, what largely matters in Effective Altruism is that one is supporting the causes that do the most good in the world.




As I understand it Effective Altruism is really just saying that it’s good for people to give more, and to think about the effectiveness of their giving. So should you eschew EAs, or should you join in and influence the community to be more diverse? I would love to have someone else around who doesn’t idolise earning-to-give, who eats meat, doesn’t care for Charity Navigator, has no interest in MIRI, and gives to lots of little-known charities!

Thanks Ryan and geniusofmozart and Sanjay for the additional info. I do hope someday soon EA can offer DIY info on their website.

The reason I’m DIY in charity is because I like physically being the “Giver” - 

The experience of just writing a check for a far-away concern is not satisfying to me - I prefer the hands-on empathetic connection I feel in actually visiting those who need help, giving them clothes, food, money, building them clinics, teaching their children.

There are - I believe - many people like me who feel numb just giving cash to a middleman, but they’d happily give the shirt off their back and all the money in their wallet to someone they meet who desperately needs it.

In our developed world of increasingly artificial experiences, I believe there’s a great need to sensorially connect in real time and real space with impoverished people who need our help and want to convey their appreciation. There’s a huge trend, for example, in Volunteerism.

I’m hoping EA recognizes this need and its value.


I posted another IEET essay on Effective Altruism - much more positive - HERE

Hi Hank. I’ve been reading this newsletter and your writing for a while. This is my first comment. Wish me luck.

Others have responded critically to Flaw #2 already, but I want to elaborate a bit because I think this gets to the core of EA. I don’t agree that EA regards “CASH DONATED as containing the highest moral value”—at least, not without analysis. It’s been pointed out that certain life-saving measures or resources cost specific amounts. This makes it possible to quantify, to a certain extent, the amount of good wrought by dollars; hence EA’s concern with charity evaluation, fraught as that practice may be. I agree that an over-reliance on third-party charity evaluators and the metric of service percentage is problematic, but the inspiration behind it—to see that your dollars actually do some good—is laudable.

Is earning/giving more ethically superior to an altruistic lifestyle, hobby or occupation? It may be. I don’t believe a serious EA proponent would say so categorically, but the point of EA is to escape our biases and recognize when it is so. Our moral instincts are not perfect. We prefer to help those we know, those with moving stories, those who look like us or remind us of ourselves. We give to vanity causes and alma maters. We don’t give enough to strangers—especially strangers in the poorest parts of the world, whose lives are most profoundly affected (and often saved) by modest acts of giving.

Let’s consider your examples of the Wall Streeter and the inner-city math teacher. There’s a lot of speculation and moralization being smuggled in here. The teacher is someone who might “inspire his students to quit drugs, leave gangs, go to college”; the Wall Street guy is a “shark” who definitely won’t do any of that. The Wall Streeter sounds like he hates his job, but this possibility isn’t considered for the math teacher. I share these biases with you, but it’s important to recognize that they are biases. Some teachers suck and don’t inspire anything but a distaste for school, while inspiring people can be found in any field. I can personally confirm that at least one teacher I know hated her job. These hypotheticals don’t really tell us anything except what we’re likely to assume about these careers and the people that have them.

One problem here is that a comparison of moral choices mutates into a comparison of moral agents, and in both cases the comparison is simplistic. A guy working on Wall Street who makes six figures and gives half his income to a good cause—i.e., not his college’s football team—might be doing more for the world than a passionless teacher that can’t be fired. These are moral agents and we should consider their cases individually, rather than assume every teacher is more altruistic or contributing more than every banker. The ethical value of deeds/jobs/etc. isn’t incalculable in principle, even if it’s something we could never calculate in practice. The focus in EA on quantification and data is a bit like the drunk looking for his keys only under the streetlamp, but it’s a move in the right direction from uninformed intuition.

Re moral choices, I don’t think any serious EA subscriber dismisses individual happiness. If only for practical concerns—sustainability, depression, burnout—the happiness of the agent has to be considered. At any rate, I think the number of people that feel equally drawn to teaching high school math and working on Wall Street is vanishingly small. People have different talents and interests. They also feel the moral emotions to different degrees and in unique ways. Some feel a strong urge to help people directly, to be in the trenches. Others are disinclined to interact with people directly but nevertheless want to contribute in some way. Because of our evolved intuitions, the hands-on group will always appear to be the more altruistic one; but in some cases, distant-yet-generous actors might actually be helping more people or helping people more. The point of EA-style reasoning isn’t to counsel people away from service and into “heartless,” lucrative professions. It’s to correct for our biases. It’s to recognize that some aspects of morality are counterintuitive, and that there are more (and sometimes better) ways to do good in this world than the ones usually on offer.

I’m a strong supporter of Hank’s DIY model for giving. But it’s not for everyone. For those who are not inclined to go it alone, EA provides a reasonable alternative. EA doesn’t appeal to me personally because I have an anti-organization bias and am not a believer in evaluation studies (it is frequently too hard to quantify some of the variables). With cash giving, however, a reasonable evaluation study might be possible. EA’s prejudices toward large humanitarian organizations are also troubling.

I would also hope that a person wouldn’t work a high-paying job that she hated just so she could give more.  That seems like a waste of a life.

It seems like in an organization like EA it would be better to separate the humanitarian giving and the nonhuman animal issues and programs.

Hi Roger—IMO, you are the inspirational DIY Philanthropy example. What you do in Uganda and other nations is exactly what I think would make many charitable people happy, especially charitable transhumanists. You hand-deliver educational supplies, you build swing sets for playgrounds, and you meet the people you help and establish long-term relationships with them, so you can bring them what they need next time.

Skot—I disagree primarily with the info in your 2nd paragraph, and you “lose me” when you talk about recognizing “our biases.” Your approach is analytical, rational, etc., and I see charitable giving as an emotional, empathetic activity. I also disagree with your assertion that we need to “give to strangers.” I prefer the DIY approach, where contributors meet the people they help.

I think, after hearing from EA enthusiasts, that EA works great for some people, but not for many others, including me.


Can you specify what you disagree with in my second paragraph? I’m not an expert on EA, and I hope to come to a better understanding of it through dialogues like this one.

As for rational giving vs. emotional giving, this is a contest that need not occur. If it’s empathy or some other emotion that drives you to give, that’s terrific. The value of the EA lens is that it informs emotion with relevant facts and figures. Paul Bloom gives the example of giving money to child beggars who are part of a labor ring. That money is going to their boss, not to help the kids. It would be better to give money to something like Oxfam, but doing that doesn’t press the emotional, “I’m a good person” button. There may be no conflict between getting that emotional reward and doing real good in the work you’re doing, in which case I’d say you are being effectively altruistic. But for those who just thoughtlessly write checks to feel better about themselves, EA is a valuable corrective.

Re giving to strangers, I don’t mean that you can’t or shouldn’t meet the people you help. It’s just a sad fact of this world that most of the people privileged enough to be charitable are estranged from the people that need charity most. So if you’re trying to do the most good—either helping in person or donating remotely—you’re probably helping strangers.

Here’s another article, published quite recently, that offers the same criticism of MIRI’s influence on Effective Altruism:

One article you might find interesting is:

Everyone needs some level of fuzzies in order to remain motivated. Here the suggestion is that you optimise for both of them separately.

