Effective altruism
First world trolley problems
June 6, 2017 — December 5, 2023
A disruptive-sounding branding of “empirical charity”. Largely utilitarian, with the strengths and weaknesses of utilitarianism.
1 Trade-offs
The central thing, AFAICT, is thinking about opportunity costs.
Bjørn Lomborg, for example, gives an eloquent justification of the need to account for opportunity costs (a dollar spent saving rich people from cancer is a dollar not spent saving poor people from malaria) but then makes IMO abysmal recommendations for optimality. Don’t get me started on his assessment of tail risk. We need a better Bjørn Lomborg. What is the opportunity cost of keeping this Bjørn Lomborg? The ROI on investing in better Bjørn Lomborgs?
This page mostly exists to bookmark resources various better Bjørn Lomborgs have littered around the internet.
2 Donating time
Consider 80000hours, a career-advice site, which I include because I find their considered analytic posturing sweet if awkward; it reminds me of over-earnest boyfriends, e.g.
Which would you choose from these two options?
- Prevent one person from suffering next year.
- Prevent 100 people from suffering (the same amount) 100 years from now.
Most people choose the second option. It’s a crude example, but it suggests that they value future generations.
If people didn’t want to leave a legacy to future generations, it would be hard to understand why we invest so much in science, create art, and preserve the wilderness.
We’d certainly choose the second option. […]
First, future generations matter, but they can’t vote, they can’t buy things, and they can’t stand up for their interests. This means our system neglects them; just look at what is happening with issues like climate change.
Second, their plight is abstract. We’re reminded of issues like global poverty and factory farming far more often. But we can’t so easily visualize suffering that will happen in the future. Future generations rely on our goodwill, and even that is hard to muster.
Third, there will probably be many more people alive in the future than there are today.
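A back-of-envelope reading of that thought experiment (my arithmetic, not 80000hours’): choosing the second option amounts to discounting future suffering at less than roughly 4.8% per year.

```python
# Break-even pure time-discount rate r for the two options above:
# prefer "100 people in 100 years" over "1 person next year" iff
#   100 / (1 + r)**100 >= 1 / (1 + r),   i.e.   (1 + r)**99 <= 100
r_break_even = 100 ** (1 / 99) - 1
print(f"prefer the far-future option whenever r < {r_break_even:.2%}")  # about 4.8% per year
```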
You might be entertained to discover that their top recommendations for problems to tackle at the time of writing were
- Risks from artificial intelligence
- Promoting effective altruism
- Global priorities research
As with many rationalist community projects, though, there is some novel, interesting DIY ethics in there. See also lesswrong etc., and various extended trolley problems.
3 Low variance giving
The famous early recommendations attached a high value to certainty that any given donation would have a measurable effect. GiveWell is the poster child for this idea.
Here’s a question: Does anyone have a better version of this next quote? I want one which looks at the net costs of different modes of distribution instead of lumping everything into “capitalism” vs “other”.
Excerpted from Does marginalism in economics of effective altruism lead to self defeating behaviour?.
The core problem is the bourgeois moral philosophy that the movement rests upon. Effective Altruists abstract from—and thereby exonerate—the social dynamics constitutive of capitalism. […] capital’s commodification of necessities directly undermines the self-sufficiency of entire populations by determining how resources are allocated. […]
In the meantime, capital extracts around $2 trillion annually from “developing countries” through things like illicit financial flows, tax evasion, debt service, and trade policies advantageous to the global capitalist class. […]
These dynamics, which spring from capital’s insistence on the commodification of necessities, are what turn billions of people into drowning strangers and generate a need for ever-multiplying charitable organizations in the first place.
There is some talking past each other there (reducing extractive corruption is indeed something that effective altruists argue is a major concern of EA programs). But a genuine critique remains: marginalist incrementalism can lead to Molochian equilibria. Apparently this is called the institutional critique; I should track down a reference for that.
I am skeptical of incrementalist behaviour in EA myself, in particular the preference for low-risk interventions with unspectacular impact (subsidising mosquito net distribution) over high-risk, structurally revolutionary ones (taking land from the elites and giving it to peasants). There is an implicit risk aversion there, about which I have Opinions that I should muster. tl;dr: risk aversion is baked into this model. Decent decision theory would presumably allow us to favour riskier bets whilst still telling us that some charities are inadmissible by any criterion, i.e. that they are with high probability a waste of money regardless of risk appetite; a toy sketch of that distinction follows. More recent EA thought has addressed this; see below.
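As a minimal sketch of what “inadmissible by any criterion” could mean (the distributions below are invented for illustration, not drawn from any EA source): one way to cash it out is first-order stochastic dominance. If one option is at least as good at every quantile of impact-per-dollar, every expected-utility maximiser with an increasing utility function prefers it, whatever their risk appetite.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical impact-per-dollar distributions, invented for illustration.
safe = rng.normal(loc=1.0, scale=0.1, size=100_000)       # modest, reliable impact
moonshot = (rng.random(100_000) < 0.01) * rng.lognormal(5.0, 1.0, size=100_000)  # usually nothing, occasionally huge
dud = 0.5 * rng.normal(loc=1.0, scale=0.1, size=100_000)  # worse than `safe` at every quantile

def dominates(a, b, n_quantiles=999):
    """True if `a` first-order stochastically dominates `b`:
    a's outcome is at least b's at every probed quantile."""
    qs = np.linspace(0.001, 0.999, n_quantiles)
    return bool(np.all(np.quantile(a, qs) >= np.quantile(b, qs)))

print(dominates(safe, moonshot), dominates(moonshot, safe))  # False, False: ranking depends on risk appetite
print(dominates(safe, dud))                                  # True: the dud loses on any risk appetite
```

Neither the safe bet nor the moonshot dominates the other, so choosing between them genuinely requires a risk attitude; the dud, by contrast, is a waste of money on every criterion.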
Also, I do like the fundamental EA insight that opportunity costs are important in charity; I would like to keep that around.
4 Hits-based giving
Portfolio theory meets effective altruism: take on a high-risk, high-return portfolio of bets. There is a lot to say here, but I am personally more sympathetic to this idea than to the (IMO inherently conservative) notion of trying to achieve low risk in donations at the cost of high return.
Anyway, see Open philanthropy on Hits-based Giving.
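A toy sketch of that intuition, with numbers invented for illustration (they are not Open Philanthropy’s): a portfolio of long shots can beat a portfolio of safe grants in expectation even though most individual long shots return nothing.

```python
import numpy as np

rng = np.random.default_rng(1)
n_grants = 100  # hypothetical portfolio size, one unit of funding per grant

# Safe portfolio: every grant reliably returns about 2x its cost in impact.
safe = rng.normal(2.0, 0.2, size=n_grants)

# Hits-based portfolio: roughly 5% of grants "hit" and return about 100x; the rest return nothing.
hits = (rng.random(n_grants) < 0.05) * rng.normal(100.0, 20.0, size=n_grants)

print("safe portfolio impact:", round(safe.sum()))   # ~200 in expectation, very reliably
print("hits portfolio impact:", round(hits.sum()))   # ~500 in expectation, but lumpy
print("grants that paid off:", int((hits > 0).sum()), "of", n_grants)
```

The expected return favours the hits-based portfolio, but the realised return is dominated by a handful of grants, which is exactly the evaluation problem hits-based giving has to live with.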
5 Tail-risk based giving
TBD. Hits-based giving is about evaluating risky charities by estimated expected portfolio return. I hope to return here and talk about evaluating charities by other criteria, such as tail returns, which I would like to argue is substantively different; a toy illustration follows.
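To gesture at why the difference might be substantive (again a toy of my own construction, not an established EA metric): two interventions can be nearly indistinguishable by expected value while a tail-sensitive criterion, say the mean outcome over the worst 1% of worlds, separates them sharply.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Hypothetical world outcomes: usually 0, with a 0.1% chance of a -1000 catastrophe.
catastrophe = rng.random(n) < 0.001
baseline = np.where(catastrophe, -1000.0, 0.0)

# Intervention A adds +1 welfare in every world; intervention B averts the catastrophe
# when it occurs and does nothing otherwise. Both gain ~1 unit in expectation over baseline.
world_A = baseline + 1.0
world_B = np.where(catastrophe, 0.0, baseline)

def tail_mean(x, q=0.01):
    """Mean of the worst q fraction of outcomes (an expected-shortfall-style criterion)."""
    k = int(len(x) * q)
    return np.sort(x)[:k].mean()

print("expected value:  A =", round(world_A.mean(), 2), " B =", round(world_B.mean(), 2))  # both ~0
print("worst-1% mean:   A =", round(tail_mean(world_A), 2), " B =", round(tail_mean(world_B), 2))
# Expected value can barely tell A and B apart; the tail criterion strongly prefers B.
```

Which criterion is the right one is precisely the question I want to return to; the point here is only that the ranking can flip.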
6 Infinite self-criticism regress
7 (ultra-) Long-termism
From outside the EA “community” it is popular to decry long-termism, to conflate it with the EA movement as a whole, and to present a strawman of long-termism as a kind of Pascal’s Mugging for the future. I generally regard portfolio-theoretic and tail-risk-based giving as a more interesting and more defensible version of long-termism than the usual strawman. TBD
8 Incoming
An interview with someone who left Effective Altruism | mathbabe
Vox has an EA beat: Global poverty, climate change: Future Perfect explores effective ways to fix the world’s biggest problems
The Esoteric Social Movement Behind This Cycle’s Most Expensive House Race
Mushtaq Khan on using institutional economics to predict effective government reforms
Zvi Mowshowitz, When Giving People Money Doesn’t Help. Troubling, needs follow-up.
Have The Effective Altruists And Rationalists Brainwashed Me?
Hanania, Effective Altruism Thinks You’re Hitler
Both approaches – soft pedaling the dismal view EA takes towards the moral and intellectual capabilities of most humans or owning it – come with potential risks and rewards. And I’m not even sure if being aware of this dilemma is a good thing. I suspect Scott made the best possible defence of EA when he pointed out that it saves more humans than preventing 9/11, etc., and the one that is likely to appeal to the most people. But I still think it rubs many the wrong way, and will make others hate the movement even more. […] People can read ACX, Peter Singer, and Derek Parfit by night, and wake up the next morning and go back to being conservatives and liberals.
Émile P Torres, Longtermism poses a real threat to humanity has a utilitarianism-is-weird-plus-looks-culty critique of longtermism:
When I was a longtermist, I didn’t think much about the potential dangers of this ideology. However, the more I studied utopian movements that became violent, the more I was struck by two ingredients at the heart of such movements. The first was – of course – a utopian vision of the future, which believers see as containing infinite, or at least astronomical, amounts of value. The second was a broadly “utilitarian” mode of moral reasoning, which is to say the kind of means-ends reasoning above. The ends can sometimes justify the means, especially when the ends are a magical world full of immortal beings awash in “surpassing bliss and delight”, to quote Bostrom’s 2020 “Letter from Utopia”.
Ari Schulman, Open Wallets, Empty Hearts — The New Atlantis is a critique.
Its criticisms are mostly ones the community has made of itself from the inside, AFAICS, but bundled up into a package.