GiveWell's Charity Recommendations Require Taking a Controversial Stance on Population Ethics

Update 2016-12-14: GiveWell’s 2016 cost-effectiveness analysis has updated the way it handles population ethics. It now explicitly takes the value of saving a 5-year-old’s life as an input and no longer assumes that it’s worth 36 life-years.

Update 2018-08-14: I recently revisited GiveWell’s 2018 cost-effectiveness analysis. Although the analysis spreadsheet no longer enforces the “GiveWell view” described in this essay, most GiveWell employees still implicitly adopt it. As a result, I believe GiveWell is still substantially mis-estimating the cost-effectiveness of the Against Malaria Foundation.

GiveWell claims that the Against Malaria Foundation (AMF) is about 10 times as cost-effective as GiveDirectly. This entails unusual claims about population ethics that I believe many people would reject, and according to other plausible views of population ethics, AMF looks less cost-effective than the other GiveWell top charities.

GiveWell’s Implicit Assumptions

A GiveWell-commissioned report suggests that population will hardly change as a result of AMF saving lives. GiveWell’s cost-effectiveness model for AMF assumes that saving one life creates about 35 quality-adjusted life years (QALYs), and uses this to assign a quantitative value to the benefits of saving a life. But if AMF leaves population roughly unchanged, it adds almost no net QALYs to the world (and if it causes population to decline, it actually removes them); so you can’t justify AMF’s purported cost-effectiveness by saying it creates more happy human life, because it doesn’t.

You could instead justify AMF’s life-saving effects by saying it’s inherently good to save a life, in which case GiveWell’s cost-effectiveness model shouldn’t interpret the value of lives saved in terms of QALYs created or destroyed, and should instead include a term for the inherent value of saving a life.

GiveWell claims that AMF is about 10 times more cost-effective than GiveDirectly, and GiveWell ranks AMF as its top charity partially on this basis (see “Summary of key considerations for top charities” in the linked article). This claim depends on the assumption that saving a life creates 35 QALYs.
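To see how much rides on that assumption, here is a toy calculation. The $3,500 cost-per-life figure is a round hypothetical for illustration, not GiveWell’s actual estimate:

```python
cost_per_life = 3500  # hypothetical round number, not GiveWell's actual figure

def cost_per_qaly(cost_per_life, qalys_per_life):
    """QALY-based cost-effectiveness; undefined if saving a life adds no net QALYs."""
    return cost_per_life / qalys_per_life if qalys_per_life > 0 else float("inf")

# GiveWell's assumption: a saved life adds about 35 QALYs.
givewell_view = cost_per_qaly(cost_per_life, 35)
# If fertility effects leave total population unchanged, the net QALYs added
# are roughly zero, and the QALY framing yields no cost-effectiveness at all:
population_neutral_view = cost_per_qaly(cost_per_life, 0)
```

Under the first assumption the QALY math works out neatly; under the second, the entire case for AMF has to rest on the inherent value of saving a life rather than on QALYs created.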

To justify GiveWell’s cost-effectiveness analysis, you could say that it is good to cause existing people to live longer, but it is not bad to prevent people from existing. (Sean Conley of GiveWell says he and many other GiveWell staffers believe this.)

In particular, you’d have to assume that:

Continue reading

On Priors

Part of a series on quantitative models for cause selection.

Introduction

One major reason that effective altruists disagree about which causes to support is that they have different opinions on how strong an evidence base an intervention should have. Previously, I wrote about how we can build a formal model to calculate expected value estimates for interventions. You start with a prior belief about how effective interventions tend to be, and then adjust your naive cost-effectiveness estimates based on the strength of the evidence behind them. If an intervention has stronger evidence behind it, you can be more confident that it’s better than your prior estimate.

For a model like this to be effective, we need to choose a good prior belief. We start with a prior probability distribution P, where P(x) gives the probability that a randomly chosen intervention1 has utility x (for whatever metric of utility we’re using, e.g. lives saved). To determine the posterior expected value of an intervention, we combine this prior distribution with our evidence about how much good the intervention does.

For this to work, we need to know what the prior distribution looks like. In this essay, I attempt to determine what shape the prior distribution has and then estimate the values of its parameters.
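To preview the sort of model I have in mind: if the prior over an intervention’s utility is lognormal and a cost-effectiveness estimate has lognormal measurement error, the posterior is a precision-weighted average in log space. A minimal sketch, where all parameter values are made-up placeholders rather than the estimates derived in this essay:

```python
import math

def posterior_log_params(prior_mu, prior_sigma, est_log_mean, est_sigma):
    """Conjugate normal update in log space: precision-weighted average."""
    w_prior = 1 / prior_sigma**2
    w_est = 1 / est_sigma**2
    mu = (w_prior * prior_mu + w_est * est_log_mean) / (w_prior + w_est)
    sigma2 = 1 / (w_prior + w_est)
    return mu, sigma2

def posterior_expected_value(prior_mu, prior_sigma, est_log_mean, est_sigma):
    """E[utility] for a lognormal posterior: exp(mu + sigma^2 / 2)."""
    mu, sigma2 = posterior_log_params(prior_mu, prior_sigma, est_log_mean, est_sigma)
    return math.exp(mu + sigma2 / 2)

# Hypothetical numbers: prior median utility 1 (mu=0), fairly broad prior (sigma=1.5).
# A naive 10x estimate backed by weak evidence (sigma=2) shrinks a lot:
weak = posterior_expected_value(0, 1.5, math.log(10), 2.0)
# The same 10x estimate backed by strong evidence (sigma=0.5) shrinks much less:
strong = posterior_expected_value(0, 1.5, math.log(10), 0.5)
```

The point of the essay is to pin down what the prior (here, `prior_mu` and `prior_sigma`) should actually look like, since the posterior is quite sensitive to it.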

Continue reading

How Should a Large Donor Prioritize Cause Areas?

Introduction

The Open Philanthropy Project has made some grants that look substantially less impactful than some of its others, and some people have questioned the choice. I want to discuss some reasons why these sorts of grants might plausibly be a good idea, and why I ultimately disagree.

I believe Open Phil’s grants on criminal justice and land use reform are much less effective in expectation1 than its grants on animal advocacy and global catastrophic risks. This would naively suggest that Open Phil should spend all its resources on these more effective causes, and none on the less effective ones. (Alternatively, if you believe that the grants on US policy do much more good than the grants on global catastrophic risk, then perhaps Open Phil should focus exclusively on the former.) There are some reasons to question this, but I believe that the naive approach is correct in the end.

Why give grants in cause areas that look much less effective than others? Why give grants in lots of cause areas rather than just a few? Let’s look at some possible answers to these questions.

Continue reading

Preventing Human Extinction, Now With Numbers!

Part of a series on quantitative models for cause selection.

Introduction

Last time, I wrote about the most likely far future scenarios and how good they would probably be. But my last post wasn’t precise enough, so I’m updating it to present more quantitative evidence.

Particularly for determining the value of existential risk reduction, we need to approximate the probability of various far future scenarios to estimate how good the far future will be.

I’m going to ignore unknowns here: they obviously exist, but I don’t know what they’ll look like (you know, because they’re unknowns), so I’ll assume they don’t significantly change the outcome in expectation.

Here are the scenarios I listed before and estimates of their likelihood, conditional on non-extinction:

[Flowchart: estimated probabilities of far-future scenarios, conditional on non-extinction; the events are not mutually exclusive. The image is hard to read at this size; see the full-size version linked in the original post.]

My previous post explains how I arrived at these probabilities. I didn’t explicitly state the estimates there, but I laid out most of the reasoning behind the numbers I share here.

Some of the calculations I use make certain controversial assumptions about the moral value of non-human animals or computer simulations. I feel comfortable making these assumptions because I believe they are well-founded. At the same time, I recognize that a lot of people disagree, and if you use your own numbers in these calculations, you might get substantially different results.

Continue reading

Expected Value Estimates You Can (Maybe) Take Literally

Part of a series on quantitative models for cause selection.

Alternate title: Excessive Pessimism About Far Future Causes

In my post on cause selection, I wrote that I was roughly indifferent between $1 to MIRI, $5 to The Humane League (THL), and $10 to AMF. I based my estimate for THL on the evidence and cost-effectiveness estimates for veg ads and leafleting. Our best estimates suggested that these are conservatively 10 times as cost-effective as malaria nets, but the evidence was fairly weak. Based on intuition, I decided to adjust this 10x difference down to 2x, but I didn’t have a strong justification for the choice.

Corporate outreach has a lower burden of proof (the causal chain is much clearer), and estimates suggest that it may be ten times more effective than ACE top charities’ aggregate activities1. So does that mean I should be indifferent between $5 to ACE top charities and $0.50 to corporate campaigns? Or perhaps even less, because the evidence for corporate campaigns is stronger? But I wouldn’t expect this 10x difference to make corporate campaigns look better than AI safety, so I can’t say both that corporate campaigns are ten times better than ACE top charities and that AI safety is only five times better. My previous model, in which I took expected value estimates and adjusted them based on my intuition, was clearly inadequate. How do I resolve this? In general, how can we weigh robust, moderately cost-effective interventions against non-robust but (ostensibly) highly cost-effective ones?

To answer that question, we have to get more abstract.
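As a rough illustration of the kind of quantification I’m after, the sketch below shrinks a naive “10x” cost-effectiveness multiplier toward a prior median of 1x, in proportion to the strength of the evidence. The sigma values are invented for illustration, not fitted estimates:

```python
import math

def shrunk_multiplier(naive_multiplier, evidence_sigma, prior_sigma=1.5):
    """Shrink a naive 'X times as cost-effective' estimate toward the prior
    median of 1x, weighting by evidence strength (smaller sigma = stronger)."""
    w_prior = 1 / prior_sigma**2
    w_evidence = 1 / evidence_sigma**2
    posterior_log = (w_evidence * math.log(naive_multiplier)) / (w_prior + w_evidence)
    return math.exp(posterior_log)

# The same naive 10x estimate under different evidence quality (sigmas made up):
leafleting = shrunk_multiplier(10, evidence_sigma=2.0)   # weak evidence
campaigns = shrunk_multiplier(10, evidence_sigma=0.75)   # clearer causal chain
```

With these made-up numbers, the weak-evidence estimate shrinks from 10x to roughly 2x, much like the intuitive adjustment I made for veg ads, while the stronger-evidence estimate shrinks far less. The open problem is justifying the prior rather than picking its parameters by feel.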

Continue reading

How Valuable Are GiveWell Research Analysts?

Update 2016-05-18: I no longer entirely agree with this post. In particular, I believe GiveWell employees are more replaceable than this post suggests. I may write about my updated beliefs in the future.

Edited 2016-03-11 because I’ve adjusted my estimate of the value of global poverty charities downward, which makes working at GiveWell look worse.

Edited 2016-03-11 to add a new section.

Edited 2016-02-16 to update the model based on feedback I’ve received. Temporal replaceability doesn’t apply so I was underestimating the value of research analysts.

Summary: The value of working as a research analyst1 at GiveWell is determined by:

  • Temporal replaceability of employees
  • How good you are relative to the counterfactual employee
  • How much good GiveWell money moved does relative to where you could donate earnings
    • A lot if you care most about global poverty, not as much if you care about other cause areas
  • How directly more employees translate into better recommendations and more money moved
    • This relationship looks strong for Open Phil and weak for GiveWell Classic

If you believe GiveWell top charities are the best place to donate, working at GiveWell is probably a really strong career option; if you believe other charities are substantially better (as I do) and you have good earning potential, earning to give is probably better.
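To make the structure of this comparison concrete, here is a toy model with entirely made-up numbers; they are illustrative placeholders, not estimates from this post:

```python
def direct_work_value(money_moved, value_per_dollar_moved, replaceability_discount):
    """Annual value of a research analyst, in units of dollars to your best charity.
    money_moved: extra dollars per year you cause GiveWell to move (hypothetical).
    value_per_dollar_moved: your valuation of GiveWell-moved dollars relative
        to dollars at your favorite charity.
    replaceability_discount: fraction of your impact the counterfactual hire captures."""
    return money_moved * value_per_dollar_moved * (1 - replaceability_discount)

def earning_to_give_value(earnings, donation_rate, value_per_dollar_donated):
    """Annual value of earning to give, in the same units."""
    return earnings * donation_rate * value_per_dollar_donated

# Made-up numbers for illustration only:
etg = earning_to_give_value(200_000, 0.5, 1.0)            # donate to your best charity
analyst_if_gw_best = direct_work_value(500_000, 1.0, 0.5)  # GiveWell charities are your best
analyst_otherwise = direct_work_value(500_000, 0.1, 0.5)   # you value other charities 10x more
```

With these placeholder inputs, direct work dominates if you think GiveWell top charities are the best donation target, and earning to give dominates if you value other charities ten times more, which is the qualitative conclusion above.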

Continue reading

Are GiveWell Top Charities Too Speculative?

The common claim: Unlike more speculative interventions, GiveWell top charities have really strong evidence that they do good.

The problem: Thanks to flow-through effects, GiveWell top charities could be much better than they look or they could be actively harmful, and we have no idea how big their actual impact is or if it’s even net positive.

Continue reading

More on REG's Room for More Funding

I have received some interest from a few people in donating to REG, and the main concern I’ve heard has been about whether REG could effectively use additional funding. I spent some more time learning about this. My broad conclusion is roughly the same as I wrote previously: REG can probably make good use of an additional $100,000 or so, and perhaps more but with less confidence.

Poker Market Saturation

Tobias from REG claims that about 70% of high-earning poker players have heard of REG, although many of those have had only limited engagement. He claims that they have had the most success convincing players to join through personal contact, and REG has not had contact with many of the players who have heard of it. This gives some reason to be optimistic that REG can expand substantially among high-earning poker players, although I would not be surprised if it started hitting rapidly diminishing returns once it grows to about 2x its current size.

To date, REG has not spent much effort on marketing to non-high-earning poker players. This field is much larger, but targeting lower-earning players should be less efficient because each individual player donates less money. To get a better sense of how important this is, I would have to know what the income distribution looks like for poker players, and getting this information is nontrivial.

REG would like to hire a new marketing person with experience in the poker world, who would probably be considerably better at marketing than any current REG employee. For this reason, additional funds to REG may actually be more effective than past funds, although this is difficult to predict in advance.

Continue reading

Excessive Optimism About Far Future Causes

In my recent post on cause selection, I constructed a model where I broke down by category all the charities REG has raised money for and gave each category a weight based on how much good I thought it did. I put a weight of 1 on my favorite object-level charity (MIRI) and gave other categories weights proportional to that. I put GiveWell-recommended charities at a weight of 0.1, which means I’m about indifferent between a donation of $1 to MIRI and $10 to the Against Malaria Foundation (AMF).
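In other words, the weights translate directly into indifference amounts. A minimal sketch of that bookkeeping:

```python
# Category weights from my cause-selection model (0.1 for GiveWell top charities).
weights = {"MIRI": 1.0, "AMF": 0.1}

def indifference_amount(dollars, from_charity, to_charity):
    """Donation to to_charity that I value equally to `dollars` to from_charity."""
    return dollars * weights[from_charity] / weights[to_charity]

amf_equivalent = indifference_amount(1, "MIRI", "AMF")  # $1 to MIRI ~ $10 to AMF
```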

Buck criticized my model, claiming that my top charity, MIRI, is more than ten times better than AMF and I’m being too conservative. But I believe that this degree of conservatism is appropriate, and a substantially larger ratio would be epistemically immodest.

Continue reading

A Consciousness Decider Must Itself Be Conscious

Content note: Proofs involving computation and Turing machines. Whether you understand the halting problem is probably a good predictor of whether this post will make sense to you.

I use the terms “program” and “Turing machine” interchangeably.

Continue reading
