Giving to the Centre for Effective Altruism


Postby Pat on 2012-05-26T01:00:00

The Centre for Effective Altruism, the parent organization of 80,000 Hours and Giving What We Can, is accepting donations:
CEA is the charity that funds Giving What We Can and 80,000 Hours. Both of these organisations have carried out cost-effectiveness research which suggests that funding their campaigning activities produces significantly more money for Giving What We Can’s top recommended charities (or other highly effective charities) than giving to these charities directly. The management of CEA cares deeply about cost-effectiveness, and would not accept funding if they believe it could be spent significantly better elsewhere.

I have a couple of reservations about giving to the CEA even if it can leverage your money to produce large amounts in donations. The GWWC pledge requires that the donations benefit humans in developing countries. It's likely that most of the money donated by 80k members would go to developing-world charities as well. If you think that such donations aren't terribly cost-effective, it may be better to give to your favorite animal-welfare/x-risk/transhumanist charity.

On the other hand, I've been surprised at the extent to which 80k has been open to "out-there" charities. I'd have expected them to promote a more moderate image, at least at first. Maybe the CEA could steer donors toward other charities over the long-term.

The CEA doesn't have much of a track record, so it might be better to wait for a year or so to see how they handle the transition from volunteer to paid labor. On the other hand, you wouldn't want the CEA to founder for lack of funding, and because it's a small organization, one donor could make a significant difference.

The people behind GWWC and 80k are smart and want to make a big difference. That's true of a lot of organizations, I guess.

Here's what GiveWell says are the most important factors in evaluating a charity:
- A “bird's-eye view” of a charity's activities: all of its programs and locations, along with how much funding is going to each.
- Meaningful and systematic (not anecdotal) evidence of impact.
- Information about the likely impact of additional donations, including both plans for expansion and “funding gap” analysis (projections of how much a charity could productively spend, and how much current expected revenue falls short of this figure).

What are your considerations for or against donating to the CEA?

Pat
 
Posts: 111
Joined: Sun Jan 16, 2011 10:12 pm
Location: Bethel, Alaska

Re: Giving to the Centre for Effective Altruism

Postby Daniel Dorado on 2012-05-26T09:53:00

Pat wrote: The GWWC pledge requires that the donations benefit humans in developing countries. It's likely that most of the money donated by 80k members would go to developing-world charities as well. If you think that such donations aren't terribly cost-effective, it may be better to give to your favorite animal-welfare/x-risk/transhumanist charity.


I agree. I think it's good if some of the money goes to Vegan Outreach or The Humane League, but I would prefer to donate to those directly rather than to CEA.
Daniel Dorado
 
Posts: 107
Joined: Fri Dec 25, 2009 8:35 pm
Location: Madrid (Spain)

Re: Giving to the Centre for Effective Altruism

Postby Arepo on 2012-05-26T11:32:00

I'm not sure how unbiased I can be here - full disclosure: I'm applying for a job at CEA, have done a fair bit of voluntary work for them, and will continue to do so whether or not I get the job.

All that said, the reason I'm doing those things is that I rate them as the best cause in the world at the moment. Some things they have going for them:
  • They're putting into practice ideas that to my knowledge have never really been tried (such as professional philanthropy, which wasn't really possible until the 20th century).
  • A more quantitative approach to evaluation than GiveWell - as much as I like the latter, their comments about 'not taking expected value estimates too seriously' seem to miss the point.
  • An explicit philosophy (at least in theory, ignoring the possibility of biases) that if they evaluate themselves and find they're counterproductive, they'll disband (or at least stop accepting funding).
  • A healthy distance from the Less Wrong/singularitarian community, about whose underlying philosophy (rejection of hedonistic util) and underlying selection effects I have serious reservations.
  • They ran a cost-benefit analysis of their impact before accepting funding, having initially been reluctant to do so, and conservatively it suggested they might
  • They have a very open-minded (but critical and more-or-less utilitarian) view of what constitutes 'doing the most good'. Ie they're not single-issue, nor committed to any one cause (they take the poor meat eater view seriously, for eg).
  • I've met most of the core members, and they seem a) competent, b) committed to becoming more competent, and c) to have a high proportion of utilitarians.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Giving to the Centre for Effective Altruism

Postby Daniel Dorado on 2012-05-26T13:33:00

Arepo wrote:
  • They're putting into practice ideas that to my knowledge have never really been tried (such as professional philanthropy, which wasn't really possible until the 20th century).
  • A more quantitative approach to evaluation than GiveWell - as much as I like the latter, their comments about 'not taking expected value estimates too seriously' seem to miss the point.
  • An explicit philosophy (at least in theory, ignoring the possibility of biases) that if they evaluate themselves and find they're counterproductive, they'll disband (or at least stop accepting funding).
  • A healthy distance from the Less Wrong/singularitarian community, about whose underlying philosophy (rejection of hedonistic util) and underlying selection effects I have serious reservations.
  • They ran a cost-benefit analysis of their impact before accepting funding, having initially been reluctant to do so, and conservatively it suggested they might
  • They have a very open-minded (but critical and more-or-less utilitarian) view of what constitutes 'doing the most good'. Ie they're not single-issue, nor committed to any one cause (they take the poor meat eater view seriously, for eg).
  • I've met most of the core members, and they seem a) competent, b) committed to becoming more competent, and c) to have a high proportion of utilitarians.
All this is great.

I'm very interested in their view of the poor meat eater problem. What do they think about it? How do they think donations should be affected by that problem?

Re: Giving to the Centre for Effective Altruism

Postby Arepo on 2012-05-28T10:33:00

As far as I know, they haven't put much research into it yet (much of the group is essentially on hold at the moment, since many of the core members are in their final year of university, in the middle of their final exams), but I know many of them think it's a big issue. In a recent seminar aimed at new members, they raised it as the most salient example of a problem that might show they'd been actively harmful.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Giving to the Centre for Effective Altruism

Postby Brian Tomasik on 2012-05-28T12:21:00

Daniel Dorado wrote:
Pat wrote: The GWWC pledge requires that the donations benefit humans in developing countries. It's likely that most of the money donated by 80k members would go to developing-world charities as well. If you think that such donations aren't terribly cost-effective, it may be better to give to your favorite animal-welfare/x-risk/transhumanist charity.

I agree. I think it's good if some of the money goes to Vegan Outreach or The Humane League, but I would prefer to donate to these than to CEA.

I agree with both of you guys. This is partly because I know CEA will have some funding even without our donations. It's also partly because CEA will likely lead to increased funding for reducing x risk, about which I have reservations. That said, I love participating in the organization and am friends with many of the members, so of course I'm going to support them in spirit. ;)
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: Giving to the Centre for Effective Altruism

Postby Pat on 2012-05-29T22:41:00

The CEA seems to be a better buy (perhaps many times better) than the Against Malaria Foundation, especially for those who believe that global poverty is the most cost-effective cause. It may even generate more money for animal-welfare or existential-risk–reduction charities than giving directly to those charities would. Presumably, though, there's a limit on how much money the CEA can effectively absorb in the short term. Beyond that point, it would be better to donate directly to the charities you want money to go to.

What's interesting is that this point differs for people committed to developing-world health and for those committed to other causes, because more of the donations generated by the CEA will go to developing-world health than to other causes. So developing-world–health people should be willing to fund the CEA to a higher level than others would.

The CEA-funding issue is sort of a game-theoretic situation, because people who believe in different causes will want the CEA's funding to come out of other causes' budgets, not their own. So even if you want the CEA to be funded, it could be a game of chicken to see who ends up doing it. People who believe in more-marginal causes have the advantage, because CEA donations are most cost-effective for those giving to developing-world health.

I guess the question then is, why aren't the developing-world healthers donating more to the CEA? I imagine that part of the problem is, ironically, due to one of the biases that the CEA is trying to expunge: that it's better to do good directly than indirectly.


Re: Giving to the Centre for Effective Altruism

Postby Arepo on 2012-05-30T14:12:00

Pat wrote:Presumably, though, there's a limit on how much money the CEA can effectively absorb in the short term. Beyond that point, it would be better to donate directly to the charities you want money to go to.


Aye. I think the limit is relatively high, though (in the sense that it's unlikely to matter to the median marginal funder). From a casual discussion with Will Crouch, he seemed to think they could easily get through £1 million, though I don't know how much experience they'd have in dealing with that large a lump sum in practice. That said, I've just written a long email arguing that the salaries they're offering are counterproductively low (I might post it as a blog post/series of blog posts, though, in which case I'll try to remember to link it from here - or I can forward it as-is to anyone who's interested), so I suspect they could do with even more than they think they can.

If my argument comes too late/doesn't persuade the relevant people, then we might end up in a position where those who think my argument is sound should specifically fund CEA employees rather than the org itself, which would be a bit weird...
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Giving to the Centre for Effective Altruism

Postby Arepo on 2012-05-30T14:18:00

Re the game theory stuff, I don't think it's an issue. A large part of CEA's goals is identifying the best causes in the world and moving/generating resources for them, so if you have a vested interest in some specific issue it would be irrational to donate to them anyway. If you support their goal (and think they offer the best way of reaching it), then you should be willing to donate regardless of your disposition toward existing causes, since if your disposition is misguided you'll want it to be corrected.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Giving to the Centre for Effective Altruism

Postby Arepo on 2012-05-31T14:11:00

Arepo wrote:That said, I've just written a long email arguing that the salaries they're offering are counterproductively low (I might post it as a blog post/series of blog posts, though, in which case I'll try to remember to link it from here - or I can forward it as-is to anyone who's interested), so I suspect they could do with even more than they think they can.


Update: they've asked me not to put the argument on the blog.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Giving to the Centre for Effective Altruism

Postby Pat on 2012-05-31T17:00:00

I agree that the game-theory stuff probably isn't a good model of how people will actually behave. It made more sense when I was writing it. If GWWC and 80k members were rational and fully informed, what I said earlier might apply. But it could be the case that only a minority of members would consider donating to the CEA.

Arepo wrote:A large part of CEA's goals is identifying the best causes in the world and moving/generating resources for them, so if you have a vested interest in some specific issue it would be irrational to donate to them anyway.

Hmm, I was assuming that donating a dollar to the CEA could be expected to generate more than a dollar for each of a variety of causes. Even if the CEA decreed that a certain cause is the best one, I doubt whether most GWWC and 80k members would give exclusively to it. Members may agree that they want to do good, but they have differing views about what that means. As long as getting people to give more remains part of the CEA's mission, multiple causes will benefit.

Arepo wrote:If you support their goal (and think they offer the best way of reaching it), then you should be willing to donate regardless of your disposition toward existing causes, since if your disposition is misguided you'll want it to be corrected.

That's an appealing argument. There's disagreement about what charities or causes to support, so we should instead pay for people to figure it out.

There are probably some constraints on the CEA's message. I doubt whether the CEA would promote the idea that promoting developing-world health is harmful or that existential risk is good. I don't know whether those claims are true; I'm just saying. The charity-effectiveness research that GiveWell has done was a lot of work. If the scope of the research extends to any cause, the difficulties are compounded. It's relatively easy to judge the effectiveness of health interventions. But what about activities whose value lies chiefly in changing the likelihood of low-probability events many decades or centuries in the future? In addition, philosophical disagreements start to matter a lot.

A million dollars seems like a lot of money. If it is true that the CEA could use that much money effectively, the threshold beyond which it would be more effective to donate directly to the most effective causes would be unlikely to be surpassed. And if that's the case, it would make sense for people who favor causes other than developing-world health to donate money to the CEA, as long as they don't think its activities could have negative effects (e.g., increasing meat consumption among the poor or decreasing existential risk).

But a million dollars (a year?) seems like an awful lot of money for an organization that's so far gotten by on a few thousand.


Re: Giving to the Centre for Effective Altruism

Postby Brian Tomasik on 2012-06-02T03:19:00

Pat wrote:
Arepo wrote:If you support their goal (and think they offer the best way of reaching it), then you should be willing to donate regardless of your disposition toward existing causes, since if your disposition is misguided you'll want it to be corrected.

That's an appealing argument. [...]

There are probably some constraints on the CEA's message. I doubt whether the CEA would promote the idea that promoting developing-world health is harmful or that existential risk is good. I don't know if those are true, I'm just saying.

Agreed. The problem for people like me who have more marginal values is that I can't trust others to get the right answer just based on more research. In fact, I agree with most of my friends on most factual issues. Where I differ is on the tradeoff between suffering and happiness. It's good that I don't disagree much on facts, because if I did, my friends and I should update our beliefs based on each other's views. But this does not happen for questions of value, where there are no right answers.

Pat wrote:A million dollars seems like a lot of money. If it is true that the CEA could use that much money effectively, the threshold beyond which it would be more effective to donate directly to the most effective causes would be unlikely to be surpassed. And if that's the case, it would make sense for people who favor causes other than developing-world health to donate money to the CEA, as long as they don't think its activities could have negative effects (e.g., increasing meat consumption among the poor or decreasing existential risk).

I'm sorry, I didn't understand any of that. :) Why does a higher budget for CEA make its marginal donations more effective? And why should people who favor obscure causes want to fund CEA more? Each CEA dollar gets turned into way more health dollars than animal dollars. And in that last sentence I think you meant "increasing existential risk," although I wouldn't necessarily object to saying "decreasing." ;)

Re: Giving to the Centre for Effective Altruism

Postby Pat on 2012-06-02T20:30:00

Sorry about that. It made sense to me… :)
Alan Dawrst wrote:Why does a higher budget for CEA make its marginal donations more effective?

I didn't mean a higher budget per se, but a higher limit to the amount of money that the CEA could use effectively. By "effectively," I mean that giving to the CEA does more good than giving to any other charity.

The CEA will get a certain amount of money from people who would otherwise have donated to developing-world health, and it may receive money from foundations as well. If the CEA could spend only a small amount of money effectively, these sources could fund the CEA and there would be no need for me to contribute to it. I could instead donate to whatever cause I think is most effective.

If the CEA could spend a million dollars a year effectively, it is unlikely to obtain that level of funding. I guess the basic point I was trying to make was that it makes sense to give to the CEA only if it has room for more funding.

Of course, a charity's assertion that it can use a certain amount of money effectively might be interpreted as only weak evidence that it can actually do so.
Alan Dawrst wrote:And why should people who favor obscure causes want to fund CEA more? Each CEA dollar gets turned into way more health dollars than animal dollars.

I meant that people who favor obscure causes should be more willing to give to the CEA if its funding needs (from the perspective of those donors) are unlikely to be met by people who currently give to conventional causes.

Why would the conventional-cause people not donate to the CEA, even if doing so would increase the amount of money going to conventional causes? Maybe it's because they don't realize that the CEA is accepting donations, or because the CEA hasn't been vetted by an outside group, or because it doesn't have a track record of dealing with a substantial budget, or because donating to the CEA is a step further removed from saving lives, or because of habit.
Alan Dawrst wrote:And in that last sentence I think you meant "increasing existential risk," although I wouldn't necessarily object to saying "decreasing."

I actually meant "decreasing." :) I meant that what I said earlier applies only if you believe that donations to causes other than the ones you're interested in have neutral value. If you instead believe that certain causes that the CEA will aid have negative value (as you do), then my analysis is moot. The increased donations to animal causes could be offset by increased donations to existential-risk reduction, as you mentioned earlier.


Re: Giving to the Centre for Effective Altruism

Postby Brian Tomasik on 2012-06-03T18:29:00

Pat wrote:I didn't mean a higher budget per se, but a higher limit to the amount of money that the CEA could use effectively.

Aha, got it. You were talking about how high the ceiling is, rather than how tall the person is who's trying to stand up in the room.

Pat wrote:I meant that people who favor obscure causes should be more willing to give to the CEA if its funding needs (from the perspective of those donors) are unlikely to be met by people who currently give to conventional causes.

Makes sense now.

Pat wrote:The increased donations to animal causes could be offset by increased donations to existential-risk reduction, as you mentioned earlier.

Cool.

Re: Giving to the Centre for Effective Altruism

Postby wdcrouch on 2012-06-04T19:22:00

Hey,

Disclaimer: I'm Managing Director and co-founder of GWWC, and President and founder of 80k. I'll try to write my personal view of the issue - inevitably though I'd expect to be ineliminably biased towards the organisations I've created. So do bear that in mind.

1. I think that the arguments for donating on the margin to meta-charity in general are extremely strong. The reason simply being that the rate of return one can get on charitable fundraising far outstrips that of private investment; and that effective altruism and optimal philanthropy seem to have particularly good rates of return (it's an idea whose time has come, but is not yet anywhere near saturation point).

GiveWell would be one such giving opportunity. But they have no room for more funding.

This might also happen for GWWC or 80k. Which is why, at the moment, we're primarily trying to find out how much people would be willing to donate, *conditional on us being able to spend the money well*. If we hit our room for funding, we'd tell our donors. But we're not yet confident even that we will be able to take on the people who we are desperate to hire.

I'll also clarify a couple of things:
CEA is the charity that funds Giving What We Can and 80,000 Hours.


I didn't actually know it was written like that on the website. It's slightly misleading (and I have asked for it to be changed). CEA is the legally recognised charitable company that comprises GWWC and 80,000 Hours. But donations can go directly to GWWC or to 80k.

Arepo wrote:From a casual discussion with Will Crouch, he seemed to think they could easily get through £1 million

I don't remember saying that. I think that over the next year we could spend $190,000 very well (that's a safe estimate of 5 full-time staff, plus overheads (and perhaps some secretarial assistance)). I wouldn't want to take on more than $250,000, as diminishing returns would really start to kick in quickly; unless we thought of a good way of spending money that wasn't staff.

Cause-dependence:

If global poverty alleviation, then GWWC. My inside-view opinion is that 80k will considerably outstrip GWWC in terms of cost-effectiveness. But once I take an outside view, and take overconfidence into account by considering that GWWC has a larger body of evidence behind it than 80k, and has had truly remarkable success so far, I reckon I can't stand by that view right now. So I'd say that the dollar-in / expected-dollar-to-global-poverty-out ratio is highest for GWWC.

If existential risk reduction, then 80k. Of a sample of 27 members, 34% of pledged donations went to existential risk mitigation. Insofar as new effective altruists tend to start off concerned about global poverty or animal suffering and then hear, take seriously, and often are convinced by the arguments for existential risk mitigation, I'd expect the proportion to remain significant (20% or more).

If improving animal welfare is your thing (and only your thing), I really don't think I can recommend either. Only a small proportion of 80k donations are pledged towards animal charities. And there are poor meat eater worries related to donations to the developing world. Though there might still be arguments in favour of 80k, I wouldn't think it a safe bet.

If you are highly uncertain, then 80k. This is my personal position (and I've just written a paper (http://oxford.academia.edu/WilliamCrouc ... ion_Draft_) on why you should be in this position too, and what to do if you are)! [Note for Alan, who won't buy the argument: you should be meta-ethically uncertain too!] The reason being that I would expect the average cost-effectiveness of donations from the EA community as a whole (a bunch of very smart, well-motivated and well-informed people) to be at least as good as my personal best guess, and if I can get a huge return on my investment while more evidence comes in, then so much the better!




- Track record.

If we were going to spend the money on something other than staff, I'd think that would be a major worry. But the large majority of our expenditure will be on staff. And, though we don't have a record of taking paid donations or hiring paid staff, we do have a record of having dedicated volunteers working 20+ hours a week on the projects. Full-time staff can only increase the efficiency of those hours, as there are quite a lot of inefficiencies in having to rely solely on volunteers: i) total person-hours are spread among a large number of people, so communication is slower and miscommunication easier; ii) there is a greater management overhead; iii) there aren't the same incentive systems as with paid workers, so tasks often get done more slowly and there is less guarantee of quality; iv) there's high turnover as student volunteers move on to other things; and v) there are sustainability worries as core volunteers (the directors and managers) find their time taken up with other things. Taking on paid staff also means that we can have more volunteers (as we've got more people to recruit and manage them), so you're not just buying 40+ hours a week; you're also buying substantial volunteer time on top of that.


Pat wrote:Hmm, I was assuming that donating a dollar to the CEA could be expected to generate more than a dollar for each of a variety of causes. Even if the CEA decreed that a certain cause is the best one, I doubt whether most GWWC and 80k members would give exclusively to it. Members may agree that they want to do good, but they have differing views about what that means. As long as getting people to give more remains part of the CEA's mission, multiple causes will benefit.

That's right.

wdcrouch
 
Posts: 2
Joined: Sun Feb 26, 2012 3:31 pm

Re: Giving to the Centre for Effective Altruism

Postby Brian Tomasik on 2012-06-05T08:50:00

Thanks for the info, Will!
wdcrouch wrote: (and I've just written a paper (http://oxford.academia.edu/WilliamCrouc ... ion_Draft_) on why you should be in this position too, and what to do if you are)! [Note for Alan, who won't buy the argument: you should be meta-ethically uncertain too!]
Looks like an interesting paper (as one would expect given the author). Yes, even emotivists care some about metaethical uncertainty, although not in the same way or to the same degree as moral realists.

Re: Giving to the Centre for Effective Altruism

Postby Pat on 2012-06-06T00:53:00

Thanks for the paper, Will! I'm not especially confident in my current beliefs about ethics. With a few caveats, taking into account what informed people think is probably a good idea.

Here's an argument from the paper that we should work to reduce existential risk even if we are pretty sure extinction is a good thing:
Broome argues that, because of the good arguments in favour of there only being one neutral level – that is, one level of wellbeing at which it is neither good nor bad for there to exist an additional person - human extinction would be either extremely bad (because the value of additional humans in the future would be positive, and because there would have been very many of them), or extremely good (because the value of additional humans in the future would be negative, and because there would have been very many of them). Let’s grant Broome those claims. Broome thinks that what follows from this is that it is unclear what we ought to do with respect to the risk of human extinction.

In expected choice-worthiness terms, however, this isn’t correct. Provided that the decision-maker has choice over the matter at a later date, current human extinction would be extremely bad, in terms of expected choice-worthiness, even if the high neutral level view were correct, such that the extinction of the human race would actually be a very good thing, and even if our current moral evidence points in that direction.

The reasons for this are first because letting the human race go extinct is by its nature an irreversible decision, whereas preserving the human race is not, and, second, because if we wait a few centuries before making such a decision, we will have better moral evidence at the time of the decision. This gives us an extremely strong reason in favour of delaying such a decision.

It seems extremely unlikely to me that humanity will consciously decide to go extinct. If all the world's philosophers convene and put together a consensus statement that the continued existence of humanity is bad, the masses will have a good chuckle and then go back to watching reality TV. And if people did what philosophers told them to do, the world would be in less of a mess, and the survival of humanity would (it seems to me) offer substantially more upside than downside.

Reducing the risk of extinction beyond a certain point (e.g., by colonizing space) will ensure humanity's survival nearly as much as extinction would ensure its demise. So I'm not sure that giving to 80k is the obvious path if you're uncertain, since I don't see donating to 80k as a way of buying more time for us to figure out whether extinction is good. Instead, donating would make sense only if you thought that reducing existential risk is a good thing. I'm not sure whether it is, but I haven't researched the matter much. There are both utopian and dystopian scenarios, and it's not clear to me which are more likely.


Re: Giving to the Centre for Effective Altruism

Postby Brian Tomasik on 2012-06-06T04:08:00

Pat wrote:
In expected choice-worthiness terms, however, this isn’t correct. Provided that the decision-maker has choice over the matter at a later date, current human extinction would be extremely bad


As Pat suggests, the proviso ("provided that the decision-maker has choice over the matter at a later date") is the rub. :)

LadyMorgana and I had a discussion about this last fall. Here's from the original thread:
Alan Dawrst wrote:
LadyMorgana wrote:But then where do you stop? Even if you're 99% sure that the world has 1 000 000 x more suffering than happiness, the argument still stands that you should wait to get more knowledge about the issue because the potential losses are so great if you're wrong and do blow up the world (an untold number of future positive lives lost).

As we've discussed, my concern with the knowledge argument is about whose knowledge it is. If we're talking about, say, my extrapolated self, then I think this would be right (up to a point). But
  1. most humans disagree with me about values, some of them very drastically (like Christian / Islamic fundamentalists), and
  2. it seems to me unlikely that human values will even control the future of intelligence on earth. In other words, reducing existential risk has the primary effect of increasing the chance of a non-human-like organism taking over our galaxy.
Either way, we're left unable to act upon our knowledge, even if we have time to acquire it.

To put it another way, reducing existential risk has this calculation:
(A) (99%) * (some massive outcome we can't control) + (1%) * (we can act upon the knowledge we gain from waiting).

Since we have greater influence over things now, the calculation might be more like
(B) (90%) * (some massive outcome we can't control) + (10%) * (we can act upon the knowledge we have now).

In order for (A) to be bigger than (B), it would (roughly) have to be the case that the action we can control in (A) is 10 times better than the one we would take in (B). [Note: I haven't thought about this for more than 5 minutes, so I'm not sure my equations make total sense.]
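The comparison above can be sketched numerically. This is my own illustrative restatement, not anything from the original posts: it assumes the "massive outcome we can't control" has the same value in both branches (so it drops out of the comparison), and all the numbers are made up.

```python
# Hedged sketch of the (A)-vs-(B) comparison from the quoted post. The
# assumption that the uncontrollable outcome is identical in both
# branches, and all numeric values, are illustrative.

def expected_value(p_uncontrolled, uncontrolled_value, controlled_value):
    """Expected value when the outcome escapes our control with
    probability p_uncontrolled and we act with probability 1 - p."""
    return (p_uncontrolled * uncontrolled_value
            + (1 - p_uncontrolled) * controlled_value)

X = 0.0  # normalize the uncontrollable outcome to zero in both branches

# (A): wait for more knowledge; only a 1% chance of acting on it.
# (B): act now; a 10% chance our present action matters.
value_of_informed_action = 10.0  # hypothetical
value_of_acting_now = 1.0        # hypothetical

A = expected_value(0.99, X, value_of_informed_action)
B = expected_value(0.90, X, value_of_acting_now)

# With X identical in both branches, (A) beats (B) exactly when the
# informed action is more than 10x as good: 0.01 * v_A > 0.10 * v_B.
print(A, B)
```

At these illustrative numbers the two options tie (up to floating-point error), matching the rough "10 times better" threshold in the quoted post.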


Pat wrote:Reducing the risk of extinction beyond a certain point (e.g., by colonizing space) will ensure humanity's survival nearly as much as extinction would ensure its demise.

Yes. It's like blowing the seeds off of a dandelion on a windy day and then hoping you can go pick them all up without losing any.

Pat wrote:There are both utopian and dystopian scenarios, and it's not clear to me which are more likely.

Me too.
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: Giving to the Centre for Effective Altruism

Postby Brian Tomasik on 2012-06-06T04:21:00

I had a few questions on Will's paper. I've only skimmed ~1/3 of it at various parts, so I apologize if these have been answered already. :)

Could it actually be wrong to be beneficent to drowning children if we took Ayn Rand seriously? (Note: I do not. :)) What probability should we assign to Objectivism? How likely is it that Rev. John Furniss was correct that child sinners should be tortured for eternity?
You are going to see again the child about which you read in the Terrible Judgement, that it was condemned to hell. See! It is a pitiful sight. The little child is in this red hot oven. Hear how it screams to come out. See how it turns and twists itself about in the fire. It beats its head against the roof of the oven. It stamps its little feet on the floor of the oven. You can see on the face of this little child what you see on the faces of all in hell - despair, desperate and horrible! (source)
More generally, how do we go about deciding these probabilities? It can't be based just on our intuitions, because the paper argues that we're biased to be overconfident in our own probability estimates. Rather, we need to use a modesty argument. But how do we know how much modesty to use and with whom? If we counted all humans alive today equally, then Catholics would get unduly high weight because they happen to use less birth control. And it seems that recently deceased people should count too. But how far back do we go? Back to the Pleistocene?

What about likely future people? What about animals with rudimentary moral views, or even just implicit moral views based on their desire not to suffer? What about alien civilizations? Paperclippers? Pebble sorters? Suffering-maximizing minds? What probability should we assign that sadists are a small group of truly enlightened thinkers who see past the stupidity of altruism?

These aren't necessarily refutations of the expected choice-worthiness framework. I'm just genuinely curious how these issues would be resolved. Several of these questions also arise in the epistemic modesty argument, although in that case, the framework seems to me more clear: Other people's beliefs are just evidence, no different from the result of a blood test. You adjust your hypotheses based on which ones would make it more likely that you see other people believing what they do. What's the corresponding overarching framework for adjudicating moral probabilities? Is it the same idea?

----

Further comments from a Facebook discussion, 31 May 2013.

As far as the point about moral uncertainty, the most accurate way to explain my brain's reaction is the following (similar to what I said above).

Yes, it might be that I'm mistaken about introspection and what I would come to believe upon thinking and learning more.

There are many cases where I am genuinely curious to explore other ideas and give them some weight. For example:
* What types of computations are conscious?
* Should the badness of suffering depend on brain size?
* Can any amount of happiness outweigh a day in hell?

However, there are many other cases where I'm not interested in giving weight to other views, and in fact, if my future self changed his mind on these matters, I would regard that as a failure of goal preservation rather than a triumph of enlightenment. For example:
* Safe homosexuality, masturbation, and incest are wrong.
* Organisms now matter more than organisms later.
* It's good to torture kids for eternity when they don't obey religious rules. (viewtopic.php?t=614#p5516)
Each of these beliefs is held by huge numbers (billions) of people worldwide. Maybe support for the last one is only in the tens/hundreds of millions if you go by people's actual feelings rather than stated dogma.

My main reason for rejecting these seemingly absurd beliefs is overconfidence and the feeling that "I just don't care about being uncertain on these things." That said, it could also be rational in some sense to ignore these possibilities. Entertaining alternate viewpoints carries some risk of adopting them contrary to one's present wishes, because the mind is leaky and hard to control. When you're very sure you don't want to change your mind on something, it makes sense not to change your mind on it. :) It's really that simple. If the feeling that you don't want to revise your opinion is stronger than your feeling that you should listen to what a changed version of you would feel, then you don't have to revise your opinion.

The above argument applies for non-realist "failure of introspection" arguments. For the realism argument, the claim is that ignoring these possibilities is making an actual epistemic error rather than just picking how much you want to care about something. I guess my present stance is (a) mostly to say I still don't care enough and ignore it even if it's epistemically irrational but also (b) give a tiny sliver of credence that I'm wrong about the logic of realism and what it implies, but this has little practical impact on my conclusions. If I get pinned into a situation where it seems like I need to revise my views given the tiny probability that realism is true combined with a small update that it would require me to make, I may either make that update or else say I (irrationally) don't care enough.

William: Do the philosophers of moral realism you cited claim (a) only that moral truths exist or also (b) that it's somehow _factually_ incorrect not to care about these moral truths? If it's just (a), then at least I can understand the claim, and I would simply choose not to care about moral truths. If it's (b), I can't understand what this even means, but because I do care a little bit about not being factually incorrect, I would care a little bit about the implications of realism, unless I chose to be irrational by rejecting those implications.
Brian Tomasik

Re: Giving to the Centre for Effective Altruism

Postby Arepo on 2012-06-06T09:11:00

Alan Dawrst wrote: But how do we know how much modesty to use and with whom? If we counted all humans alive today equally, then Catholics would get unduly high weight because they happen to use less birth control. And it seems that recently deceased people should count too. But how far back do we go? Back to the Pleistocene?


Hmm, nice point. Why limit ourselves to looking backwards, too? We can extrapolate something about what future generations will believe, and we probably expect them to outnumber us many times over...
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Giving to the Centre for Effective Altruism

Postby utilitymonster on 2012-06-06T16:55:00

Quick comment: If you use disagreement as a heuristic, the crucial thing is not how many people have view X, but something like the independence-weighted number of them that have view X.
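A minimal sketch of what "independence-weighted" counting could look like. The comment above doesn't specify a formula, so the clustering rule below (each believer contributes 1 divided by the size of the cluster that shares a common cause for the belief) is my own assumption for illustration.

```python
# Hypothetical independence weighting: believers whose view traces to a
# shared source split one vote among themselves; independent thinkers
# each count fully. The grouping scheme is an illustrative assumption.
from collections import Counter

def independence_weighted_count(believers):
    """believers: list of (person, cluster) pairs, where a cluster groups
    people whose belief has a common cause. Returns the effective count."""
    cluster_sizes = Counter(cluster for _, cluster in believers)
    return sum(1 / cluster_sizes[cluster] for _, cluster in believers)

# 1,000 people who inherit view X from one shared tradition contribute
# one effective vote; three independent thinkers contribute three.
believers = [("member_%d" % i, "tradition_A") for i in range(1000)]
believers += [("alice", "alice"), ("bob", "bob"), ("carol", "carol")]

print(independence_weighted_count(believers))  # close to 4.0
```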

utilitymonster
 
Posts: 54
Joined: Wed Jan 20, 2010 12:57 am

Re: Giving to the Centre for Effective Altruism

Postby Pat on 2012-06-06T18:45:00

Will addresses these arguments in section 7.2 (p. 25) with two responses.
First, it seems to me reasonable to have a fairly small credence in such views (depending of course on one’s precise epistemic situation). The above views have little in the way of compelling argument supporting them: this is why there are few who do research in academic ethics who hold such views. If one thinks that academic ethicists are considerably more likely to have correct moral views than the average American, then one could quite reasonably give the above views low credence. Moreover, the reason why one might think that one should have something approaching a moderate credence in such views is because of the large proportion of people who hold such views. But the size of the group is only relevant to the evidential import of disagreement if one person’s judgment, within that group, is independent of the others’: that is, if one cannot predict the view of one person, within the group, merely on the basis of knowing that another person within the same group holds the same view. But the judgments of those who hold such views are not independent of each other, because the views held above are rarely held by those who are not religious, and belief in those views seems in general to be a direct effect of the religious belief. One’s credence that these views are correct should not be much greater, therefore, than one’s credence that the religious views that support them are correct. And it seems to me it would be reasonable to have very low credence in such thick religious views.

Second, they are not issues on which there is no loss to not engaging in the purportedly immoral practice. According to any moral view on which increasing the amount of wellbeing in the world is morally important, it can be immoral not to engage in homosexual sex, or sex outside of marriage. And even on moral views for which there is no moral reason to increase one’s own wellbeing, there are still often strong prudential reasons for engaging in homosexual sex, or sex outside of marriage, or suicide.

Against the first point, someone might argue that the difference between academic philosophers and religious fundamentalists might be smaller than it seems. But it seems quite reasonable to give more weight to the opinions of philosophers than to those of cavemen. I think you should give little weight to the opinions of somebody who hasn't heard an argument that is important to your position on a given issue. Less weight should be given to opinions that can be explained as the result of biases.

The second point might be vulnerable to Pascalian reasoning if you give any weight at all to the views of religious fundamentalists. Homosexual sex might be fun, but the small chance of its causing you to suffer for eternity would seem to outweigh any transient pleasure derived from it.

Pat

Re: Giving to the Centre for Effective Altruism

Postby Ruairi on 2012-06-07T21:05:00

Alan Dawrst wrote:
Pat wrote:There are both utopian and dystopian scenarios, and it's not clear to me which are more likely.

Me too.


Sorry, this is a bit (very) off topic, but the amount of uncertainty regarding these incredibly important questions seems really high! Is there anywhere that most people who care about these issues will see it, where we could have a big discussion of the whole issue and maybe discuss how people could go about researching these questions?

EDIT: sorry not picking on either of you guys for being uncertain! I think there is a lot of uncertainty as regards this!
Ruairi
 
Posts: 392
Joined: Tue May 10, 2011 12:39 pm
Location: Ireland

Re: Giving to the Centre for Effective Altruism

Postby Brian Tomasik on 2012-06-08T14:07:00

Pat wrote:But it seems quite reasonable to give more weight to the opinions of philosophers than to those of cavemen.

Why? Couldn't this be due to our cognitive biases as well? Why are we more certain about which people to trust than which specific opinions to trust? Maybe this part is just taken as more axiomatic, so that beliefs about particular moral issues can be less axiomatic.

One’s credence that these views are correct should not be much greater, therefore, than one’s credence that the religious views that support them are correct.

I don't think these "religious" values have much at all to do with the veracity of the religion. Indeed, since we know the religion is false and yet these values got attached to it, they must have already existed outside of the religion. Else where did they come from?

Maybe there are some behaviors that derive from a belief that a man in the sky is watching you that you wouldn't maintain otherwise (e.g., hangups about sex: "These lesser males generally do not try to mate with females while a dominant male is watching because this would be a sure way to get beat up.") But these aren't really moral values so much as fears, unless they become exalted to the level of virtues.
Brian Tomasik

Re: Giving to the Centre for Effective Altruism

Postby Brian Tomasik on 2012-06-08T14:11:00

Ruairi wrote:Is there anywhere where most people who care about these issues will see it where we could have a big discussion of the whole issue and maybe discuss how people could go about researching these issues?

Well, we've had several on Felicifia and a few on Facebook. It's tough, because the future has exponentially many possible outcomes, and different people have different opinions on which outcomes are acceptable and which aren't. If we don't ever have an epic discussion, we can at least continue learning bits and pieces through smaller discussions.
Brian Tomasik

Re: Giving to the Centre for Effective Altruism

Postby Pat on 2012-06-15T17:08:00

Alan Dawrst wrote:
Pat wrote:But it seems quite reasonable to give more weight to the opinions of philosophers than to those of cavemen.

Why? Couldn't this be due to our cognitive biases as well?

Do you mean that cavemen are hairy and smelly, but philosophers are clean and make better dinner guests, so we irrationally give philosophers' opinions a free pass? Yes, but I don't think that's the whole story.
Why are we more certain about which people to trust than which specific opinions to trust? Maybe this part is just taken as more axiomatic, so that beliefs about particular moral issues can be less axiomatic.

Laziness. I don't have time to learn much philosophy, science, or other relevant information, so it's easier to just survey a field and average the beliefs of experts. Some people say "Experts? Bah! What do they know," since experts are bad at predicting things and are often wrong. But I still trust them more than I do Joe Everyman, or myself. Even if I could invest a lot of effort in learning about the relevant issues, I'd be just another expert.

Humility. My beliefs are based on unfounded assumptions, influenced by cognitive biases, and ignorant of important evidence. So are everybody else's, but by taking their views into account, I might be able to neutralize some of the deficiencies and get closer to the truth.

Alan Dawrst wrote:It's like blowing the seeds off of a dandelion on a windy day and then hoping you can go pick them all up without losing any.

You were talking about how high the ceiling is, rather than how tall the person is who's trying to stand up in the room.

Heehee you're good at coming up with metaphors. :)

Pat

Re: Giving to the Centre for Effective Altruism

Postby Brian Tomasik on 2012-06-20T14:37:00

Pat wrote:Do you mean that cavemen are hairy and smelly, but philosophers are clean and make better dinner guests, so we irrationally give philosophers' opinions a free pass? Yes, but I don't think that's the whole story.

There's no objective fact of the matter about which biases are "good" and which are "irrational." Occam's razor is a bias, but most of us like it, so we don't tend to call it such.

There's no fundamental reason to prefer cavemen or philosophers. It seems plausible that some people would hold the preference for philosophers on the same axiomatic level as they hold some of their moral views. If the preference for philosophers is not more fundamental than our preference for certain moral views, then the fact that philosophers disagree with those views doesn't necessarily mean we should give the philosophers any weight.

Pat wrote:Heehee you're good at coming up with metaphors. :)

And silly rhymes, like:

Suzy smiles at similes at the seashore.
The maid made a metaphor for the metal floor.
One met a four and became five.
Brian Tomasik

Re: Giving to the Centre for Effective Altruism

Postby Jesper Östman on 2012-07-24T20:27:00

lol

Jesper Östman
 
Posts: 159
Joined: Mon Oct 26, 2009 5:23 am

Re: Giving to the Centre for Effective Altruism

Postby Rupert on 2012-08-10T11:51:00

I was just having a look at Will's paper. If it is an open question whether killing a non-human animal is on a par with killing an innocent person, does that mean that there is a moderate probability that I should boycott the products of all commercial agriculture, including plant-based agriculture?

Rupert
 
Posts: 42
Joined: Tue May 26, 2009 6:42 am

Re: Giving to the Centre for Effective Altruism

Postby Jesper Östman on 2012-08-10T20:25:00

Why plant-based? For a classical utilitarian, long-term consequences should dominate in either case.

Jesper Östman

Re: Giving to the Centre for Effective Altruism

Postby Rupert on 2012-08-11T04:34:00

Plant-based agriculture causes animals to suffer and die in various ways: animals get chopped up in combine harvesters and killed by pesticides, and the removal of crop cover deprives animals of a food source and exposes them to predators. If I am a classical utilitarian, then I can argue that consuming the products of commercial plant-based agriculture gives me opportunities to prevent more suffering in other ways, because I would otherwise be too busy growing my own food, or would have to die of starvation. But I was talking about Will Crouch's paper, which says that I should give at least moderate credence to the view that many nonhuman animals have a right to life in a sense incompatible with utilitarianism.

Rupert

Re: Giving to the Centre for Effective Altruism

Postby Hedonic Treader on 2012-08-11T11:25:00

Rupert wrote:Plant-based agriculture causes animals to suffer and die in various ways: animals get chopped up in combine harvesters, are killed by pesticides, removal of crop cover deprives animals of a food source and exposes them to predators. If I am a classical utilitarian then I can argue that consuming the products of commercial plant-based agriculture gives me opportunities to prevent more suffering in other ways, because I would otherwise be too busy growing my own food, or would have to die of starvation.

Would you grow your own food without using harvesters, pesticides or removal of crop covers? Can 7 billion other people do the same?

I think there are examples of greenhouses that grow plant-based food industrially without chopping up animals. Furthermore, they may replace wild animal populations who would otherwise be chopped up by predators frequently. I don't know enough about the science of agriculture to conclude whether there are ways to solve these problems sustainably and reliably on a large scale. I do think it's good to raise the topic, but there should be a broader social discussion instead of do-it-yourself solutions - unless you can use them as a proof of concept that can be brought into the discussion and adopted by the rest of the world, never ignoring, of course, the economic realities of the food industries. In other words, it has to be cheap and scalable enough.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Giving to the Centre for Effective Altruism

Postby Rupert on 2012-08-11T13:05:00

Yes, but I'm talking about Will Crouch's paper about moral recklessness and moral caution

http://oxford.academia.edu/WilliamCrouc ... ion_Draft_

which says that I should give moderate credence to the view that killing a nonhuman animal is about as bad as killing an innocent person. What would follow if that were the case?

Rupert

Re: Giving to the Centre for Effective Altruism

Postby Jesper Östman on 2012-08-13T14:33:00

Ah, now I see (given the context). Seems like a good point. I guess many non-utilitarian positions might allow you to kill to survive, but I'm not sure whether they allow killing such numbers. Also, I guess most individual people involved here could survive (even if not all humanity could do it) without using such agro methods.

Jesper Östman

Re: Giving to the Centre for Effective Altruism

Postby Rupert on 2012-08-13T15:38:00

But on the other hand Will Crouch also says that we should also give moderate credence to the view that we have strong duties to benefit, and growing all your own food probably involves sacrificing opportunities to benefit sentient beings in other ways, so it looks like there is a conflict here. Where does the option of optimal expected choiceworthiness lie?

Rupert

