Summary

Internal rates of return for charity are high, but they may not be as high as they seem naively. Haste is important, but because long-term growth is logistic rather than exponential, it's less important than has been suggested by some. That said, if artificial general intelligence (AGI) comes soon and exponential growth does not level off too quickly, naive haste may still be roughly appropriate. There are other factors for and against haste that parallel donate-vs.-invest considerations.

Introduction

In a thought-provoking blog post on 80,000 Hours, M. Wage describes "The haste consideration" for altruists. The idea is that if you can convince someone else to become as passionate and effective at altruism as you are, then you will have done as much good as if you had spent a lifetime on altruism yourself. I believe this is equivalent to thinking in terms of internal rates of return on activism, where the annual rate of return r is such that (1+r)^N = 2 for N being how many years it takes to create another person like yourself.
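This equivalence can be sketched numerically. The toy function below (the name is my own) solves (1+r)^N = 2 for r:

```python
def implied_annual_return(years_to_duplicate):
    """Annual rate r such that (1 + r)**N = 2: doubling your
    lifetime impact by creating one more person like yourself
    after N years."""
    return 2 ** (1 / years_to_duplicate) - 1

# If it takes 5 years to create another person like yourself:
print(f"{implied_annual_return(5):.1%}")  # prints 14.9%
```

So a 5-year duplication time corresponds to roughly a 15% internal rate of return on activism.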

On further examination, I think the haste consideration is overly optimistic about the time-value of activism, as I explain below.

Exponential returns?

In the financial domain, it's common to model returns on wealth as exponential. Why is this? There's probably extensive discussion in the economics literature of which I'm not aware, so let me take my best guess from a layman's perspective.

People have finite lifetimes. Most people expect to die within ~100 years, so they don't worry about financial considerations for themselves beyond that point. (Maybe they care about their kids and grandkids, but they probably don't think consciously beyond seven generations down the line.) If your goal is to maximize your wealth by the time you die, then you're going to focus on growing your wealth as much as possible in the intervening years.

At present, the earth allows for year-over-year economic growth that compounds exponentially: if you have $94 today, you can turn it into, say, $100 next year by investing in capital markets. But exponential growth can't continue forever. Eventually, the galaxy would be colonized to its maximum level, resource extraction would be as efficient as is physically possible, and there wouldn't be room left to grow. Maybe people could keep pushing farther and farther into space, but even if we could do that, and even if the resources farther out in space were as usable as those nearby, the volume occupied would be proportional to the cube of the time since expansion began, so the growth rate would be at most quadratic (the derivative of a cubic is a quadratic) -- polynomial rather than exponential. Maybe weird physics scenarios would allow for eternal exponential growth, but our default assumption should be that growth will eventually have to end.

(NOTE: I'm not supporting the expansion of humanity throughout the galaxy, because I think this could spread wild-animal suffering, sentient simulations, suffering subroutines, etc. I'm just pretending to be an economist for purposes of illustration.)

Now suppose that, unlike normal people who worry only about wealth within their lifetimes, you care about the sum of total wealth over the whole future. At the beginning, the growth of wealth is basically exponential, but toward the end, it's bounded. The overall growth curve might be logistic.

Take a simple example of logistic growth. The numbers represent wealth in each time period:

1 2 4 7 14 24 36 46 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 (end of universe)

If you'll only live for 5 years, the growth looks basically exponential to you -- with an almost 100% annual rate of return. But what if you care about the sum total of wealth over all time? Right now, the sum total is 884. If you sped things up one year, say by changing the first "1" into a "2":

2 4 7 14 24 36 46 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 (end of universe)

then the sum total of wealth would be 933, which is a 5.5% return. It's far less than the apparent 100% return that you saw during your lifetime.
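The arithmetic can be checked directly. Here's a minimal sketch in Python of the two wealth streams above:

```python
# Wealth per period in the baseline logistic scenario and the
# scenario shifted left by one year (plateau of 50 until the end).
baseline = [1, 2, 4, 7, 14, 24, 36, 46] + [50] * 15
shifted = [2, 4, 7, 14, 24, 36, 46] + [50] * 16

gain = sum(shifted) / sum(baseline) - 1
print(sum(baseline), sum(shifted), f"{gain:.1%}")  # 884 933 5.5%
```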

Altruism examples

The same idea has many applications. For example, it's tempting to suggest that promoting veg*ism has a high internal rate of return (say, 20%) because new veg*ans go on to convert their friends and family. This is true, but even if veg*ism grows 20% (say) year over year, it doesn't necessarily follow that veg outreach now is 20% more important than veg outreach next year. If everyone would eventually become veg anyway, we would only be shifting the logistic curve left, as in the example above, and the total returns in terms of reduced suffering would be far more modest.

One can make similar comparisons for most altruistic projects: Promoting economic development, building a new movement, etc.

It's important to note that I'm not saying activism is useless. For example, if we don't raise concern for the suffering of wild animals, maybe society will never come to adopt that position. Maybe it will instead slide toward deep ecology and spreading life far and wide throughout the universe. So even if the growth of the movement to prevent the spread of wild-animal suffering is logistic, working on the issue can tip the balance between which future we end up with for billions of years to come. Thus, the work definitely needs to be done. It's just that doing it next year may not be substantially worse than doing it today.

Reasons for haste

All of that said, there remain reasons why altruism sooner is more important than altruism later.

1. AGI

The human futures where the most suffering is on the line are those futures where humans develop AGI that shapes the course of our future light cone. It's possible that when the AGI is created, certain values will be locked in and maintained by goal-preservation mechanisms. If so, then shaping humanity's values would matter a lot before that locking in happens but hardly at all afterward.

In this case, there is indeed a sort of "lifetime" put on our activism: What we do only matters until AGI comes along. (If AGI never comes along, what we do matters much less anyway, so we can ignore those scenarios.) If our influence on AGI is a monotonically increasing function of the amount of support we have at the point of AGI's arrival, then a year of extra haste could matter a lot, depending on when the logistic plateau happens relative to AGI's arrival. If the plateau comes well before AGI (say, within a few decades), haste likely won't have a huge effect on AGI. But if the growth curve is still steep when AGI arrives, starting a year or two earlier would have been quite a bit better.
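This dependence on timing can be illustrated with a toy logistic model. The ceiling, rate, and midpoint parameters below are arbitrary assumptions of mine for illustration, not estimates:

```python
import math

def logistic(t, ceiling=1.0, rate=0.25, midpoint=30):
    """Fraction of eventual support reached by year t."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

def value_of_one_year_haste(t_agi):
    """Extra support at AGI arrival from shifting the whole
    growth curve left by one year."""
    return logistic(t_agi + 1) - logistic(t_agi)

# A year of haste matters on the steep part of the curve...
print(round(value_of_one_year_haste(30), 3))  # prints 0.062
# ...but almost nothing once the plateau has been reached.
print(round(value_of_one_year_haste(60), 3))  # prints 0.0
```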

2. Burnout risks

Each of us has a risk of jumping off the altruism boat. It may seem hard to imagine now, but as people age, they become less idealistic and change their habits and preferences. For example:

A study led by Harvard University psychologists reveals that this is a systematic and fundamental perceptual mistake. People of all ages can clearly see how they changed and matured over the past decade, but both younger and older people underestimate the amount they will change over the next 10 years. They seem to suffer from the delusion that the person they’ve become is the real version. The researchers call it the “end of history illusion.”

This means you should discount your future years according to the probability that you will have become disillusioned and apathetic by that time. This consideration creates a "discount rate" of its own.
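One way to picture such a discount rate, assuming (hypothetically) a constant annual probability of dropping out:

```python
def survival_discount(annual_dropout_prob, years):
    """Probability of still being an engaged altruist after
    `years`, assuming a constant per-year chance of becoming
    disillusioned and apathetic."""
    return (1 - annual_dropout_prob) ** years

# With a 5% yearly dropout chance, work planned 10 years out
# gets discounted to about 60% of its face value.
print(round(survival_discount(0.05, 10), 2))  # prints 0.6
```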

3. Financial returns

Exponential returns in capital markets will probably continue for several decades, so to some extent they can serve as a lower bound on the rate of return you should use for future years in financial matters. For example, if you're deciding whether to fundraise this year or next year, doing it this year is at least ~7% better in expectation -- or whatever figure you think is appropriate for real stock-market returns -- because money raised now can be invested and will have grown by next year. There may also be diminishing returns to wealth, which need to be factored in.

4. Other things

As the world population grows, your proportional share of influence over humanity may decline a little. As wealth and power grow around you, you have to run just to stay in place. There may be other effects like these that militate in favor of acting sooner.

Reasons for patience

This post has by now become a lot like the Donate vs. Invest thread. There, we cited the single most important reason why grasshoppers should exercise patience: Returns on wisdom. There's a lot to learn, and even after studying these matters for many years, my estimates of cost-effectiveness of various options change by factors of 1.5, 3, 10, 100, etc. They may even change sign: For example, I used to think reducing extinction risk was maybe net good in expectation; now I think it's probably net bad.

These factors may easily exceed internal rates of return on direct activism. But direct activism is one of the best ways to learn about the world in the first place (as well as to reduce your risk of burnout), so I don't think patience and haste are completely incompatible: Doing things now while reserving substantial time for longer-term reflection on the global landscape of cost-effectiveness may be the best approach. You can't get a complete picture of how to do effective activism from an armchair -- you have to actually spend some time trying it. But you also shouldn't get so caught up in the day-to-day details that you neglect broader contemplation.

Relevant comments from others

Here are some snippets from the comments on the original "haste consideration" thread that coincide with my remarks:

Zander Redwood:

all this seems to assume exponential growth of the EA sentiment given enough persuaders, but that seems ultra-optimistic. More likely we’ll quickly hit diminishing returns. The current members of 80K and GWWC are approximately the most enthusiastic and (given their Oxbridge/Ivy League locations) have some of the brightest futures of anyone in England and the US. Granted there’s still quite a lot of room to reach people, but basically the members we attract over the next couple of years will be the lowest hanging fruit.

Toby Ord:

1) The consideration that at the start of a movement, movement building could easily be the most important thing to work on is pretty solid and uncontroversial [emphasis added]. People often say: but your doing X alone won’t change much, you need to try to get thousands of people to start doing it. Or consider whether the founders of Google should have spent all their time coding or instead spent some of it hiring people to code, then also hiring people to do human resources (i.e. hiring to hire).

Sure it is slightly recursive, but there is nothing paradoxical about the basic structure, and on examination it is clearly not a pyramid scheme. Pyramid schemes are attempts to get benefits from people in the levels below you, who get benefits from those in the levels below them, which fails for many of the people because the population is finite. This is not what is happening here, as there is actually no private benefit passing at all, just a group of people working together.

2) The argument is about growing an organisation or movement in a lasting way. I agree with Ruairi that If one merely tried to get people involved to get people involved etc, you wouldn’t get much overall sustained growth (even if there was a promised future point at which they start doing first order work). It would be much more effective at convincing people to join if a large part of the organisation’s time was spent on first order work (maybe half?). This is true for 80,000 Hours and would have been true for other organisations such as Google or various movements.

3) I think the haste part of the argument (as opposed to the growth in general part) is sensitive to questions about how the rate of getting people to join drops off (e.g. is there an ultimate S-curve and what is the probability distribution of whether we reach that point). e.g. for simplicity, if there are only 4 people interested, then one might be able to convince them all early on, or later.

Future work on this topic

In general, it's hard to compute internal rates of return from charitable activities. We don't know the parameters (length, height, slope) of the logistic curve we face, so it's difficult to estimate the value of shifting it left by some amount, especially given the uncertainty over when AGI happens. Maybe the EA community will eventually build better frameworks of thought in this regard. It's quite possible such frameworks already exist and we haven't discovered them yet -- surely economists must have decision models beyond exponential discounting?