The following is cross-posted from my blog:
A lot of effective altruists think working on disasters threatening to seriously curtail the progress of humanity into the far future is orders of magnitude more important than events that merely improve the present. The reasoning behind this is that global catastrophic risks (GCRs) not only threaten to wipe out everyone on the planet, but also to eliminate countless generations that would have existed had we not gotten ourselves killed. I think GCRs are a good thing to have people working on, but I'm skeptical that they surpass more common-sense causes like deworming, vaccinating, and distributing insecticide-treated bed nets.
I think we need to make a distinction between two questions. The first question is: Where do all the utilons live? The second question, which is the one we should actually be asking, is: What can I do to maximize the world's goodness?
The first question is about identifying areas with high potential for impact. The second question is what effective altruism actually is. Knowing where the utilons live doesn't answer the fundamental EA research question. You can locate a mountain of utilons yet have no way to access them. If that's the case, then it's better to work on the things you can actually do something about.
The total amount of suffering on Earth is dominated by the pains of insects, invertebrates, and fish. This is where tons of utilons live. In other words, wild animal suffering reduction is an area with high potential for positive impact. If there were an action we could take that reduced a huge portion of insect suffering, for instance, it would dwarf nearly any other cause. We could call this an area that is home to a lot of utilons. But how do we access them? For insect suffering to rival other causes, we need to be talking about massive numbers of insects, and I know of no obvious thing we could do to reduce the suffering of that many insects. Even if there were such a thing, it likely wouldn't rival interventions we could make in less utilon-populated areas. If that's the case, then the reasonable approach toward insect suffering is to keep it on the back burner while we prioritize other issues.
I think the far future, as a cause, is a lot like insect suffering. Humanity's continued survival might be the most important variable to preserve if we want to maximize, and keep maximizing, the world's goodness. That's where all the utilons live. But what can we do about it? No individual far future-related cause stands out to me as especially worthwhile; in fact, none of them appear to rival the best present-related causes we know of. Most future-related causes endorsed by effective altruists are highly speculative and conjunctive. With this post, I'll make many weak arguments for why I think taking steps to reduce GCRs is not an optimal cause for most people to work on.
First, not only do these causes need to be based on arguments that actually work (e.g. AGI will come & that is dangerous), but they also require that specific important events occur within a narrow timeframe. In order for them to be our top priorities, they need to be imminent enough that we can justify ignoring other affairs for them. For example, if an intelligence explosion isn't going to happen for another 400 years, then MIRI's work is far less important than it would be if the intelligence explosion happens in 20 years. Their work would become much more replaceable, since good progress would likely be made on MIRI-relevant issues over the next 400 years anyway. At that point the work crosses the boundary between "effective altruism" and "ordinary science." From an effective altruist perspective, the timeframe is highly relevant to claims about a cause's relative importance.
Further, in order to prioritize between different GCRs, we need to accurately predict the order in which events will occur. If "Nanotechnology will come & that is dangerous" is true, but an intelligence explosion happens first, then nanotechnology will have turned out not to matter nearly as much. Or if nuclear war happens, we may pass into an era in which life extension is neither desirable nor possible to research. Just as competing methods lessen a cause's priority, competing ways for us to die lessen the threat posed by each individual risk, since we're uncertain about the order in which events will happen.
Given that the main reason for prioritizing GCRs is that they threaten to wipe out billions of potential future generations, we can and should also apply the above reasoning to events that would have happened had we survived a specific GCR. Maybe AGI kills us all while nanotechnology is on pace to wipe us out 5 years after the AGI apocalypse but just never gets the chance. If we expect there to be multiple global catastrophes lined up for us in a row, then (1) our efforts shouldn't be completely centered on the first one and (2) we can't speak as if each individual disaster is wiping away billions of generations. There's no reason to expect billions of generations if you foresee several serious existential risks. (The same argument applies to reducing infant mortality in really poor countries. The kid can very easily go on to die from something else well before "normal dying age," so the number of life years being saved is smaller than it first sounds.)
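To see how much this can matter, here's a toy calculation with made-up numbers (my own illustration, not a claim about actual probabilities): suppose we foresee five serious existential risks, and each of the others, left unaddressed, has a 50% chance of finishing us off. Then fully eliminating the first risk only secures the far future if we also survive the remaining four:

\[ \Pr(\text{survive the remaining four}) = (1 - 0.5)^{4} = 0.0625 \approx 6\%. \]

On those numbers, the expected value of averting the first catastrophe is closer to 6% of the astronomical future than to all of it, which is why we can't credit each individual disaster with billions of generations.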
These theories of the far future also usually leave out the details of the societies these technological advancements would spring from. There is often no mention of political struggles, cultural values, economic factors, laws and regulations, etc. I find it unlikely that any GCR scenario is largely unaffected by these things. As these major events draw closer to their arrival dates, public discussion about them will likely heat up, politicians will get elected based on how they view them, debates will be had, laws will be passed, and so on. Many far future theorists leave these details out and write from the perspective of technological determinism, as if inventors give birth to new creations like Black Swan events. I think sociopolitical pressures should be seen as positive forces, much more likely to prevent disasters from happening than to prevent humanity from dealing with them. When disasters become imminent enough to scare us, they do scare us, and people start handling them.
Another aspect of the future that often gets left out of these discussions is the possibility that the next billion generations will include astronomical amounts of suffering, possibly enough to outweigh future flourishing. The utility in the world right now is likely net negative. The thriving of humanity might just magnify this effect - for example, by spreading animal populations to other planets. Even if we do not expect suffering to outweigh flourishing, there will very likely exist huge amounts of both good and bad experiences, and we should consider what we roughly expect the ratio to be. We cannot naively talk about the immense worth of the far future without making any mention of the terrible things that future would include. Negative utilitarians should be especially interested in this point.
Here's an argument that I think has something to it, though I'm still working it out. Believers in the far future's immense net value may be making a philosophical mistake when they say the elimination of countless future generations is many orders of magnitude more terrible than the elimination of Earth's current 7 billion people. It's true that our 7 billion people could yield countless future generations, but this is also true of a single person. When a single person is killed, why don't we multiply the negative utility of this death by all the potential future humans it also takes away? That one individual could have had 2 kids, who each could have had 2 kids, and those kids would have had their own kids, and a billion generations later, we would have a monstrous family tree on our hands. If one death isn't a billion deaths, then why are 7 billion deaths worth 7 quintillion?
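To make the implied arithmetic explicit (the figure of a billion potential descendants per person is just my rough reading of the comparison, not a precise estimate):

\[ \underbrace{7 \times 10^{9}}_{\text{present people}} \times \underbrace{10^{9}}_{\text{potential descendants each}} = 7 \times 10^{18} \approx \text{7 quintillion}. \]

The question is why we accept that multiplication at the level of the whole population but not at the level of the individual, where the same logic would make one death count as roughly a billion.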
If one answers that one death is a billion deaths, then it seems to me that she is inflating the value of every individual human life way beyond what reason allows. For instance, this would make abortion a truly terrible crime. Another counter-argument could be that, in wiping out all humans, as opposed to only some, there's some kind of bonus emergent negative utility because there's no longer any possibility of future generations. The idea that groups of people should be morally valued more than the sum of the morally relevant individuals that comprise them has some problematic implications, however. We probably wouldn't want to say that it is better to save a family of five than five individuals who don't know each other. One could also argue that there is a relevant upper limit on the number of human lives that could exist in the far future, such that the Earth's current population does not significantly affect the world's future population because we will hit that upper limit anyway. That is not at all clear to me. If the response is that keeping alive a tiny probability of a massively positive future is worth more than a confirmed so-so outcome, then I think that's a case of Pascal's Mugging.
Lastly, as Holden Karnofsky pointed out in his recent conversation with MIRI, just "doing good things" has a really great track record, while the strategy of trying to direct humanity as a whole toward an optimal outcome has a comparatively weak one. The track record is so poor that ethical injunctions might even militate against such grand schemes. Probably because people are prone to overlooking the sociopolitical details, they are very bad at predicting how major cultural events will affect the future. Apocalyptic predictions in particular are known for striking out, though that comparison might be unfair. I see the flow-through effects favouring the "safe" side as well. Just doing good things like being nice to people, donating to great charities, not eating meat, and spreading good ideas is likely to be contagious. People like people who do obviously good things, whereas people are suspicious of those following some master plan that is supposed to pay off in a few decades or centuries, especially when those people are merely ordinary at everyday niceness. Valuing "weird" causes makes you seem less sympathetic, gets you taken less seriously, costs you funding and other opportunities, and leaves you generally more marginalized.
Despite these weaknesses, it might still be a good idea for you to work mainly on GCR reduction since (1) it may be closest to your background, (2) the area is underfunded and underexplored, and (3) having people out there on GCR patrol increases the probability of us receiving GCR updates regularly and well in advance of any disasters. The fact that something isn't the single best cause you could possibly be working on doesn't mean it isn't a good cause.
Effective altruism is about what you can actually do that would be most likely to maximize the world's goodness. "The Far Future" isn't a thing you can do - it's just where all the utilons live. Prioritizing specific GCRs seems to suffer from several problems when one takes an outside view. I see education and openness to compromise as the real best bets for global catastrophic risk reduction. Fortunately, they're easy things to promote on the side while we try to make today's world healthier and less painful.