Note
I originally wrote this piece in an internal strategy forum with some other negative and negative-leaning utilitarians.
Summary
To avoid accidentally creating new extinction-risk reducers (ERRs), it may be better to focus our outreach on existing ERRs rather than on a more general audience.
EA -> ERR
A few weeks ago, Adriano Mannino and Lukas Gloor raised the following concern: If I'm right that extinction-risk reduction (ERR) could be net bad in expectation, and if it's bad enough, then bringing new people into the EA movement could be net harmful, because it would create new ERRs alongside the valuable new people who care about reducing animal/future suffering. This is a hard idea to swallow, but I don't think it's necessarily wrong. It might be wrong if ERR turns out not to be so bad after all -- not because I'm in favor of colonization but because reducing extinction risks also reduces non-extinction-level social catastrophes that could lead to greater violence and suffering in a post-human future.
However, I don't know if it's correct that ERR isn't net harmful. I worry in the back of my mind that it still might be. So the specter that Adriano/Lukas raised remains salient. It has been like a cloud hanging over my thoughts for the past few weeks, casting shadows of doubt on something I had previously assumed was very valuable -- namely, promoting general EA.
In the short term, my expected value for EA promotion may be near zero: Maybe slightly negative insofar as it creates new ERRs, but also slightly positive insofar as it creates new animal advocates and people concerned with future suffering.
I guess this has been sort of true for a while: I've always been nervous about people donating to 80K / CEA because some of the money would be used on ERR work. That said, I haven't thought as much about whether there might be downsides to promoting 80K to my friends. Hopefully it's not seriously bad to do so, but maybe I should try to cut back on it?
I think EAA specifically is safer, because few EAA people are also interested in ERR (although maybe this will change? :/). The existence of EAA provides an animal-welfare outlet for 80K people who might otherwise drift toward the ERR cause; OTOH, some EAA people will probably find ERR through EAA. It's not clear which flow is stronger. In the unlikely event that the net flow of resources to/from ERR through EAA is actually zero, the value of EAA would be just its direct value. However, that outcome seems dubiously convenient, so I should keep thinking about it.
Animal Ethics (AE) seems even safer than EAA in the sense that it's not directly connected to CEA. The main risk comes from the fact that AE is still part of the EA movement in general, and its members have many ERR connections. (Indeed, some of my best friends are ERRs...)
Future-suffering outreach -> ERR?
More recently, Adriano proposed an even more sinister (if less likely) scenario: What if future-suffering outreach itself, even if it has few connections with the general EA movement, still creates ERRs because most people don't share our values? If we talk about futurism scenarios at all, many people may realize that colonization is important to them and may want to support that in spite of the risk of astronomical suffering.
The reason I think this is less likely is that the future-suffering message would be focused mainly on the suffering rather than on the possibilities for desirable space colonization. When people read 1984, they don't think, "Wow, what a great future we could have if we used those technologies for good ends." Instead, they think, "Those are scary things people do with technology (e.g., telescreens or whatever), so let's try to avoid that." I think something similar might happen with future suffering to some degree.
That said, I think future-suffering outreach still carries some risk in the same way the other EA causes do: It would inevitably connect new people with the general EA movement, and at that point they might decide they care about other things besides reducing suffering (e.g., life, consciousness, complexity, truth, beauty, art, religion, knowledge, or whatever).
The safest strategy
The above thoughts made me even more worried. I wasn't sure whether I should pause what I was doing, go hide under a rock for a few years, figure out what I think about all of this, and then come back. Doing that still doesn't sound like a bad idea.
Eventually I realized another point, which in retrospect seems obvious: In addition to (a) risking the creation of new ERRs by bringing in new people and (b) doing nothing except studying the topic more, we have a third option: (c) recruit support and exert influence among people who are already ERRs, e.g., among LessWrong, MIRI, FHI, GCRI, etc. people. LessWrong is a big place and probably has some negative-leaning people hiding in it. Maybe we could help bring them out into the open so that they don't meekly go along with the ERR crowd.
Even for those who don't lean negative, including the most positive-leaning utilitarians, there's still a very strong case for "making the future better rather than making sure there is a future." I have several non-negative-leaning utilitarian friends who believe it's better to shape the trajectory of the future in beneficial ways than to work on extinction risks. Indeed, as I told Peter Hurford:
Even if reducing extinction risk were good (and I think it's not), there could be higher leverage in letting other people do it and trying to make the result better.
The difference between a future with and without utilitronium could be like 1000x the difference between survival and extinction, even if extinction were bad.
That's an example of how values can dominate the calculation relative to the bare question of survival vs. extinction. In general, if you want to maximize a specific thing X, it's probably better to focus on X than to focus on survival per se.
Even if your values are not idiosyncratic like this, it's still very plausible that there's higher leverage in guiding the future well through critical decision points than in being one more voice in support of extinction-risk measures that governments will already be investing in heavily.
Another thought is that it's also safe for us to recruit among people who already know a lot about x-risk, even if they're not ERRs themselves. The main danger comes from exposing new people to the ERR idea or convincing them to take it seriously when they had previously written it off as crazy due to small probabilities. (Future-suffering work is also based on Pascalian scenarios, so there is some danger in encouraging non-Pascalian-minded people to bite Pascalian bullets, since they might bite them in the wrong direction.)
One last group among whom it might be safe to recruit is negative utilitarians. If they're sufficiently serious about NU, they'll probably remain NU despite becoming more involved with futurism issues, although I do know at least one exception to this. There are a few NUs among the ERR crowd, so they seem like low-hanging fruit to win over. Or, even better, maybe they have good reasons for supporting ERR despite NU that I could learn from, though I'm currently doubtful.
Some next steps
- I would like to have discussions with ERR people and see what they think about future-suffering stuff.
- Michael Vassar's exhortation to "keep learning" remains as apt as ever. Reading/thinking/talking more about the sign of human survival and the sign of ERR efforts should remain one of the top priorities.