How to not support extinction-risk reduction


How to not support extinction-risk reduction

Postby Brian Tomasik on 2013-03-21T07:33:00

Note

I originally wrote this piece in an internal strategy forum with some other negative and negative-leaning utilitarians.

Summary

To avoid accidentally creating new extinction-risk reducers (ERRs), we should potentially do our outreach among existing ERRs rather than among a more general audience.

EA -> ERR

A few weeks ago, Adriano Mannino and Lukas Gloor raised the concern that, if I'm right that extinction-risk reduction (ERR) could be net bad in expectation, and if it's bad enough, then bringing new people into the EA movement could be net harmful, because it would create new ERRs alongside the valuable new people who care about reducing animal/future suffering. This is a difficult idea to swallow, but I don't think it's necessarily wrong. It might be wrong if it turns out that ERR isn't so bad after all -- not because I'm in favor of colonization but because reducing extinction risks also reduces non-extinction-level social catastrophes that could lead to greater violence and suffering in a post-human future.

However, I don't know if it's correct that ERR isn't net harmful. I worry in the back of my mind that it still might be. So the specter that Adriano/Lukas raised remains salient. It has been like a cloud hanging over my thoughts for the past few weeks, casting shadows of doubt on something I had previously assumed was very valuable -- namely, promoting general EA.

In the short term, my expectation for the sign of EA promotion may be near zero: maybe slightly negative insofar as it creates new ERRs, but also slightly positive insofar as it creates new animal advocates and people concerned with future suffering.

I guess this has been sort of true for a while: I've always been nervous about people donating to 80K / CEA because some of the money would be used on ERR work. That said, I haven't thought as much about whether there might be downsides in promoting 80K to my friends. Hopefully it's not seriously bad to do so, but maybe I should try to cut back on it?

I think EAA specifically is safer, because few EAA people are also interested in ERR (although maybe this will change? :/). The existence of EAA provides an animal-welfare outlet for 80K people who might otherwise drift to the ERR cause; OTOH, some EAA people will probably find ERR through EAA. It's not clear which directional flow is stronger. In the unlikely event that the net flow of resources to/from ERR through EAA is actually zero, the value of EAA would be only its direct value. However, this seems dubiously convenient, so I should keep thinking about it.

Animal Ethics (AE) seems even safer than EAA in the sense that it's not directly connected to CEA. The main risk seems to come from the fact that it's still part of the EA movement in general, and its members have a lot of ERR connections. (Indeed, some of my best friends are ERRs...)

Future-suffering outreach -> ERR?

More recently, Adriano proposed an even more sinister (if less likely) scenario: What if future-suffering outreach itself, even if it has few connections with the general EA movement, still creates ERRs because most people don't share our values? If we talk about futurism scenarios at all, many people may realize that colonization is important to them and may want to support that in spite of the risk of astronomical suffering.

The reason I think this is less likely is that the future-suffering message would be mainly focused on the suffering rather than on the possibilities for desirable space colonization. When people read 1984, they don't think, "Wow, what a great future we could have if we used those technologies for good ends." Instead, they say, "Those are scary things people do with technology (e.g., telescreens or whatever), so let's try to avoid that." I think something similar might happen with future suffering to some degree.

That said, I think future-suffering outreach still carries some risk in the same way the other EA causes do: It would inevitably connect new people with the general EA movement, and at that point they might decide they care about other things besides reducing suffering (e.g., life, consciousness, complexity, truth, beauty, art, religion, knowledge, or whatever).

The safest strategy

The above thoughts made me even more worried. I wasn't sure if I should pause what I'm doing, go hide under a rock for a few years, figure out what I think about all of this, and then come back. Doing that still doesn't sound like a bad idea. ;)

Eventually I realized another point, which in retrospect seems obvious: In addition to (a) risking creating new ERRs by bringing in new people or (b) doing nothing except studying the topic more, we have a third option: (c) recruit support and exert influence among people who are already ERRs, e.g., among LessWrong, MIRI, FHI, GCRI, etc. people. LessWrong is a big place and probably has some negative-leaning people hiding. Maybe we could help bring them out into the open so that they don't meekly go along with the ERR crowd.

Even for those who don't lean negative, there's still a very strong case for "making the future better rather than making sure there is a future" -- one that holds even for the most positive-leaning utilitarians. I have several non-negative-leaning utilitarian friends who believe it's better to shape the trajectory of the future in beneficial ways than to work on extinction risks. Indeed, as I told Peter Hurford:
even if reducing extinction were good (and I think it's not), there could be higher leverage in letting other people do it and trying to make the result better.

The difference between a future with and without utilitronium could be like 1000x the difference between survival and extinction, even if extinction were bad.

That's an example of how values can dominate the calculations relative to mere survival. In general, if you want to maximize a specific thing X, it's probably better to focus on X than to focus on survival per se.
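
To make the arithmetic concrete, here's a toy sketch (the 1000x figure comes from the quote above; everything else is purely illustrative): Suppose extinction has value 0, a surviving future without utilitronium has value V, and a utilitronium future has value 1000V. Shifting probability p of the surviving future from "no utilitronium" to "utilitronium" then gains about 999 * p * V in expectation, while reducing extinction probability by the same p gains only about p * V. Under these numbers, a unit of influence over the future's trajectory is worth roughly three orders of magnitude more than a unit of influence over whether there is a future at all.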

Even if your values are not idiosyncratic like this, it's still very plausible that the leverage is higher from guiding the future well through critical decision points rather than being one more voice in support of extinction-risk measures that will already be heavily invested in by governments.

Another thought is that it's also safe for us to recruit among people who already know a lot about x-risk, even if they're not ERRs themselves. The main danger comes from exposing new people to the ERR idea or convincing them to take it seriously when they had previously written it off as crazy due to small probabilities. (Future-suffering work is also based on Pascalian scenarios, so there is some danger in encouraging non-Pascalian-minded people to bite Pascalian bullets, if they might bite them in the wrong direction.)

One last group among whom it might be safe to recruit is negative utilitarians. If they're sufficiently serious about NU, they'll probably remain NU despite becoming more involved with futurism issues, although I do know at least one exception to this. There are a few NUs among the ERR crowd, so they seem like low-hanging fruit to win over. Or even better, maybe they have good reasons for supporting ERR despite NU, which I can learn from, though I'm currently doubtful.

Some next steps
  • I would like to have discussions with ERR people and see what they think about future-suffering stuff.
  • Michael Vassar's exhortation to "keep learning" remains as apt as ever. Reading/thinking/talking more about the sign of human survival and the sign of ERR efforts should remain one of the top priorities.

Re: How to not support extinction-risk reduction

Postby Hedonic Treader on 2013-03-21T09:14:00

Brian,

I have the impression that you mostly care about a very specific phenomenon, very intense suffering. It could be economical to focus on this specific phenomenon and target it directly. Convincing people that life should go extinct is harder than convincing them that the worst forms of suffering can and should be reduced.

There are some ways to target intense suffering specifically, e.g., make the lowest-hanging fruit of hedonic enhancement more attractive, spread WAS awareness, strengthen individual autonomy rights, fight speciesism, give proofs of concept for substituting functions that traditionally rely on creating suffering (in-vitro meat, wireheading sadism and aggression, ideas about pain control, alternative damage-handling mechanisms, and so on).

It's not even clear deliberate ERR is very effective.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon


Re: How to not support extinction-risk reduction

Postby Hedonic Treader on 2013-03-22T07:07:00

Elijah wrote: Dunno...I suspect that extinction is our only hope...

If UWF-wide extinction does not take place, some people will be subject to eternal silicon hells.

And if time travel is not invented, Giordano Bruno burns in 1600. Usual ERR does not affect UWF-wide extinction.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: How to not support extinction-risk reduction

Postby Arepo on 2013-03-22T12:48:00

I would hate to see the EA movement (especially its utilitarian subset) suffer because of differing estimates of what could, at least in principle, be an empirical question.

Given how strongly many people feel about ERR, you’re not going to manage to remove it as a key activity of either EAs or utilitarians. Having negative-sum clashes, where those who think like you directly oppose those who think like Bostrom, seems like a terrible outcome of all the scrupulously considered, well-intentioned hard work going into the cause.

A far better outcome, though logistically difficult, would be a gentleman’s agreement, where each side somehow agrees to redirect the efforts it would have spent opposing the other towards supporting something that both groups agree is good, but which has fairly neutral expected implications for ER.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.

Re: How to not support extinction-risk reduction

Postby Brian Tomasik on 2013-03-22T20:15:00

Thanks for the comments, everyone!

First, a meta-note: I wasn't sure it was a good idea to make this post, because I worried some people might take it the wrong way. However, I decided to go ahead because I think honest exploration of these issues in partnership with my fellow utilitarians (whether inclined toward ERR or not) is the best way forward. As a general heuristic for life, I've found that open discussion is very often better than hiding things, for so many reasons: establishing trust, learning things you didn't expect, reaching mutual understanding, and so on. I'm glad you all have taken this post in that light.

Arepo wrote: I would hate to see the EA movement (especially its utilitarian subset) suffer because of differing estimates of what could, at least in principle, be an empirical question.

It's possible that empiricism alone could resolve the question. If it were shown that even negative utilitarians should support ERR, this issue would go away. (I assign ~25-30% probability to the possibility that even NUs should support ERR.) If it were shown that even positive-leaning utilitarians should oppose ERR, the issue would also go away. However, there's a wide middle ground where the two sides would remain opposed on the basis of differing values.

I think studying this topic more is hugely important, and it's one of our main first steps. However, if you think you might be in a hole, the first step is to stop digging, so we also need to make sure we're not making things worse in the intervening period.

Arepo wrote: Given how strongly many people feel about ERR, you’re not going to manage to remove it as a key activity of either EAs or utilitarians.

It's not all or nothing. If we change the minds of 3% of ERRs (which I think is quite feasible), that could be really important. I think there are genuinely strong arguments why ERRs -- even the positive-leaning ones -- should focus on making the future better instead of making sure there is a future. It's not inconceivable to me that even 25+% of ERRs might change their minds on this (not just due to me but due to a more general movement in this direction that's already taking place).

Arepo wrote: A far better outcome, though logistically difficult, would be a gentleman’s agreement, where each side somehow agrees to redirect the efforts it would have spent opposing the other towards supporting something that both groups agree is good, but which has fairly neutral expected implications for ER.

That's an interesting idea. :) I remember Toby suggesting a while back that pro-life and pro-choice people should get together and agree to mutual disarmament so that their funds can be redirected toward something else.

The problem, as you say, comes in the implementation. You can verify that two countries have each destroyed a nuclear warhead. It's harder to verify that the ERRs and ERR critics have each redeployed one of their members toward a neutral issue. And how do you get both sides to accurately represent the resources that they could have been spending on the effort?

Re: How to not support extinction-risk reduction

Postby Brian Tomasik on 2013-03-22T20:23:00

Hedonic Treader wrote: I have the impression that you mostly care about a very specific phenomenon, very intense suffering.

Yeah.

Hedonic Treader wrote: It could be economical to focus on this specific phenomenon and target it directly. Convincing people that life should go extinct is harder than convincing them that the worst forms of suffering can and should be reduced.

I don't know the best approach now, but as noted above, the first step is to stop digging myself into a hole. If I'm not super-explicit about the ERR concerns, then even the most sincere efforts to spread compassion for those experiencing intense suffering could cause, if not more harm than good, at least more harm than I'd like.

Hedonic Treader wrote: There are some ways to target intense suffering specifically, e.g., make the lowest-hanging fruit of hedonic enhancement more attractive, spread WAS awareness, strengthen individual autonomy rights, fight speciesism, give proofs of concept for substituting functions that traditionally rely on creating suffering (in-vitro meat, wireheading sadism and aggression, ideas about pain control, alternative damage-handling mechanisms, and so on).

Thanks for the list. :) (I'm glad you're not gone from Felicifia after all!)
I love the quote in your signature referring to pain control.

Hedonic Treader wrote: It's not even clear deliberate ERR is very effective.

A few of my friends believe this, and this is one reason we're seeing some people shift away from direct ERR work. Would you like to elaborate on why it's not very effective? At this stage it's basically movement-building work in the same way WAS is -- setting the stage for more concrete work later on.


Re: How to not support extinction-risk reduction

Postby Hedonic Treader on 2013-03-23T05:18:00

Brian Tomasik wrote: It's harder to verify that the ERRs and ERR critics have each redeployed one of their members toward a neutral issue. And how do you get both sides to accurately represent the resources that they could have been spending on the effort?

Financial transparency + public donations to a mutually agreed-on alternative charity? Of course, people could be lying about the strength of their initial motivations. Even if someone says they never lie, they could be lying. 8-)

Would you like to elaborate on why it's not very effective?

I don't have a strong case, except that "saving the world" is a standard do-gooder meme, and practically all powerful institutions in the world want self-preservation. When they act against self-preservation, it may be for game-theoretic reasons that a small-to-medium charity could never overpower. I'm skeptical about anyone who thinks they affect the probabilities much without laying out exactly why.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: How to not support extinction-risk reduction

Postby Brian Tomasik on 2013-03-23T21:33:00

Hedonic Treader wrote: Financial transparency + public donations to a mutually agreed-on alternative charity?

Interesting! That seems like it could plausibly work. If there were an ERR org and a concern-about-ERR org, they could both direct funds that would have gone toward their programs to an agreed-upon charity. They'd have to believe they're about equally cost-effective per dollar at the margin in order to be willing to do this, and they'd also have to get their donors on board.

In any event, it should be added that cancelling out ERR work isn't obviously the best use of funds. Basic research may take higher priority in the short term. But even there, we might have opportunities for collaboration on researching future scenarios. We could learn about the facts together and only at the end diverge in our conclusions due to our differing values.

Hedonic Treader wrote: I don't have a strong case, except that "saving the world" is a standard do-gooder meme

But very few people mean literally reducing extinction risk. Usually they mean "making the world better," which is actually what I would prefer the ERRs did. :) I wonder if this phrase arose from a religious context?

Hedonic Treader wrote: practically all powerful institutions in the world want self-preservation.

Yes, but they don't want it nearly as much as the ERRs want it, so the incentive structure is distorted. Most people don't think about astronomical waste, so they don't view 100% extinction as that much worse than 99%. Furthermore, when we get to an individual level, the incentives are even more distorted: For a person, the end of the world isn't that much worse than his own death (maybe a little worse because of the deaths of his family, etc.). Yet people often take jobs that put them at higher risk of death in exchange for other benefits. So it's easy to imagine scientists willingly risking life on Earth for the sake of career advancement. The same goes for companies risking extinction for financial success, and governments risking it for geopolitical success.

Hedonic Treader wrote: I'm skeptical about anyone who thinks they affect the probabilities much without laying out exactly why.

Of course they wouldn't affect the probabilities much (probably <0.01% for even a multi-million-dollar organization), but even really tiny probabilities matter here.
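
For concreteness, a toy calculation (the <0.01% is from above; the stakes figure is just an illustrative astronomical-waste-style number): a 10^-4 shift in extinction probability, applied to a future containing, say, 10^30 sentient beings, corresponds to 10^-4 * 10^30 = 10^26 beings in expectation. The expected stakes stay enormous even when the probability change is minuscule.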

Re: How to not support extinction-risk reduction

Postby Arepo on 2013-05-10T12:51:00

Brian Tomasik wrote: I think there are genuinely strong arguments why ERRs -- even the positive-leaning ones -- should focus on making the future better instead of making sure there is a future.


No disagreement here - I've been arguing that (or at least that we should look more at alternative Pascalian wagers to ERR) pretty much since I learned of the concept of ERR.

But you seem to be skipping back and forth, much too freely for my comfort, between ERN (x-risk neutral) and ERI (x-risk increasing), which are hugely dissimilar. If you're going to realistically trade with anyone, you need to establish which category they fall into - there's obviously no point in anyone trading with an ERN, though there might be reasons for someone (possibly the ERN) to trick them into doing so.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

