I'm wondering whether anyone is aware of literature that addresses the following sort of question (which I'm hoping to address in a paper I'm putting together):

A hundred people will die without your intervention. You can intervene in one of two (mutually exclusive) ways. Intervention [1] is guaranteed to save exactly one of the hundred, although you have no way to predict in advance who that will be (perhaps it's simply random). Intervention [2] has a 1% chance of saving all hundred people, but a 99% chance of saving no one. The question is: do you have any reason to choose one intervention over the other? And if the 1% chance were marginally adjusted down or up (say, to 0.99% or 1.01%), would that be enough to make it the case that you ought to choose [1] over [2], or [2] over [1], respectively?
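(For concreteness, here's the expected-value bookkeeping behind the case as a quick sketch — my own illustration, not drawn from any particular decision-theory source. The `expected_lives_saved` helper is just a name I made up for the product of probability and payoff.)

```python
def expected_lives_saved(p_success: float, lives_if_success: int) -> float:
    """Expected number of lives saved by an all-or-nothing intervention."""
    return p_success * lives_if_success

# Intervention [1]: saves exactly one person with certainty.
ev_1 = expected_lives_saved(1.0, 1)

# Intervention [2]: 1% chance of saving all hundred, else no one.
ev_2 = expected_lives_saved(0.01, 100)

# As originally stated, the two interventions have equal expected value,
# so expected value alone cannot break the tie.
print(ev_1, ev_2)

# A marginal nudge in the probability tips the expected-value comparison:
print(expected_lives_saved(0.0099, 100))  # below 1: [1] now dominates
print(expected_lives_saved(0.0101, 100))  # above 1: [2] now dominates
```

The point of the question, of course, is whether this tie (and these marginal tips) should settle what you ought to do.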

Presumably, almost every sort of consequentialist will say that, as originally presented, you should be neutral between the two options, and it's hard to see the argument for non-neutrality. But it seems that most people, at least when you push the cases far enough, will think that you ought to go for the sure thing and take the option guaranteed to save at least someone, rather than the option with the very small chance of saving far more people. Imagine, for instance, a tradeoff between lives saved now and some minuscule decrease in existential risk, just large enough to produce greater expected utility. There are lots of potential explanations here (e.g. scope insensitivity, or rule-of-rescue intuitions), but I'm curious whether anyone's aware of an explicit defense of the claim that we should prefer [1] to [2] in the case above (or a defense of any other position on that question). And, of course, does anyone have thoughts about what's going on in this case, psychologically and/or ethically?

Thx much!
