Expected gain: John Broome's criticisms, etc.

Postby tog on 2011-07-02T20:34:00

What are people's thoughts on the appropriateness of focusing on expected gain to deal with uncertain or risky outcomes? I was prompted to ask this by the following passage in John Broome's 'Can there be a preference-based utilitarianism?':

Uncertainty can be handled within either a theory of right or a theory of good. Within the theory of right, utilitarians sometimes offer this principle: when choosing between acts, one should choose the one that gives the greatest expectation of good.[2] Daniel Bernoulli appears to have assumed this,[3] and it is a version of what I call ‘Bernoulli’s hypothesis’. It is implausible, at least on the face of it, because it implies one should be neutral about risk to good. The act that produces the greatest expectation of good may be more risky than other options: the variance in the amount of good it leads to may be higher than for other options. If so, perhaps one should choose a safer act that gives a lower expectation of good. We should not take Bernoulli’s hypothesis for granted, then. But once we give it up, it is not easy to produce a sufficiently general principle within the theory of right to handle uncertainty convincingly.

For that reason, I think uncertainty is better handled within the theory of good.[4] As a principle of right, I think utilitarians should say that, when choosing between acts, one should choose the one that will lead to the best prospect. Then, within their theory of good, they should have an account of the goodness of prospects. A prospect is a portfolio of possible outcomes, each of which might come about. The goodness of a prospect will depend on the goodness of its possible outcomes. Bernoulli’s hypothesis implies specifically that it is the expected goodness of its possible outcomes. But there is room within the theory of good for a more general account of the goodness of prospects.


He doesn't offer an argument besides the claimed implausibility of thinking that 'one should be neutral about risk to good' (though footnote 4 does say 'This argument is more fully spelt out in Weighing Goods, Section 6.1.'). I don't necessarily find this implausible, but perhaps I haven't thought about it enough. Are there any arguments on one side or the other?
tog
 
Posts: 76
Joined: Thu Nov 25, 2010 10:58 am

Re: Expected gain: John Broome's criticisms, etc.

Postby Brent on 2011-07-02T23:16:00

Do you know what exactly he means when he refers to being neutral about risk to good?

Brent
 
Posts: 23
Joined: Wed Jun 08, 2011 8:29 pm
Location: Washington, DC

Re: Expected gain: John Broome's criticisms, etc.

Postby Arepo on 2011-07-03T00:10:00

All outcomes are uncertain and involve risk, so you either need to 1) always consider expected gain, 2) never consider expected gain, or 3) justify some initially arbitrary-seeming dividing line between acceptable and excessive uncertainty/risk, beyond which you should change your principles.

3) seems too ludicrous for me to want to dignify it with a response here. Of 1 and 2 I much prefer 1. Part of the problem with 2 is finding a feasible alternative: any other principle is likely to seem highly artificial, self-defeating, or both. A casual argument for 1 is that a universe in which people follow it is likely to have higher actual utility than a universe in which they don't - even very long-shot calculations will eventually lead to higher utilities if we take the highest expected gain every time it comes up, as long as the reward and its probability aren't so extreme as to dwarf the universe we inhabit.

Even if they do, if we stand by the principle of taking every rational long shot and so end up taking many one-in-several-trillion gambles, then given enough of them some should still hit the target, and so we will still (probably) end up in a better universe than one in which we didn't.
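
To make that casual argument a little more concrete, here is a minimal simulation sketch (the payoffs and probabilities are made up purely for illustration): an agent who always takes a positive-expected-value long shot ends up, in aggregate, well ahead of one who always takes the safe option.

    import random

    random.seed(0)
    TRIALS = 100_000  # independent opportunities to gamble

    # Hypothetical gamble: pays 1,000 with probability 0.002 (expected
    # value 2 per trial), versus a guaranteed payoff of 1 per trial.
    longshot_total = sum(1000 for _ in range(TRIALS) if random.random() < 0.002)
    safe_total = 1 * TRIALS

    print(longshot_total, safe_total)  # roughly 200,000 vs. 100,000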
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Expected gain: John Broome's criticisms, etc.

Postby tog on 2011-07-03T10:03:00

Brent wrote:Do you know what exactly he means when he refers to being neutral about risk to good?


As you can see, it's a brief passage, so someone with a copy of the Weighing Goods book he refers to would be better placed to answer, but I take him to mean being neutral between two acts with the same expected utility, even if one has a worse potential downside (balanced by the spread of its potential upsides).
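
For illustration, here's a minimal sketch of that reading (the numbers are mine, not Broome's): two prospects with the same expected utility but different variance, which Bernoulli's hypothesis ranks as equally good.

    # Two prospects with the same expected utility but different spread;
    # Bernoulli's hypothesis ranks them as equally good.
    safe = [(1.0, 10)]               # 10 units for certain
    risky = [(0.5, 30), (0.5, -10)]  # same expectation, wider spread

    def expectation(prospect):
        return sum(p * x for p, x in prospect)

    def variance(prospect):
        m = expectation(prospect)
        return sum(p * (x - m) ** 2 for p, x in prospect)

    print(expectation(safe), expectation(risky))  # 10.0 10.0
    print(variance(safe), variance(risky))        # 0.0 400.0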
tog
 
Posts: 76
Joined: Thu Nov 25, 2010 10:58 am

Re: Expected gain: John Broome's criticisms, etc.

Postby tog on 2011-07-04T10:51:00

Arepo wrote:All outcomes are uncertain and involve risk, so you either need to 1) always consider expected gain, 2) never consider expected gain, or 3) justify some initially arbitrary-seeming dividing line between acceptable and excessive uncertainty/risk, beyond which you should change your principles.


Maybe someone who knows Broome's ideas better can explain whether he offers an option besides 3). He points to a 'more general account of the goodness of prospects' than expected gain in the passage I quoted, but doesn't explain it there.

Arepo wrote:A casual argument for 1 is that a universe in which people follow it is likely to have higher actual utility than a universe in which they don't - even very long-shot calculations will eventually lead to higher utilities if we take the highest expected gain every time it comes up, as long as the reward and its probability aren't so extreme as to dwarf the universe we inhabit.


That seems like it could be developed into a good argument for someone who values utility for its own sake.
tog
 
Posts: 76
Joined: Thu Nov 25, 2010 10:58 am

Re: Expected gain: John Broome's criticisms, etc.

Postby DanielLC on 2011-07-04T18:05:00

Risk is a state of our knowledge, not a state of the universe. The universe has a certain amount of good, whether or not we know what it is. Is there something intrinsically bad about not knowing how much good there is?

Uncertainty doesn't add linearly: independent uncertainties (standard deviations) combine as the square root of the sum of their squares. Suppose we know how good the universe is to within 1 million QALYs, and we then add an uncertainty of 1 thousand QALYs. Now we know how good the universe is to within sqrt((10^6)^2 + (10^3)^2), about 1,000,000.5 QALYs. In reality we don't know the total nearly that accurately: if we know the universe only to the nearest 10^20 QALYs and add an uncertainty of 1,000 QALYs, we add only about 1000^2/(2*10^20) = 5*10^-15 QALYs of uncertainty. In short, whatever uncertainty there is in our actions makes no noticeable difference to the uncertainty of the universe.
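
A quick check of that arithmetic (a sketch, assuming the two uncertainties are independent standard deviations):

    from math import sqrt

    # Independent uncertainties (standard deviations) combine in
    # quadrature: total = sqrt(a**2 + b**2).
    def combine(a, b):
        return sqrt(a ** 2 + b ** 2)

    print(combine(1_000_000, 1_000))  # ~1,000,000.5

    # When b is much smaller than a, the increase over a is roughly
    # b**2 / (2*a) - far too small for direct subtraction to resolve:
    print(1_000 ** 2 / (2 * 1e20))    # 5e-15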

There are better ways to decrease your uncertainty. Suppose we figure out how common life on other planets is, or the total size of the universe, or how happy animals are, or how many of them are sentient. Any of those would decrease our uncertainty by huge amounts.
Consequentialism: The belief that doing the right thing makes the world a better place.

DanielLC
 
Posts: 703
Joined: Fri Oct 10, 2008 4:29 pm

Re: Expected gain: John Broome's criticisms, etc.

Postby Brent on 2011-07-05T03:34:00

OK, I guess then he is saying we can't deal with uncertainty within a theory of moral decision-making, but only within a theory of the good we hope to promote, which can then be applied to a theory of decision-making? If so, I suppose that is right, but I'm not sure what difference it makes in utilitarianism - our theory of how one should act is by definition based on what the good we hope to promote is.

I think if we value each unit of utility/welfare equally, then we have to be neutral to risk; that is, an action which is guaranteed to produce 10 units of welfare is equal to one which has a 50% chance of producing 40 units of welfare but otherwise will lose 20 units ((.5*40)+(.5*-20)=10). Both are also equal to one which has a 25% chance of producing 100 units of welfare but otherwise will lose 20 units ((.25*100)+(.75*-20)=10).

This holds only if we value each unit of welfare equally. That is, if the first ten units of welfare are worth the same as the next ten, and so on, then it follows that we should be neutral to risk: losing a given amount of utility is exactly as bad as gaining the same amount is good.
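
Spelling out the arithmetic above (a rough sketch; the three prospects are the ones just described):

    # Three prospects, each with an expected welfare of 10 units,
    # which a risk-neutral view ranks as equally good:
    prospects = {
        "certain":   [(1.00, 10)],
        "coin flip": [(0.50, 40), (0.50, -20)],
        "long shot": [(0.25, 100), (0.75, -20)],
    }

    for name, prospect in prospects.items():
        print(name, sum(p * x for p, x in prospect))  # each prints 10.0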

Arepo: I'm not sure I understand exactly what you mean by your 3 options, but assuming I do:

Do you think that your (2) is even possible? It seems it would only work for decisions guaranteed to increase welfare, which probably don't exist in the real world. But even if they do exist, there are clearly other decisions where none of the possible choices is guaranteed to increase welfare. In those cases we can't help but take a risk with utility.

Brent
 
Posts: 23
Joined: Wed Jun 08, 2011 8:29 pm
Location: Washington, DC

Re: Expected gain: John Broome's criticisms, etc.

Postby Brian Tomasik on 2011-07-05T07:05:00

Agree with Arepo and DanielLC! I give some additional intuition pumps for risk neutrality in this piece.
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

