
The correct type of utilitarianism

Postby Hutch on 2012-06-10T10:45:00

There's been a discussion going on in an unrelated thread about negative vs. classical utilitarianism, but I think it's a discussion that deserves more space. So, I propose that we move it here.

Anyway, I posit that the correct form of utilitarianism is act, aggregate, classical, hedonistic, animal welfare (i.e. all beings that feel pain and pleasure) utilitarianism. (Someone remind me if I've forgotten to specify one of the many divides...)

I realize that correct is a very strong word, and perhaps not the correct one to use here. (See what I did there?) So, perhaps I should explain what I mean by this. I think that a philosophy should satisfy the following axioms:

1) Division-invariance: it should not matter how you divide up the universe, or if you subdivide it into multiple mini-universes; they should all give the same optimal action.

2) Deciding: a philosophy should put all possible universes into a totally ordered set (of unknown cardinality).

3) No intuition-fudge factors: this is largely just a particular instance of other rules, but it bears repeating: just because you've been brought up to be revolted by something, or because someone used the word "repugnant" when describing it, doesn't mean you should put a hack in your philosophy to try to get the outcome you want. That's not a philosophy any more; that's just you saying that what you think is right is in fact right.

4) It should be well defined: not something like "I want to maximize utility except when we're dealing with really evil people, like Hitler. I don't care about his happiness."

5) Logically consistent, consequentialist, blah blah blah.

A quick note: I'm defining 0 happiness to be dead/not born/unconscious/unfeeling; I'll explain why this has to be the case later.

And I think that act, aggregate, classical, hedonistic, all sentient beings utilitarianism is the way to go here.


If anyone wants to propose a different variation, I'm all ears.

Below, I'll explain why I think other variations on utilitarianism fail these tests:

___________________________________

Average vs. Aggregate:

Act utilitarianism fails axiom (1) quite badly. Take the following hypothetical. Say there are two planets, planet A and planet B. Planet A has 100 people, each at happiness 1. Planet B has 1 person at happiness 1.5. You have a nuclear bomb, and are given the choice of detonating it on planet A, killing everyone except for one resident with happiness 0.8. Do you do it? If you consider planet A to be a universe in and of itself, then the answer is no: average happiness will drop from 1.0 to 0.8. But if you consider planets A and B together to be one universe, then the answer is yes: average happiness will increase from ~1.005 to 1.15. (I'm adding in the one guy living on planet A at the end so we don't have to deal with division by 0--another lovely property of average utilitarianism.) In another post I can talk about why I think the repugnant conclusion is crap, but this post is going to be long enough as is.
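To make the arithmetic concrete, here's a minimal sketch in Python of the hypothetical above (the numbers are just the ones from the example; the code is purely illustrative):

planet_a = [1.0] * 100      # 100 people at happiness 1
planet_b = [1.5]            # 1 person at happiness 1.5
survivor = [0.8]            # the lone survivor on planet A after the bomb

def average(pop):
    return sum(pop) / len(pop)

# Planet A treated as its own universe: the bomb looks bad to an averagist.
print(average(planet_a), "->", average(survivor))                        # 1.0 -> 0.8
# Planets A and B treated as one universe: the same bomb looks good.
print(average(planet_a + planet_b), "->", average(survivor + planet_b))  # ~1.005 -> 1.15
# Total (aggregate) utility gives the same verdict under either division.
print(sum(planet_a), "->", sum(survivor))                                # 100.0 -> 0.8
print(sum(planet_a + planet_b), "->", sum(survivor + planet_b))          # 101.5 -> 2.3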


Act vs. Rule:

This one is pretty obvious. Depending on how you define rule utilitarianism, it either reduces to act utilitarianism (if you consider all choices to be possible "rules"), or rests on your definition of a "rule", making it ill-defined.

Classical vs. Negative:

Negative utilitarianism can have many different definitions. First, though, a general point about them: many are not clear on whether they apply to the negative emotions or experiences people have, or to the negative total utility of one person at some time. The first of these is going to fail a variant of (1): the correct action is going to depend on how I divide experiences up; for instance if simultaneously I punch you and you win $1,000,000, and the happiness you get from having just won the money is greater than the pain of the punch, then if I combine those two into one experience it'll be positive and thus not trigger NU, but if I split them up then the punch will be negative and will trigger NU. (If the punch were somehow related, maybe tangentially, to winning the money, then it might not be clear how to split it up.)

So, how about versions of NU that only care about people's total happiness functions? I'll try to tackle a few. One is that preventing any harm is more important than any gain; a variant is that there are certain really, really bad things which outweigh any potential good. In order to make this a philosophy, you have to define it better. Perhaps your aggregation method is (number of people whose happiness is below X, total utility), and you compare two situations by comparing the first entry of the tuple and using the second as a tie-breaker? Or is it (min(lowest happiness of anyone, X), total utility), with the same method of comparison? There are too many ways of defining it for me to talk about all of them now, but if anyone wants to propose a specific one as a philosophy, go ahead. Some of them are going to fail some of my axioms, but others are going to be totally consistent and well defined, just arbitrary and leading to pretty obviously wrong results.
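To show what I mean by "propose a specific one", here's a minimal sketch of the two tuple versions above (the threshold X, the choice of 0.0 for it, and the representation of a universe as a list of happiness levels are all mine, purely for illustration):

X = 0.0  # placeholder threshold for "suffering"

def key_count(universe):
    # (-number of people whose happiness is below X, total utility)
    return (-sum(1 for h in universe if h < X), sum(universe))

def key_min(universe):
    # (min(lowest happiness of anyone, X), total utility)
    return (min(min(universe), X), sum(universe))

# Python compares tuples lexicographically, so the first entry decides and
# total utility only breaks ties, e.g.:
a = [-5] * 10 + [10] * 10   # higher total, but some people below X
b = [1] * 20                # lower total, nobody below X
print(max([a, b], key=key_count) is b)   # True: the count rule prefers b
print(sum(a) > sum(b))                   # True: plain totals prefer a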

One particular one I will talk about, though, is what Alan Dawrst submitted in the other thread. If I'm interpreting it correctly, it's that the aggregating function is U = sum (over all beings) of {X*h if h<0, and h if h>0}, where h is happiness of the individual and X is some large positive number. My response to this is: it seems like what you're getting at is that you can imagine really horrible scenarios for people that are much much worse than you can possibly imagine someone's happiness is good; there is nothing that could happen for me that would make up for a few minutes of being burnt at the stake. I agree with this point--it's much easier to make someone very sad than very happy--but it seems to me like this is built into utilitarianism by the fact that it will generally be the case that really bad things will cause much more negative spikes in utility for a person than really good things cause positive spikes, and the factor of X built in to your proposal is just another way of saying that you originally underestimated how shitty life can get and constructed a utility function that didn't actually go as low as people feel unhappy, and then had to introduce some large coefficient to adjust for it.
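In code, that aggregation (as I read it) is just a piecewise sum; a minimal sketch, with the value of X left as a free parameter:

def weighted_total(universe, X=100.0):
    # Suffering (h < 0) counts X times as much as it would under a plain sum;
    # X = 100 is only a placeholder, not a value anyone has endorsed.
    return sum(X * h if h < 0 else h for h in universe)

print(weighted_total([-1, 20]))   # -80 with X = 100, even though the plain total is +19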

Hedonistic and Animal Welfare:

I'm going to group these two together because I think that they address largely the same point. Non-hedonistic (high and low pleasure) utilitarianism and humans-only utilitarianism are both ways of saying, "I like my type of happiness more than yours." (Is it a coincidence that "high pleasure", a concept invented by academics, is generally understood to mean pleasure from academic pursuit, or that a human-pleasure-only system was developed by humans?) Both of these, then, are poster boys for axiom (3): people putting hacks into the philosophy to justify their lifestyles. They also violate axiom (4), as they aren't even that close to well defined. Is an animal evolutionarily halfway between monkeys and humans "human"? How about aliens as smart as us? Similarly, what, exactly, is "high pleasure"? What does playing a board game count as? How about listening to music? How about listening to music you disapprove of?

Why 0 is defined as dead:

First, not feeling anything really should contribute 0 utility: neither good nor bad. From a different angle, any other choice of zero will mean that you have to decide how many unborn, unconceived, dead, brain-dead, or imagined people count towards total utility, because they now have an actual contribution. That's a little bit weird: dead people really shouldn't be influencing total utility. If you limit it only to live people, then first of all you're making an arbitrary distinction between people who are dead and people who are in a coma or totally brain-dead, not feeling anything but whose hearts are still kept beating by a machine; and second of all you're going to have to find some other point to define as 0.


Anyway, those are my thoughts. Does anyone want to propose a different type of utilitarianism?


Re: The correct type of utilitarianism

Postby Brian Tomasik on 2012-06-10T11:42:00

Welcome, Hutch!

Hutch wrote:(Someone remind me if I've forgotten to specify one of the many divides...)

Also "total" as opposed to "prior existence."

Hutch wrote:4) It should be well defined: not something like "I want to maximize utility except when we're dealing with really evil people, like Hitler. I don't care about his happiness."

Surely the category of "really evil people" could be made precise with more explanation. But I agree that I strongly dislike the intuition that the statement expresses. And what would we do with young artist Hitler, anyway?

Hutch wrote:And I think that act, aggregate, classical, hedonistic, all sentient beings utilitarianism is the way to go here.

If anyone wants to propose a different variation, I'm all ears.

I'm with you on everything except maybe act and maybe classical depending on your pain:pleasure exchange rate. I think rule utilitarianism and global consequentialism have something going for them, and in any event, you need to specify a decision theory (causal, evidential, timeless, subjunctive, etc.).

Hutch wrote:This one is pretty obvious. Depending on how you define rule utilitarianism, it either reduces to act utilitarianism (if you consider all choices to be possible "rules"), or rests on your definition of a "rule", making it ill-defined.

I'm guessing we mainly differ on terminology. We both agree that "good consequences overall" are the name of the game. But we should probably have a system that can handle things like Parfit's hitchhiker and the like. (That example is about egoism, but we can imagine similar cases for act-utilitarianism, like if you didn't want to give Paul the money because you could prevent more suffering by donating it to The Humane League.)

Hutch wrote:for instance if simultaneously I punch you and you win $1,000,000, and the happiness you get from having just won the money is greater than the pain of the punch, then if I combine those two into one experience it'll be positive and thus not trigger NU, but if I split them up then the punch will be negative and will trigger NU.

I think it would depend on whether your brain brings the suffering to the level of consciousness or whether it suppresses the suffering. For example, if the sum is -10 + 20 before rising to the level of awareness, then the person only ever actually feels +10, so it's fine by NU. But if the person feels -10 and +20 both consciously, then NU would be against it, I think.

Hutch wrote:One particular one I will talk about, though, is what Alan Dawrst submitted in the other thread. If I'm interpreting it correctly, it's that the aggregating function is U = sum (over all beings) of {X*h if h<0, and h if h>0}, where h is happiness of the individual and X is some large positive number.

Yes. Or it might be {X*h if h < H, and h if h >= H} for some H < 0 that defines the boundary of "really bad suffering." Maybe I wouldn't multiply pinpricks by X but would for burning at the stake.
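In code, something like this sketch (X and H are free parameters I haven't pinned down; the numbers below are placeholders):

def weighted_total(universe, X=100.0, H=-50.0):
    # Only suffering below the threshold H gets the extra weight X;
    # milder negative experiences (pinpricks) count at face value.
    return sum(X * h if h < H else h for h in universe)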

Hutch wrote:and the factor of X built in to your proposal is just another way of saying that you originally underestimated how shitty life can get and constructed a utility function that didn't actually go as low as people feel unhappy, and then had to introduce some large coefficient to adjust for it.

Yes, this may be right. What I'm trying to do when I talk about exchange rates is say that I think many people underestimate how bad some things are when they're coming up with how much happiness would be needed to outweigh those things. Because people tend to do this underestimation, I can't just look at people's actual behavior to decide what the pain/pleasure tradeoff is, because I think people often make wrong decisions in this regard. So I'm saying, very roughly, "take the exchange rate that your typical person would use, and then multiply the importance of severe pain by some amount to get the importance that I would use."

In general, these tradeoffs among emotions aren't built into the fabric of the universe; they're things we have to specify. So it's not odd that I have to tell you what X is. We always have to do that for any comparison of emotions. What's the tradeoff between burning at the stake vs. drowning? Between being terribly afraid and being hopelessly depressed? Between eating chocolate and laughing with friends? The brain has dozens of different emotions that all need to be traded off with one another. There's no 'right' answer for how this must be done, apart from what we feel about the matter.

Hutch wrote:Similarly, what, exactly, is "high pleasure"?

I agree with you that the "high pleasure" stuff is silly. I like the following quote originally mentioned on another forum:
Progress has been facilitated by the recognition that hedonic brain mechanisms are largely shared between humans and other mammals, allowing application of conclusions from animal studies to a better understanding of human pleasures. In the past few years, evidence has also grown to indicate that for humans, brain mechanisms of higher abstract pleasures strongly overlap with more basic sensory pleasures. [...]

Most uniquely, humans have many prominent higher order, abstract or cultural pleasures, including personal achievement as well as intellectual, artistic, musical, altruistic, and transcendent pleasures. While the neuroscience of higher pleasures is in relative infancy, even here there seems overlap in brain circuits with more basic hedonic pleasures (Frijda 2010; Harris et al. 2009; Leknes and Tracey 2010; Salimpoor et al. 2011; Skov 2010; Vuust and Kringelbach 2010). As such, brains may be viewed as having conserved and re-cycled some of the same neural mechanisms of hedonic generation for higher pleasures that originated early in evolution for simpler sensory pleasures.


Hutch wrote:any other choice of zero will mean that you have to decide how many unborn, unconceived, dead, brain-dead, or imagined people count towards total utility, because they now have an actual contribution.

Not to mention rocks, door handles, air molecules, etc.

Re: The correct type of utilitarianism

Postby Hutch on 2012-06-10T13:46:00

Alan Dawrst wrote:Welcome, Hutch!

Hutch wrote:(Someone remind me if I've forgotten to specify one of the many divides...)

Also "total" as opposed to "prior existence."

Good call. Unsurprisingly, I think total is absolutely the way to go; prior existence puts up yet another type of arbitrary barrier between those whose happiness we care about and those we don't. It also leads to weird things where the optimal decision leads to sub-optimal situations, even by the same metric, as the function is constantly changing.
Alan Dawrst wrote:
Hutch wrote:4) It should be well defined: not something like "I want to maximize utility except when we're dealing with really evil people, like Hitler. I don't care about his happiness."

Surely the category of "really evil people" could be made precise with more explanation. But I agree that I strongly dislike the intuition that the statement expresses. And what would we do with young artist Hitler, anyway?

It's a lot trickier than it looks to make this precise. For instance, do you mean people who did good or meant to do good? Meant to do good is obviously hopeless to define, so let's look at actually did good. In order to measure this you need to have some notion of what the world "would have been like" without them; a concept very ill defined, as you have to somehow choose which counterfactual universe you're comparing this one to.

But even if you are willing to sweep that under the rug, there are much more fundamental problems. For instance, was Hitler bad for the world? He killed lots of people but he also brought the world into the shape it now has, jump started the American economy, and--through the memory of his actions--has made a world much more sensitive to atrocities than it was before.
Alan Dawrst wrote:
Hutch wrote:And I think that act, aggregate, classical, hedonistic, all sentient beings utilitarianism is the way to go here.

If anyone wants to propose a different variation, I'm all ears.

I'm with you on everything except maybe act and maybe classical depending on your pain:pleasure exchange rate. I think rule utilitarianism and global consequentialism have something going for them, and in any event, you need to specify a decision theory (causal, evidential, timeless, subjunctive, etc.).

I'm still wading through decision theories, so I'll respond to this later; for now, what are you defining rule utilitarianism as? I've seen a number of different definitions... I'll get to pain:pleasure rates later.

Alan Dawrst wrote:
Hutch wrote:This one is pretty obvious. Depending on how you define rule utilitarianism, it either reduces to act utilitarianism (if you consider all choices to be possible "rules"), or rests on your definition of a "rule", making it ill-defined.

I'm guessing we mainly differ on terminology. We both agree that "good consequences overall" are the name of the game. But we should probably have a system that can handle things like Parfit's hitchhiker and the like. (That example is about egoism, but we can imagine similar cases for act-utilitarianism, like if you didn't want to give Paul the money because you could prevent more suffering by donating it to The Humane League.)


I'm not sure I understand what the paradox is here. Fundamentally, if you are going to donate the money to THL and he's going to buy lots of cigarettes with it, it's clearly in an act utilitarian's interest to keep the money (as long as this doesn't have consequences down the road), so you won't actually give it to him if he drives you. He might predict this and thus not give you the ride, but then your mistake was letting Paul know that you're an act utilitarian, not in being one. Perhaps this was because you've done this before, but then not giving him money the previous time was possibly not the correct decision according to act utilitarianism: although you can do better things with the money than he can, you might run into problems later if you keep it. Similarly, I could go around stealing money from people because I can spend the money in a more utilitarian way than they can, but that wouldn't be the utilitarian thing to do, because I would be leaving out of my calculation the fact that I may end up in jail if I do so.

Alan Dawrst wrote:
Hutch wrote:for instance if simultaneously I punch you and you win $1,000,000, and the happiness you get from having just won the money is greater than the pain of the punch, then if I combine those two into one experience it'll be positive and thus not trigger NU, but if I split them up then the punch will be negative and will trigger NU.

I think it would depend on whether your brain brings the suffering to the level of consciousness or whether it suppresses the suffering. For example, if the sum is -10 + 20 before rising to the level of awareness, then the person only ever actually feels +10, so it's fine by NU. But if the person feels -10 and +20 both consciously, then NU would be against it, I think.

What, exactly, do you mean by "the person only ever actually feels +10"? Fundamentally his utility is +10 at that point, but there's some complex emotional process going on in his head which is somewhat but not fully conscious of the punch and the money, as well as his stomach which is slightly uncomfortable because he's hungry and the numbness in his left leg because it fell asleep and his lingering resentment of his ex girlfriend and all the other crap going on in his mind.
Alan Dawrst wrote:
Hutch wrote:One particular one I will talk about, though, is what Alan Dawrst submitted in the other thread. If I'm interpreting it correctly, it's that the aggregating function is U = sum (over all beings) of {X*h if h<0, and h if h>0}, where h is happiness of the individual and X is some large positive number.

Yes. Or it might be {X*h if h < H, and h if h >= H} for some H < 0 that defines the boundary of "really bad suffering." Maybe I wouldn't multiply pinpricks by X but would for burning at the stake.

Hutch wrote:and the factor of X built in to your proposal is just another way of saying that you originally underestimated how shitty life can get and constructed a utility function that didn't actually go as low as people feel unhappy, and then had to introduce some large coefficient to adjust for it.

Yes, this may be right. What I'm trying to do when I talk about exchange rates is say that I think many people underestimate how bad some things are when they're coming up with how much happiness would be needed to outweigh those things. Because people tend to do this underestimation, I can't just look at people's actual behavior to decide what the pain/pleasure tradeoff is, because I think people often make wrong decisions in this regard. So I'm saying, very roughly, "take the exchange rate that your typical person would use, and then multiply the importance of severe pain by some amount to get the importance that I would use."

In general, these tradeoffs among emotions aren't built into the fabric of the universe; they're things we have to specify. So it's not odd that I have to tell you what X is. We always have to do that for any comparison of emotions. What's the tradeoff between burning at the stake vs. drowning? Between being terribly afraid and being hopelessly depressed? Between eating chocolate and laughing with friends? The brain has dozens of different emotions that all need to be traded off with one another. There's no 'right' answer for how this must be done, apart from what we feel about the matter.



I completely agree that perhaps when we are practically evaluating a situation it's correct to make sure to mentally adjust for the fact that it's easy to underestimate people's pain, just as it can be useful to follow rules because people get confused when you don't, and as it can be useful not to tell your friend Bill that his ex-girlfriend has a new boyfriend because it'll make him irrationally upset. But that doesn't change the fact that what you should be maximizing is just total utility, and that rules aren't inherently valuable, just practically so, and that Bill's ex-girlfriend does, in fact, have a new boyfriend. I guess what I'm saying is that your model may be useful for everyday decisions, but it's not inherently the correct model, just a good approximation that's easier to use than the correct one.

Alan Dawrst wrote:
Hutch wrote:Similarly, what, exactly, is "high pleasure"?

I agree with you that the "high pleasure" stuff is silly. I like the following quote originally mentioned on another forum:
Progress has been facilitated by the recognition that hedonic brain mechanisms are largely shared between humans and other mammals, allowing application of conclusions from animal studies to a better understanding of human pleasures. In the past few years, evidence has also grown to indicate that for humans, brain mechanisms of higher abstract pleasures strongly overlap with more basic sensory pleasures. [...]

Most uniquely, humans have many prominent higher order, abstract or cultural pleasures, including personal achievement as well as intellectual, artistic, musical, altruistic, and transcendent pleasures. While the neuroscience of higher pleasures is in relative infancy, even here there seems overlap in brain circuits with more basic hedonic pleasures (Frijda 2010; Harris et al. 2009; Leknes and Tracey 2010; Salimpoor et al. 2011; Skov 2010; Vuust and Kringelbach 2010). As such, brains may be viewed as having conserved and re-cycled some of the same neural mechanisms of hedonic generation for higher pleasures that originated early in evolution for simpler sensory pleasures.


Hutch wrote:any other choice of zero will mean that you have to decide how many unborn, unconceived, dead, brain-dead, or imagined people count towards total utility, because they now have an actual contribution.

Not to mention rocks, door handles, air molecules, etc.


Re: The correct type of utilitarianism

Postby Hutch on 2012-06-10T13:54:00

By the way, my off-the-cuff impression of decision theories is that they're a load of crap dreamed up by people who kept forgetting to include other people's reactions to their actions in their evaluation of decisions, but I'll have to spend a bit more time to have a more thorough reply. Half of the situations invented to require them seem to be non-paradoxes, and the other half seem to be ill-defined. (e.g. it's not well defined how the alien in the one box/two box paradox "knows" what you're going to do. If he can read your mind then you should think really hard about how you're only going to pick one box, or possibly try to convince yourself you will to make it more convincing, but when push comes to shove you'll obviously take two boxes if you're intelligent (and never have to play this game again; otherwise it could be more like an iterated prisoners' dilemma). If he's just really good at predicting things based on the state of molecules or something like that, then you should adjust your pre-box-allocation actions to be those that a one-box person would have, possibly even by forcing yourself to become one--but in the end there isn't much of a paradox here, just a person who's punishing you depending on how you act and appear, another thing you should take into your calculation.)


Re: The correct type of utilitarianism

Postby Brian Tomasik on 2012-06-10T14:43:00

Hutch wrote:For instance, do you mean people who did good or meant to do good?

Meant to do good.

(Note: I'm not arguing for the position you mention; I'm just playing devil's advocate.)

Hutch wrote:Meant to do good is obviously hopeless to define

Why? Juries make these assessments all the time. I would conjecture without evidence that the inter-jury agreement rate is pretty high on such matters.

Hutch wrote:In order to measure this you need to have some notion of what the world "would have been like" without them; a concept very ill defined, as you have to somehow choose which counterfactual universe you're comparing this one to.

But this is no different from when we evaluate any utilitarian choice.

Hutch wrote:what are you defining rule utilitarianism as?

I'm not the best academic expert on real utilitarian philosophers, so don't quote my answer in that context. However, what I mean by rule utilitarianism is that there are some cases where you should stick to pre-decided rules of action even if it turns out that in a particular case, doing so seems suboptimal. For example: "If you make a promise, always keep it." Once Paul Ekman takes you out of the desert, it might seem suboptimal to actually pay him the $50 instead of donating it to a better cause, but the rule about keeping your promises would advise you to do so anyway.

Another example is if you're playing prisoner's dilemma and commit to a rule to punish defections. Once your partner has defected, it might seem like the damage has been done, and you shouldn't cause more suffering. But globally, following your rule without wavering might be better for everyone.

In general, it can be really useful in game theory to be able to make binding commitments that the other side knows you'll stick to no matter what.
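Here's a toy sequential version of that (all the payoff numbers are made up, just to show the shape of the argument): the partner moves first and can predict whether I'm the type who punishes defection.

def partner_move(i_punish_defection):
    # Defecting nets the partner 5 if it goes unpunished but only 1 if punished,
    # versus 3 for mutual cooperation, so a credible punisher deters defection.
    return "cooperate" if i_punish_defection else "defect"

def total_welfare(partner, i_punish_defection):
    if partner == "cooperate":
        return 3 + 3                                    # both do fine
    return (1 + 1) if i_punish_defection else (0 + 5)   # punished vs. unpunished defection

for punisher in (False, True):
    move = partner_move(punisher)
    print(punisher, move, total_welfare(move, punisher))
# False defect 5
# True cooperate 6
# Ex post, punishing a defection only destroys welfare (2 vs. 5), but being
# known to punish is what makes the better outcome (6) available at all.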

Hutch wrote:If he's just really good at predicting things based on the state of molecules or something like that, then you should adjust your pre-box-allocation actions to be those that a one-box person would have, possibly even by forcing yourself to become one

Yes, I think it's more this scenario, where Omega predicts your actions based on molecules and such. In fact, the version I usually hear is that Omega simulates your brain beforehand to see what you do. Therefore, you have to one-box to get the $1 million.

This document goes through decision theories quite thoroughly, although I haven't read the majority of it yet myself.

Hutch wrote:He might predict this and thus not give you the ride, but then your mistake was letting Paul know that you're an act utilitarian, not in being one.

What if you've already let on to him that you're an act-utilitarian by accident? Or what if he asks you directly if you're really going to keep your promise, so that if you answer you'll have to lie, and he'll see it?

Point is, being an act-utilitarian can have bad side-effects in some situations, and in possible worlds where these side-effects happen enough, you should stop being one altogether.

Hutch wrote:What, exactly, do you mean by "the person only ever actually feels +10"?

I only care about emotion that is consciously perceived. (Yes, some people disagree with this.) If the brain does the sum -10 + 20 before conscious awareness of what the two inputs were, then the person only ever actually felt +10 consciously, so only the +10 is morally relevant IMO.

Hutch wrote:I guess what I'm saying is that your model may be useful for everyday decisions, but it's not inherently the correct model, just a good approximation that's easier to use than the correct one.

Almost, but not quite what I mean. There is no "inherently correct model." Pain, anxiety, depression, fear, loneliness, anger, happiness, love, orgasm, flow, relief, excitement, etc. are all different emotions. In order to compare them, we have to make up exchange rates that we feel are sensible. There's no universally correct exchange rate between fear and love -- or for that matter, even between two types of fear.

Re: The correct type of utilitarianism

Postby DanielLC on 2012-06-10T22:45:00

Average vs. Aggregate:

Act utilitarianism fails axiom (1) quite badly.


You mean average utilitarianism?

Deciding: a philosophy should put all possible universes into a totally ordered set (of unknown cardinality).


Does this mean that given any two non-identical universes, one is always strictly better? For example, that given two universes that do not contain sentient beings, one is better than the other?

I'm against this, but there's an infinitesimal probability of it actually coming up.

The only other difference I have is what I'd call weighted animal welfare. I believe that some animals are more sentient than others, and therefore more important.


Re: The correct type of utilitarianism

Postby utilitymonster on 2012-06-10T22:54:00

Alan, based on what you've said here it seems misleading to describe yourself as any kind of negative utilitarian. It seems much more accurate to say that you're a total hedonistic utilitarian who believes that people have a cognitive bias that makes them systematically underweight painful episodes of their lives. This would seem to remove lots of confusion that ensues when people try to interpret your "non-pinprick negative utilitarianism." Your disagreement with standard utilitarians is about well-being, not about goodness of outcomes.

I think most ethicists would think that if someone described himself as a negative utilitarian, he wouldn't be saying that, in terms of his own interests, pain was much more important than pleasure; only that pain has much more weight when assigning moral value to outcomes. You have the opposite view, holding that people are often wrong about what is in their own interest, presumably owing to some kind of cognitive illusion. What is your story about the relevant cognitive illusion? (Alternatively, you might say that they aren't making any mistakes, but that you just happen to have different capacities for sympathy from most people.)


Re: The correct type of utilitarianism

Postby Brian Tomasik on 2012-06-11T03:55:00

utilitymonster wrote:Alan, based on what you've said here it seems misleading to describe yourself as any kind of negative utilitarian. [...] This would seem to remove lots of confusion that ensues when people try to interpret your "non-pinprick negative utilitarianism."

Yes. A few things to say. I've never been a pinprick negative utilitarian. I have at times toyed with non-pinprick negative utilitarianism when the experience is as bad as, say, 2 minutes of burning at the stake. But most of the time I don't adhere to this position, because it causes a logical problem that many (Adriano, Pablo, Jonatas, etc.) have pointed out: Namely that 2 minutes of burning at the stake with temperature 1548 degrees C is certainly better than, say, 20 minutes of burning at the stake with temperature 1547 degrees C, which is certainly better than 200 minutes of burning at the stake at temperature 1546 degrees C, ..., which is better than 2 * 10^1500 minutes of burning at the stake at temperature 48 degrees C. But enduring a heat of 48 degrees C for 2 minutes sounds bearable and could be outweighed by a sufficient amount of happiness IMO. So yes, there is some amount of pleasure that would outweigh burning at the stake, but it might be a really, really high amount. :)
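Spelled out, each step in that chain trades one degree of temperature for a factor of ten in duration, so after the 1500 steps from 1548 down to 48 degrees C the duration is 2 * 10^1500 minutes; a quick arithmetic check:

minutes, temp = 2, 1548
while temp > 48:
    minutes *= 10   # ten times as long...
    temp -= 1       # ...at a stake one degree cooler
print(temp, minutes == 2 * 10 ** 1500)   # 48 True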

utilitymonster wrote:It seems much more accurate to say that you're a total hedonistic utilitarian who believes that people have a cognitive bias that makes them systematically underweight painful episodes of their lives.

Partly bias, yes, but it may also partly be just my preferences. When we talk about ethics, there is no true answer, so it's trickier to say what's a bias and what's just a difference of values.

utilitymonster wrote:You have the opposite view, holding that people are often wrong about what is in their own interest, presumably owing to some kind of cognitive illusion.

As noted above, I think it's a bit of both. Some is a cognitive illusion, but the rest is a true difference of values.

utilitymonster wrote:What is your story about the relevant cognitive illusion?

wishful thinking
rosy retrospection
optimism bias
depressive realism

Re: The correct type of utilitarianism

Postby Brian Tomasik on 2012-06-11T04:35:00

Alan Dawrst wrote:So yes, there is some amount of pleasure that would outweigh burning at the stake, but it might be a really, really high amount. :)

BTW, this means that I'm going to end up acting like a negative utilitarian in almost all practical situations.

Here's something I wrote in an extremely long discussion on Facebook:
I love [the] argument that "it is clear that it is better to undergo an experience of unbearable agony for one second than to undergo an experience that is 99.9% as intense for a full century," etc. As I said, on most days I'm not a negative utilitarian, and this is part of why. I think there has to be a simple, linear exchange rate between suffering and happiness. No funky stuff. [That said, exactly what numerical values you assign to different experiences are free parameters. I assign very big negative values to very bad experiences, while the negative values for more minor bad experiences aren't so far from zero.]

Where I differ with the optimists is on the empirical details. For example, [someone above] said: "But if you tell someone that every second of their suffering is being exchanged for more than a year of wonderful experiences, it may seem quite intuitive to find it good. And it is true, according to my prediction of the future." This implies that the future will in expectation have at least ~32 million times as much happiness as suffering.

<aside>I think this exchange rate may be almost right if the experiences are really, really good. If the experiences are on the level of ordinary life, I might want to increase it to ~100 years. I reserve all rights to waffle on these figures.</aside>

However, I think it's practically impossible that we could believe the future will have 7 orders of magnitude more happiness than suffering *in expectation*. Certainly there exist many possible futures where this is the case (e.g., successful utilitronium shockwave). But whenever you have massive computing power around, you have the potential for massive suffering, and the probability that things go very badly has to be more than 1 in 32 million.

What if war breaks out and the sides start torturing each other in hells? What if 1 in 32 million members of the population are sadists and torture their sims for entertainment (one, two, three)? What if religious fundamentalists seize control and send the majority of the population to fire and brimstone? ("For wide is the gate and broad is the road that leads to destruction, and many enter through it." Matthew 7:13) There are many other scenarios in which maximal torment would be inflicted upon astronomical numbers of minds, and I don't think we can ever be certain these will happen with frequency less than 1 in 32 million.

Re: The correct type of utilitarianism

Postby Hutch on 2012-06-11T05:37:00

DanielLC wrote:
Average vs. Aggregate:

Act utilitarianism fails axiom (1) quite badly.


You mean average utilitarianism?


Yup, my bad.

DanielLC wrote:
Deciding: a philosophy should put all possible universes into a totally ordered set (of unknown cardinality).


Does this mean that given any two non-identical universes, one is always strictly better? For example, that given two universes that do not contain sentient beings, one is better than the other?

I'm against this, but there's an infinitesimal probability of it actually coming up.



Totally ordered means every two universes are comparable, and that the relation is transitive and reflexive. (Strictly speaking that makes it a total preorder rather than a total order, since two different universes are still allowed to be tied in terms of total utility; antisymmetry would forbid ties between distinct universes.)
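Put another way, all axiom (2) is really asking for is that the philosophy hand you a comparison like the sketch below (with total utility standing in for whatever ranking function the philosophy uses):

def at_least_as_good(universe_a, universe_b, score=sum):
    # Any real-valued score gives you comparability, transitivity and
    # reflexivity for free; distinct universes with equal scores count as ties.
    return score(universe_a) >= score(universe_b)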

DanielLC wrote:
The only other difference I have is what I'd call weighted animal welfare. I believe that some animals are more sentient than others, and therefore more important.


I guess that what I think is that that's just another way of saying that some animals may have much greater effective emotional ranges than others; if an animal is less sentient, then perhaps it will naturally feel less subjective suffering, which is already built into their utility function (it's not enforced that all animals have the same emotional range). That being said, I don't think you should re-weight by sentience in addition to utility.


Re: The correct type of utilitarianism

Postby Hutch on 2012-06-11T05:45:00

Alan Dawrst wrote:
Alan Dawrst wrote:So yes, there is some amount of pleasure that would outweigh burning at the stake, but it might be a really, really high amount. :)

BTW, this means that I'm going to end up acting like a negative utilitarian in almost all practical situations.

Here's something I wrote in an extremely long discussion on Facebook:
I love [the] argument that "it is clear that it is better to undergo an experience of unbearable agony for one second than to undergo an experience that is 99.9% as intense for a full century," etc. As I said, on most days I'm not a negative utilitarian, and this is part of why. I think there has to be a simple, linear exchange rate between suffering and happiness. No funky stuff.

Where I differ with the optimists is on the empirical details. For example, [someone above] said: "But if you tell someone that every second of their suffering is being exchanged for more than a year of wonderful experiences, it may seem quite intuitive to find it good. And it is true, according to my prediction of the future." This implies that the future will in expectation have at least ~32 million times as much happiness as suffering.

<aside>I think this exchange rate may be almost right if the experiences are really, really good. If the experiences are on the level of ordinary life, I might want to increase it to ~100 years. I reserve all rights to waffle on these figures.</aside>

However, I think it's practically impossible that we could believe the future will have 7 orders of magnitude more happiness than suffering *in expectation*. Certainly there exist many possible futures where this is the case (e.g., successful utilitronium shockwave). But whenever you have massive computing power around, you have the potential for massive suffering, and the probability that things go very badly has to be more than 1 in 32 million.

What if war breaks out and the sides start torturing each other in hells? What if 1 in 32 million members of the population are sadists and torture their sims for entertainment (one, two, three)? What if religious fundamentalists seize control and send the majority of the population to fire and brimstone? ("For wide is the gate and broad is the road that leads to destruction, and many enter through it." Matthew 7:13) There are many other scenarios in which maximal torment would be inflicted upon astronomical numbers of minds, and I don't think we can ever be certain these will happen with frequency less than 1 in 32 million.


I agree with almost all of what you're saying, except perhaps about values (I'm not sure exactly what you mean by this). Say that you're in the following situation: someone with almost total knowledge of what is going to happen tells you that you have two options for actions, A and B. He says that A will certainly (i.e. 100% chance) have greater total utility than B, but it will also have higher standard deviation of emotions, and thus more negative ones. He recognizes that it's easier to get really bad emotions than really good ones, but he is certain that in this case A has higher total utility. You trust this statement of his. Which option do you choose? (If you want a more precise scenario: A has 10 people with -5 happiness and 10 people with +10 happiness; B has 20 people with 1 happiness.)
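(Checking my own arithmetic: A totals 10*(-5) + 10*10 = 50 while B totals 20*1 = 20, so a plain total-utility view prefers A despite the extra suffering.)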

I agree that in practice you won't be as certain as this guy is, and so often it's useful to worry much more about suffering than happiness (the difference between the 50th and 80th percentiles of Americans in terms of happiness is dwarfed by the pain many animals, both farmed and wild, feel). So I guess my question is: is this more than a heuristic for you? In the above scenario, would you choose option A or option B?


Re: The correct type of utilitarianism

Postby DanielLC on 2012-06-11T07:11:00

I guess that what I think is that that's just another way of saying that some animals may have much greater effective emotional ranges than others;


Not quite. A difference in sentience matters for anthropic reasoning. If animals are less sentient, you're proportionally less likely to be an animal. The same does not apply with emotional ranges.
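One way to make "proportionally less likely" concrete (the populations and weights here are mine, purely illustrative): weight each being by its sentience before asking who "you" are likely to be.

# Illustrative only: 10 humans at sentience weight 1.0, 1000 insects at 0.01.
populations = {"human": (10, 1.0), "insect": (1000, 0.01)}
total_weight = sum(n * w for n, w in populations.values())
for kind, (n, w) in populations.items():
    print(kind, n * w / total_weight)   # human 0.5, insect 0.5
# Despite being outnumbered 100 to 1, you're as likely to find yourself a human
# as an insect under this weighting; with equal weights you'd almost surely be an insect.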


Re: The correct type of utilitarianism

Postby Hutch on 2012-06-11T08:01:00

Alan Dawrst wrote:
Hutch wrote:For instance, do you mean people who did good or meant to do good?

Meant to do good.

(Note: I'm not arguing for the position you mention; I'm just playing devil's advocate.)

Hutch wrote:Meant to do good is obviously hopeless to define

Why? Juries make these assessments all the time. I would conjecture without evidence that the inter-jury agreement rate is pretty high on such matters.



Who exactly doesn't mean to do good? Did Hitler mean to do good? He probably did. How about Osama Bin Laden? Did he mean to do good? He certainly thought he was doing good; he gave up his life to do it. How about a random person who doesn't think in utilitarian terms, and instead thinks that people should work to maximize their own utility, and does so reasonably well (while defecting on all prisoners' dilemmas)? Did they mean to do good? It's true that people might generally agree on who "meant to do good", but I think they would end up deciding lots of people "meant to do good" in their life, they just may have had a misguided definition of good.


Alan Dawrst wrote:
Hutch wrote:In order to measure this you need to have some notion of what the world "would have been like" without them; a concept very ill defined, as you have to somehow choose which counterfactual universe you're comparing this one to.

But this is no different from when we evaluate any utilitarian choice.


Not quite: in a normal utilitarian choice you look at all possible actions and choose the one that maximizes utility. Here, you'd have to look at all possible counterfactual universes and choose the one "that is basically like ours except without the person". This isn't well defined, but I grant that it's often reasonably easy to approximate, except for long domino effects.

Alan Dawrst wrote:
Hutch wrote:what are you defining rule utilitarianism as?

I'm not the best academic expert on real utilitarian philosophers, so don't quote my answer in that context. However, what I mean by rule utilitarianism is that there are some cases where you should stick to pre-decided rules of action even if it turns out that in a particular case, doing so seems suboptimal. For example: "If you make a promise, always keep it." Once Paul Ekman takes you out of the desert, it might seem suboptimal to actually pay him the $50 instead of donating it to a better cause, but the rule about keeping your promises would advise you to do so anyway.

Another example is if you're playing prisoner's dilemma and commit to a rule to punish defections. Once your partner has defected, it might seem like the damage has been done, and you shouldn't cause more suffering. But globally, following your rule without wavering might be better for everyone.

In general, it can be really useful in game theory to be able to make binding commitments that the other side knows you'll stick to no matter what.



As others have said, I completely agree that in practice following rules can be a good idea. Even though stealing might sometimes be justified in the abstract, in practice it basically never is, because it breaks a rule that society cares a lot about and so comes with lots of consequences like jail. That being said, I think that you should, in the end, be an act utilitarian, even if you often think like a rule utilitarian; here what you're doing is basically saying that society puts up disincentives for breaking rules and those should be included in the act utilitarian calculation, but sometimes they're big enough that a rule utilitarian calculation approximates it pretty well in a much simpler fashion.

Alan Dawrst wrote:
Hutch wrote:If he's just really good at predicting things based on the state of molecules or something like that, then you should adjust your pre-box-allocation actions to be those that a one-box person would have, possibly even by forcing yourself to become one

Yes, I think it's more this scenario, where Omega predicts your actions based on molecules and such. In fact, the version I usually hear is that Omega simulates your brain beforehand to see what you do. Therefore, you have to one-box to get the $1 million.

This document goes through decision theories quite thoroughly, although I haven't read the majority of it yet myself.

Hutch wrote:He might predict this and thus not give you the ride, but then your mistake was letting Paul know that you're an act utilitarian, not in being one.

What if you've already let on to him that you're an act-utilitarian by accident? Or what if he asks you directly if you're really going to keep your promise, so that if you answer you'll have to lie, and he'll see it?

Point is, being an act-utilitarian can have bad side-effects in some situations, and in possible worlds where these side-effects happen enough, you should stop being one altogether.



I'd say more precisely that *seeming like* an act utilitarian or sometimes even *trying to think like* an act utilitarian is sometimes not the act utilitarian thing to do; sometimes act utilitarianism says that you should basically think like a rule utilitarian, and look like one to others.

Alan Dawrst wrote:
Hutch wrote:What, exactly, do you mean by "the person only ever actually feels +10"?

I only care about emotion that is consciously perceived. (Yes, some people disagree with this.) If the brain does the sum -10 + 20 before conscious awareness of what the two inputs were, then the person only ever actually felt +10 consciously, so only the +10 is morally relevant IMO.

Hutch wrote:I guess what I'm saying is that your model may be useful for everyday decisions, but it's not inherently the correct model, just a good approximation that's easier to use than the correct one.

Almost, but not quite what I mean. There is no "inherently correct model." Pain, anxiety, depression, fear, loneliness, anger, happiness, love, orgasm, flow, relief, excitement, etc. are all different emotions. In order to compare them, we have to make up exchange rates that we feel are sensible. There's no universally correct exchange rate between fear and love -- or for that matter, even between two types of fear.


Yeah, in the end the universe does not have a preference for moral codes; in the end "should" is just a word humans made up. But as long as we're making it up we might as well try to define exactly what it means (i.e. choose some "correct" model which is as precise as possible to represent it); otherwise we risk it becoming a tool for us to say that what we want to do is what we should do. (Which, I guess, is only wrong if you accept some definitions of "should"...)


Re: The correct type of utilitarianism

Postby Arepo on 2012-06-11T11:27:00

I'd like to engage with this properly when I have more time. For now I'll just say a couple of things:

Hutch wrote:Anyway, I posit that the correct form of utilitarianism is act, aggregate, classical, hedonistic, animal welfare (i.e. all beings that feel pain and pleasure) utilitarianism. (Someone remind me if I've forgotten to specify one of the many divides...)


Do you mean 'classical' as just a negation of 'negative'? If so I suggest picking a different term - 'classical' to me basically means 'hedonistic', but with enough historically induced vagueness that I try to avoid it altogether and just say 'hedonistic' when that's what I mean.

'Act' is a bit vague. I agree with Toby Ord's view on global consequentialism, but I'm not sure he's saying anything pre-rule utilitarians didn't already believe (IIRC he admits as much), which suggests that RU was basically constructed to fight a strawman. So while I sort of agree that AUs make more sense than RUs, I think that applying the prefix 'act' already cedes some ground to RUs that they didn't deserve to take. And while applying 'global' as an alternative prefix feels like the same concession, at least it puts the debate back in our terms.

'Scalar' should be in there somewhere too (again this feels like it should be unnecessary, but since the prefix was applied by someone who in my view is on the right side of the debate, I don't object to using it). (Norcross has quite a few of his papers available on his site, though sadly not the main scalar one, and he's a very entertaining writer for a philosopher)
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.

Re: The correct type of utilitarianism

Postby Brian Tomasik on 2012-06-11T11:52:00

Hutch wrote:Which option do you choose? (If you want a more precise scenario: A has 10 people with -5 happiness and 10 people with +10 happiness; B has 20 people with 1 happiness.)

I contend that assigning these numbers in the first place is what is subjective and depends on your values. Once you assign the numbers, then obviously you maximize the expected value. But an experience that you think is a -1 I might think is a -100. There is no "real answer" to what the number is.

Hutch wrote:I think they would end up deciding lots of people "meant to do good" in their life, they just may have had a misguided definition of good.

There are many cases where I know I'm not doing good according to my values. I think this is true for almost everyone on the planet at some point or another.

I'm not sure what people who endorse an intention-based punishment scheme would say about Hitler.

Hutch wrote:That being said, I think that you should, in the end, be an act utilitarian, even if you often think like a rule utilitarian; here what you're doing is basically saying that society puts up disincentives for breaking rules and those should be included in the act utilitarian calculation, but sometimes they're big enough that a rule utilitarian calculation approximates it pretty well in a much simpler fashion.

The argument for rule utilitarianism is a little stronger, though. In some cases you need to actually be a rule utilitarian, not just act like one most of the time. If the Paul Ekman scenario is a one-shot deal that will never happen again, and no one will ever find out how it turned out, you still need to be someone who will make good on your promise once he rescues you.

Now, whether act utilitarians or rule utilitarians win more often depends on the distributional frequency of different types of scenarios. In some possible worlds, act utilitarians win more. In other possible worlds, rule utilitarians win more. Your choice of decision theory is relative to the underlying rates of situations in your particular world. (This was a huge insight to me when I first realized it.)

Hutch wrote:sometimes act utilitarianism says that you should basically think like a rule utilitarian, and look like one to others.

Perhaps we agree, then. In the Newcomb case where Omega simulates your brain, you can't win unless you actually are a rule utilitarian; it doesn't suffice just to look like one.

Hutch wrote:But as long as we're making it up we might as well try to define exactly what it means

Agreed. I have very specific feelings about how I want the universe to be.

During debates on moral realism, it's sometimes assumed that if morality isn't objective, then nothing really matters. In practice, my moral convictions are just as strong as anyone else's. (It's reminiscent of theists imagining that if they stopped believing in God then they would no longer care if people torture babies.)

Re: The correct type of utilitarianism

Postby Hutch on 2012-06-11T17:03:00

Arepo wrote:I'd like to engage with this properly when I have more time. For now I'll just say a couple of things:

Hutch wrote:Anyway, I posit that the correct form of utilitarianism is act, aggregate, classical, hedonistic, animal welfare (i.e. all beings that feel pain and pleasure) utilitarianism. (Someone remind me if I've forgotten to specify one of the many divides...)


Do you mean 'classical' as just a negation of 'negative'? If so I suggest picking a different term - 'classical' to me basically means 'hedonistic', but with enough historically induced vagueness that I try to avoid it altogether and just say 'hedonistic' when that's what I mean.



Yup, sorry about the vague language. In this context I just meant not NU.

Arepo wrote:
'Act' is a bit vague. I agree with Toby Ord's view on global consequentialism, but I'm not sure he's saying anything pre-rule utilitarians didn't already believe (IIRC he admits as much), which suggests that RU was basically constructed to fight a strawman. So while I sort of agree that AUs make more sense than RUs, I think that applying the prefix 'act' already cedes some ground to RUs that they didn't deserve to take. And while applying 'global' as an alternative prefix feels like the same concession, at least it puts the debate back in our terms.



Yeah, I see what you're saying; in some sense using the word "act" makes it seem like there are lots of valid things to look at. (And it looks like global utilitarianism is just another name for act utilitarianism.)

Arepo wrote:

'Scalar' should be in there somewhere too (again this feels like it should be unnecessary, but since the prefix was applied by someone who in my view is on the right side of the debate, I don't object to using it). (Norcross has quite a few of his papers available on his site, though sadly not the main scalar one, and he's a very entertaining writer for a philosopher)


Yeah, scalar should be there too. (Are there people who don't like scalar?)


Re: The correct type of utilitarianism

Postby Hutch on 2012-06-11T17:23:00

Alan Dawrst wrote:
Hutch wrote:Which option do you choose? (If you want a more precise scenario: A has 10 people with -5 happiness and 10 people with +10 happiness; B has 20 people with 1 happiness.)

I contend that assigning these numbers in the first place is what is subjective and depends on your values. Once you assign the numbers, then obviously you maximize the expected value. But an experience that you think is a -1 I might think is a -100. There is no "real answer" to what the number is.



Yeah, I think that's right; in the end it's sometimes hard to quantify utility.
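To make that concrete, here is a minimal sketch of the point (the helper function and the -100 re-rating are purely illustrative): once the numbers are assigned, the aggregation is mechanical, and the whole disagreement lives in the assignment.

# Aggregate (total) utility over a list of (count, happiness) pairs.
def total_utility(population):
    return sum(count * happiness for count, happiness in population)

# World A: 10 people at -5 and 10 at +10; World B: 20 people at +1.
world_a = [(10, -5), (10, 10)]
world_b = [(20, 1)]
print(total_utility(world_a), total_utility(world_b))  # 50 vs 20 -> pick A

# Someone who rates (or experiences) that suffering as -100 rather than -5
# gets the opposite ranking out of the very same aggregation rule.
world_a_rerated = [(10, -100), (10, 10)]
print(total_utility(world_a_rerated), total_utility(world_b))  # -900 vs 20 -> pick B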

Alan Dawrst wrote:
Hutch wrote:I think they would end up deciding lots of people "meant to do good" in their life, they just may have had a misguided definition of good.

There are many cases where I know I'm not doing good according to my values. I think this is true for almost everyone on the planet at some point or another.

I'm not sure what people who endorse an intention-based punishment scheme would say about Hitler.



Yeah, there are for me as well; I suspect, though, that there might not be a positive correlation between people who sometimes know they're acting immorally and people who actually do act immorally: in some sense, knowing that you're acting immorally shows that you care about morality and think about it. There might be an issue where lots of immoral assholes don't think about morality, and when they do, think that what's moral is to do whatever makes themselves happiest...

Alan Dawrst wrote:
Hutch wrote:That being said, I think that you should, in the end, be an act utilitarian, even if you often think like a rule utilitarian; here what you're doing is basically saying that society puts up disincentives for breaking rules, and those should be included in the act utilitarian calculation, but sometimes they're big enough that a rule utilitarian calculation approximates it pretty well in a much simpler fashion.

The argument for rule utilitarianism is a little stronger, though. In some cases you need to actually be a rule utilitarian, not just act like one most of the time. If the Paul Eckman scenario is a one-shot deal that will never happen again, and no one will ever find out how it turned out, you still need to be someone who will make good on your promise once he rescues you.



I fundamentally think the Paul Eckman scenario is poorly defined in a way that renders it meaningless; it's similar to how I feel about Newcomb, though, so I'll just reply to that one...

Alan Dawrst wrote:
Now, whether act utilitarians or rule utilitarians win more often depends on the distributional frequency of different types of scenarios. In some possible worlds, act utilitarians win more. In other possible worlds, rule utilitarians win more. Your choice of decision theory is relative to the underlying rates of situations in your particular world. (This was a huge insight to me when I first realized it.)



It's possible that we're just arguing over terminology here, but anyway here's my take on it. Fundamentally, Newcomb-like situations all stipulate that someone "knows what you're going to do" and "takes actions based on what you're going to do". The question, though, is how they "know" this.

If they "know it" in that they're pretty sure because they're good at reading facial expressions or something like that, then you should just do whatever it takes to make your facial expression that of a rule utilitarian, even if that means convincing yourself that you are one. If "knowing it" means something like "they have lots of previous data", then it doesn't matter what you do with your facial expressions--he's already decided what he's going to do, so you might as well take both boxes. (Unless some of that previous data is about you, in which case, according to act utilitarianism, you should have lived your life so as to convince people you're a rule utilitarian, even if that means convincing yourself of it.)

But if it's neither of those things--if he absolutely knows what you're going to do--then one of two things is true. The first is that you're treating the universe as deterministic, in which case *the question of what to do or what to believe is irrelevant because your actions are already determined anyway*. The second is that you're somehow simultaneously assuming that he knows what you're going to do but you have a choice about what to do--a contradiction, meaning that the scenario is a non-physical scenario.

So there may be scenarios where it's in an act utilitarian's interest to act a lot like a rule utilitarian, or even convince themselves they are, but there are no actually physically meaningful and non-contradictory scenarios where you should fundamentally be a rule utilitarian (unless, obviously, act and rule give the same optimal action).

Alan Dawrst wrote:
Hutch wrote:sometimes act utilitarianism says that you should basically think like a rule utilitarian, and look like one to others.

Perhaps we agree, then. In the Newcomb case where Omega simulates your brain, you can't win unless you actually are a rule utilitarian; it doesn't suffice just to look like one.

Hutch wrote:But as long as we're making it up we might as well try to define exactly what it means

Agreed. I have very specific feelings about how I want the universe to be.

During debates on moral realism, it's sometimes assumed that if morality isn't objective, then nothing really matters. In practice, my moral convictions are just as strong as anyone else's. (It's reminiscent of theists imagining that if they stopped believing in God then they would no longer care if people torture babies.)


I completely agree on this. It can be a little bit scary to realize that you can't just get your morality from a really old book that lots of people trust in such a way that you won't really be questioned or challenged. But in the end that doesn't make us any less moral than them; it just means we've taken responsibility for deciding what our actions should be. And it's really important not to cede morality to people who get theirs from a two-thousand-year-old book full of made-up stories.

As you say, it's also important not to let people get away with the "morality is just subjective anyway" or "morality doesn't mean anything anyway" line. The same people who use these lines about things like animal welfare would be seriously offended if someone used them about human slavery, despite their claims of individually subjective or meaningless morality.


Re: The correct type of utilitarianism

Postby Arepo on 2012-06-12T13:57:00

More – not very well ordered, just responding to points as I read them due to time pressures:

(I basically seem to agree with Hutch on everything. Which lake did you arise from, buddy? :P)

Hutch wrote:No intuition-fudge factors


Agree wholeheartedly on all your criteria in principle, but this one's difficult to apply in practice. I envisage happiness in terms of hedons, where one hedon can be defined as one of two things (ideally we'd have different words for each):

A) The smallest/simplest material structure with which the material world can generate any positive experience.
B) The smallest amount of positive experience which can possibly exist in isolation given physical constraints.

In practice the two are quite similar, and might well be the same thing, though it gives me pause: B is conceptually more accurate, but whereas we might be able to scientifically pin down A, it's hard to imagine ever proving that some minimal amount of experienced happiness is greater or less than some other. Happiness itself can't communicate, and communication is about the only means we have of establishing what anything other than our current selves feels. So if we suspect that A and B can differ, it gives us a serious theoretical challenge - can we conceive of a way of establishing what B is even given perfect technology, such that we can theoretically hope to quantify happiness?

If the answer is 'no', I don't think it's a huge *practical* challenge - we just get on with doing the best we can based on the information we have - but it's quite unsatisfying, and might make util hard to sell as a scientific principle.

A bigger problem comes when you consider anti-hedons, which are the inverse of the above (call them -A and -B). I cannot think of any way in which we could show that -A and A actually sum to 0 experienced happiness, nor does it seem obvious that they should do so. But whereas considering hedons only we can treat B as an ideal which, even if we can't isolate, we can guesstimate in many ways, it's far from clear that B and -B (the ideal case) should sum to 0 happiness either.

It's a potentially disastrous aspect of what basically seems like at the very least the most scientific ethical view possible that it seems almost arbitrary even at a theoretical level. It's hard to think of any way we could say someone who thinks -B and B sum to -B, to B, or to anything in between is actually wrong. But without any way to reach such agreement, it seems like the whole practice is massively flawed – pure negative utilitarians and pure positive utilitarians would end up thwarting each other at every turn, with everyone in between, for no other reason than that they picked some point on the scale arbitrarily, and those in between could still have huge practical disagreements.

It bears saying that preference utilitarians don’t escape this conundrum – they still rely on some sort of preferilon and anti-preferilon concepts to make sense of their ideas.

I haven’t managed to think of a solution to this problem, except to suggest we just assume that B + -B does = 0. Or we could assert that B + -B = either B, -B or 0 (anything else being fully arbitrary), and trust that the first two are so obviously contrary to our experience that we cannot but treat them as false.

This might also help to clarify some terminology: anyone who thinks B + -B < 0 is a negative utilitarian in some sense, but obviously some powerfully different sense to someone who thinks B + -B = -B. Maybe we should create a different term for the latter to avoid confusion – ‘terminal utilitarian’ (neater than ‘pinprick negative utilitarian’, IMO :P), or something?
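One literal way to pin down that terminology, assuming experienced value combines by simple addition (the additivity is itself an assumption, and v(B), v(-B) are just shorthand for "the value of one hedon / one antihedon"):

\begin{align*}
\text{classical:} \quad & v(B) + v(-B) = 0 \\
\text{negative, in some sense:} \quad & v(B) + v(-B) < 0 \\
\text{`terminal' / pinprick NU:} \quad & v(B) + v(-B) = v(-B) \;\Longleftrightarrow\; v(B) = 0
\end{align*}

Read literally, the terminal case says a hedon counts for nothing at all; a lexical-priority version (minimise suffering first, use happiness only as a tie-breaker) captures the same spirit without that consequence.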

prior existence puts up yet another type of arbitrary barrier between those whose happiness we care about and those we don't.


As far as I can see, ‘prior existence’ isn’t really an opposition to total. It’s not really even a type of utilitarianism, in the sense that it doesn’t behave like an algorithm. It’s utilitarianism with an extra rule added, fundamentally as arbitrary as saying ‘maximise all happiness except that of people eating ice-cream’.

He might predict this and thus not give you the ride, but then your mistake was letting Paul know that you're an act utilitarian, not in being one.


Alan wrote: In some cases you need to actually be a rule utilitarian, not just act like one most of the time. If the Paul Eckman scenario is a one-shot deal that will never happen again, and no one will ever find out how it turned out, you still need to be someone who will make good on your promise once he rescues you.


I described this to a friend, and he immediately pointed out that what we’re thinking of as ‘decision theory’ is basically irrelevant in the original example – what matters is your ability to persuade the driver that you’ll pay him, which in the real world you might fail to do even if you intend to pay, or succeed at even if you don’t.

What if you've already let on to him that you're an act-utilitarian by accident? Or what if he asks you directly if you're really going to keep your promise, so that if you answer you'll have to lie, and he'll see it?

Point is, being an act-utilitarian can have bad side-effects in some situations, and in possible worlds where these side-effects happen enough, you should stop being one altogether.


I don’t understand what course of action this reasoning is supposed to persuade anyone to take. Is it that one just shouldn’t ‘be an act utilitarian’ at all, just in case they hit situations like this? Is it that they can be an act utilitarian, but if they do hit situations like this they should instantly switch to being a promise-keeper? (Isn’t that still making the act-utilitarian decision?) Is it the same, but they should switch iff they know the driver can read them near-perfectly?

All of these seem deeply implausible situations to me.

Hutch wrote:Perhaps this was because you've done this before, but then not giving him money the previous time was possibly not the correct decision according to act utilitarianism


Or it was just bad luck, as is perfectly consistent with (and indeed, basically guaranteed in) a world where you always act to maximise expectation but have imperfect knowledge.

there's some complex emotional process going on in his head


This relates to the recent utilitarianism and masochism thread. Do we basically deny that any entity can concurrently experience positive and negative utility? Or (far more plausibly, in my view) do we see most ‘entities’ as a localised collection of utilons and antiutilons (on either above definition), in which case, if extreme negative utilitarians are correct, even the happiest person is better off dead?

By the way, my off-the-cuff impression of decision theories is that they're a load of crap dreamed up by people who kept forgetting to include other people's reactions to their actions into their evaluation of decisions, but I'll have to spend a bit more time to have a more thorough reply. Half of the situations invented to require them seem to be non-paradoxes, and the other half seem to be ill-defined. (e.g. it's not well defined how the alien in the one box/two box paradox "knows" what you're going to do. If he can read your mind then you should think really hard about how you're only going to pick one box, or possibly try to convince yourself you will to make it more convincing, but when push comes to shove you'll obviously take two boxes if you're intelligent (and never have to play this game again; otherwise it could be more like an iterated prisoners' dilemma). If he's just really good at predicting things based on the state of molecules or something like that, then you should adjust your pre-box-allocation actions to be those that a one-box person would have, possibly even by forcing yourself to become one--but in the end there isn't much of a paradox here, just a person who's punishing you depending on how you act and appear, another thing you should take into your calculation.)


This very closely echoes my views. I don’t want to dismiss the whole idea of decision theory per se (fundamentally it must be just identifying algorithms best suited to meeting our goals), but pretty much everything I’ve actually read about it merits these criticisms.

Alan wrote: I'm not the best academic expert on real utilitarian philosophers


To be honest I suspect that we’ve made more headway on and around this forum than real academic philosophers have in the same amount of time (and possibly a lot longer); every time I read academic papers re utilitarian philosophy they’re so bound up in dealing with the sort of pseudophilosophy we just ignore that after 50 pages they end up making a point that all of us would have agreed/disagreed on after about a paragraph’s discussion. I like to think they occasionally say things like ‘I’m not the best Dawrstian expert on negative utilitarian philosophy’ ;)

For example: "If you make a promise, always keep it."


The trouble is, no rule utilitarian believes this. They always allow exceptions to their rules when you probe them enough. But as far as I can tell they never offer an algorithm for when you should break a rule, just a few example cases. Presumably such a rule would be based around optimising total utility, but then unsurprisingly enough, it lapses back into global/act utilitarianism.

In fact, the version I usually hear is that Omega simulates your brain beforehand to see what you do.


It seems like the inherent randomness of quantum physics would make this impossible. If you simply assert that it is possible, then the problem becomes uninteresting if not nonsensical. In a universe where 1 is greater than 2, I’d obviously prefer to have 1 utilon to 2, but what do such pseudo-impossibilities have to do with doing the best we can in the world we know?

I only care about emotion that is consciously perceived.


By whom? The homunculus sitting inside your brain? Why don’t utilons and antiutilons get to perceive themselves?

Your choice of decision theory is relative to the underlying rates of situations in your particular world. (This was a huge insight to me when I first realized it.)


But ‘your choice of decision theory’ is in itself a decision. How do you determine how to make it?

DanielLC wrote:If animals are less sentient, you're proportionally less likely to be an animal. The same does not apply with emotional ranges.


This still sounds the same once you disregard essential identity as a meaningful consideration; you’re basically saying an animal comprises a smaller collection of hedons and antihedons than a human does. In which case a human can presumably have more total hedons or more total antihedons, and given 1 human and 1 animal, any given hedon/antihedon among them is, a priori, more likely to be in the human.

Hutch wrote:(Are there people who don't like scalar?)


I don’t know of any utilitarians who don’t claim to have always assumed it. Non-utilitarian philosophers often seem to characterise utilitarianism in a non-scalar way (eg a standard Ethics 101 textbook is likely to have a line like ‘Utilitarians believe that an act is right iff it maximises total happiness’).

There might be an issue where lots of immoral assholes don't think about morality, and when they do, think that what's moral is to do whatever makes themselves happiest...


I like to call them Randroids ;)
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: The correct type of utilitarianism

Postby Hutch on 2012-06-12T14:40:00

Arepo wrote:More – not very well ordered, just responding to points as I read them due to time pressures:

(I basically seem to agree with Hutch on everything. Which lake did you arise from, buddy? :P)



Wobegon.

(My sleepless brain is too absent to parse the question, or whether or not the question is rhetorical, so instead I opted for what I convinced myself was a clever reply.)

Arepo wrote:
Hutch wrote:No intuition-fudge factors


Agree wholeheartedly on all your criteria in principle, but this one's difficult to apply in practice. I envisage happiness in terms of hedons, where one hedon can be defined as one of two things (ideally we'd have different words for each):

A) The smallest/simplest material structure with which the material world can generate any positive experience.
B) The smallest amount of positive experience which can possibly exist in isolation given physical constraints.

In practice the two are quite similar, and might well be the same thing, though it gives me pause: B is conceptually more accurate, but whereas we might be able to scientifically pin down A, it's hard to imagine ever proving that some minimal amount of experienced happiness is greater or less than some other. Happiness itself can't communicate, and communication is about the only means we have of establishing what anything other than our current selves feels. So if we suspect that A and B can differ, it gives us a serious theoretical challenge - can we conceive of a way of establishing what B is even given perfect technology, such that we can theoretically hope to quantify happiness?

If the answer is 'no', I don't think it's a huge *practical* challenge - we just get on with doing the best we can based on the information we have - but it's quite unsatisfying, and might make util hard to sell as a scientific principle.

A bigger problem comes when you consider anti-hedons, which are the inverse of the above (call them -A and -B). I cannot think of any way in which we could show that -A and A actually sum to 0 experienced happiness, nor does it seem obvious that they should do so. But whereas considering hedons only we can treat B as an ideal which, even if we can't isolate, we can guesstimate in many ways, it's far from clear that B and -B (the ideal case) should sum to 0 happiness either.

It's a potentially disastrous aspect of what basically seems like at the very least the most scientific ethical view possible that it seems almost arbitrary even at a theoretical level. It's hard to think of any way we could say someone who thinks -B and B sum to -B, to B, or to anything in between is actually wrong. But without any way to reach such agreement, it seems like the whole practice is massively flawed – pure negative utilitarians and pure positive utilitarians would end up thwarting each other at every turn, with everyone in between, for no other reason than that they picked some point on the scale arbitrarily, and those in between could still have huge practical disagreements.

It bears saying that preference utilitarians don’t escape this conundrum – they still rely on some sort of preferilon and anti-preferilon concepts to make sense of their ideas.

I haven’t managed to think of a solution to this problem, except to suggest we just assume that B + -B does = 0. Or we could assert that B + -B = either B, -B or 0 (anything else being fully arbitrary), and trust that the first two are so obviously contrary to our experience that we cannot but treat them as false.

This might also help to clarify some terminology: anyone who thinks B + -B < 0 is a negative utilitarian in some sense, but obviously some powerfully different sense to someone who thinks B + -B = -B. Maybe we should create a different term for the latter to avoid confusion – ‘terminal utilitarian’ (neater than ‘pinprick negative utilitarian’, IMO :P), or something?



Yeah, in the end the universe does not have a preference for a moral theory; in the end, scientifically, the word "should" is meaningless. But I agree that it's important to be as scientifically legitimate as possible about it, including trying not to introduce more clauses and risk over-fitting a model to give the results you want. I generally don't think of happiness as being quantized, and instead just use a util (or hedon or whatever) as an arbitrary unit. And yeah, the sum of B and -B is a good way of characterizing types of NU.

Arepo wrote:

prior existence puts up yet another type of arbitrary barrier between those whose happiness we care about and those we don't.


As far as I can see, ‘prior existence’ isn’t really an opposition to total. It’s not really even a type of utilitarianism, in the sense that it doesn’t behave like an algorithm. It’s utilitarianism with an extra rule added, fundamentally as arbitrary as saying ‘maximise all happiness except that of people eating ice-cream’.



Yeah, I agree that it's bullshit, but unfortunately a lot of folk-utilitarians (i.e. people who call themselves utilitarians but aren't really rigorous about it and don't actually apply it) like prior existence...

Arepo wrote:
He might predict this and thus not give you the ride, but then your mistake was letting Paul know that you're an act utilitarian, not in being one.


Alan wrote: In some cases you need to actually be a rule utilitarian, not just act like one most of the time. If the Paul Eckman scenario is a one-shot deal that will never happen again, and no one will ever find out how it turned out, you still need to be someone who will make good on your promise once he rescues you.


I described this to a friend, and he immediately pointed out that what we’re thinking of as ‘decision theory’ is basically irrelevant in the original example – what matters is your ability to persuade the driver that you’ll pay him, which in the real world you might fail to do even if you intend to pay, or succeed at even if you don’t.

What if you've already let on to him that you're an act-utilitarian by accident? Or what if he asks you directly if you're really going to keep your promise, so that if you answer you'll have to lie, and he'll see it?

Point is, being an act-utilitarian can have bad side-effects in some situations, and in possible worlds where these side-effects happen enough, you should stop being one altogether.


I don’t understand what course of action this reasoning is supposed to persuade anyone to take. Is it that one just shouldn’t ‘be an act utilitarian’ at all, just in case they hit situations like this? Is it that they can be an act utilitarian, but if they do hit situations like this they should instantly switch to being a promise-keeper? (Isn’t that still making the act-utilitarian decision?) Is it the same, but they should switch iff they know the driver can read them near-perfectly?

All of these seem deeply implausible situations to me.



Yeah, I agree; it seems to me like it may be confusing useful heuristics for ultimate guiding principles...

Arepo wrote:
Hutch wrote:Perhaps this was because you've done this before, but then not giving him money the previous time was possibly not the correct decision according to act utilitarianism


Or it was just bad luck, as is perfectly consistent with (and indeed, basically guaranteed in) a world where you always act to maximise expectation but have imperfect knowledge.



True, good point.

Arepo wrote:
there's some complex emotional process going on in his head


This relates to the recent utilitarianism and masochism thread. Do we basically deny that any entity can concurrently experience positive and negative utility? Or (far more plausibly, in my view) do we see most ‘entities’ as a localised collection of utilons and antiutilons (on either above definition), in which case, if extreme negative utilitarians are correct, even the happiest person is better off dead?



Yup, another problem with it. Basically, if your utility function isn't additive (i.e. it doesn't satisfy U(e_1)+U(e_2)=U(e_1+e_2), where e_x is the subjective experience of some event x and e_1+e_2 is the combined experience), you're going to have issues where you can get any result you want by dividing a total experience into a particular set of sub-experiences.
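A minimal sketch of that slicing problem, with a toy square-root function standing in for some non-additive U (the function and the numbers are purely illustrative):

import math

# Toy non-additive utility function: diminishing returns via a square root.
# Purely illustrative -- any U violating U(e_1)+U(e_2)=U(e_1+e_2) would do.
def U(e):
    return math.copysign(math.sqrt(abs(e)), e)

# One total experience of raw intensity 4...
print(U(4))        # 2.0
# ...versus the same experience carved into four sub-experiences of intensity 1:
print(4 * U(1))    # 4.0
# Since U(1) + U(1) != U(2), the "total utility" depends on how you slice things up.

With a convex U the trick runs the other way: merging sub-experiences inflates the total instead of deflating it.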

Arepo wrote:
By the way, my off-the-cuff impression of decision theories is that they're a load of crap dreamed up by people who kept forgetting to include other people's reactions to their actions into their evaluation of decisions, but I'll have to spend a bit more time to have a more thorough reply. Half of the situations invented to require them seem to be non-paradoxes, and the other half seem to be ill-defined. (e.g. it's not well defined how the alien in the one box/two box paradox "knows" what you're going to do. If he can read your mind then you should think really hard about how you're only going to pick one box, or possibly try to convince yourself you will to make it more convincing, but when push comes to shove you'll obviously take two boxes if you're intelligent (and never have to play this game again; otherwise it could be more like an iterated prisoners' dilemma). If he's just really good at predicting things based on the state of molecules or something like that, then you should adjust your pre-box-allocation actions to be those that a one-box person would have, possibly even by forcing yourself to become one--but in the end there isn't much of a paradox here, just a person who's punishing you depending on how you act and appear, another thing you should take into your calculation.)


This very closely echoes my views. I don’t want to dismiss the whole idea of decision theory per se (fundamentally it must be just identifying algorithms best suited to meeting our goals), but pretty much everything I’ve actually read about it merits these criticisms.



Yeah. I think I put this in another post here (and forgot to put it there), but in addition to forgetting to include reactions in their calculations, a lot of the appearance of meaningfulness of decision theories comes from contradictory assumptions about free will: many of the "paradoxes" are of the form, 'assume some being already knows what you're going to do and acts accordingly; what do you do?', which of course is contradictory because you're first assuming that your actions have already been decided (they must have been in order for them to be known and predicted), and then assuming that you have control over which decision you make.

Arepo wrote:
Alan wrote: I'm not the best academic expert on real utilitarian philosophers


To be honest I suspect that we’ve made more headway on and around this forum than real academic philosophers have in the same amount of time (and possibly a lot longer); every time I read academic papers re utilitarian philosophy they’re so bound up in dealing with the sort of pseudophilosophy we just ignore that after 50 pages they end up making a point that all of us would have agreed/disagreed on after about a paragraph’s discussion. I like to think they occasionally say things like ‘I’m not the best Dawrstian expert on negative utilitarian philosophy’ ;)



I totally agree. I've been really really unimpressed with academic philosophers; in the end I think that if there's a utilitarian intellectual revolution it's much more likely to come from a place like this forum than a university. It's kind of a weird thought, that some guys on an internet forum are the ones who understand philosophy, instead of the people that really smart people and institutions have decided are the leading experts on the subject. (But then again, I guess it's kind of weird that people aren't fazed by subjecting a chicken to five weeks of agony so that their dinner tastes a bit better (let alone by the massive amount of animal suffering in the wild).)


Arepo wrote:
For example: "If you make a promise, always keep it."


The trouble is, no rule utilitarian believes this. They always allow exceptions to their rules when you probe them enough. But as far as I can tell they never offer an algorithm for when you should break a rule, just a few example cases. Presumably such a rule would be based around optimising total utility, but then unsurprisingly enough, it lapses back into global/act utilitarianism.



Yeah. I remember encountering this philosophy paper written by a Stanford philosopher that tried to tackle the "paradox" that if someone comes to your door with a gun, asks you a question, and will shoot you if you tell the truth, it's correct to lie. Her brilliant solution was that the other person wasn't a moral actor because he was doing something immoral and so you didn't have an obligation to tell them the truth. (I guess this is partially just more ranting about academic philosophers.) But in the end no one really believes the rules they construct; the rules just make them feel good and are designed to be vague enough that they can bend them without admitting to it.

Arepo wrote:
In fact, the version I usually hear is that Omega simulates your brain beforehand to see what you do.


It seems like the inherent randomness of quantum physics would make this impossible. If you simply assert that it is possible, then the problem becomes uninteresting if not nonsensical. In a universe where 1 is greater than 2, I’d obviously prefer to have 1 utilon to 2, but what do such pseudo-impossibilities have to do with doing the best we can in the world we know?



Yeah; also, in the case that he does simulate your mind and actions, your mind and actions must already have been determined, meaning that you don't actually have a choice about what to do anyway.

Arepo wrote:
I only care about emotion that is consciously perceived.


By whom? The homunculus sitting inside your brain? Why don’t utilons and antiutilons get to perceive themselves?

Your choice of decision theory is relative to the underlying rates of situations in your particular world. (This was a huge insight to me when I first realized it.)


But ‘your choice of decision theory’ is in itself a decision. How do you determine how to make it?

DanielLC wrote:If animals are less sentient, you're proportionally less likely to be an animal. The same does not apply with emotional ranges.


This still sounds the same once you disregard essential identity as a meaningful consideration; you’re basically saying an animal comprises a smaller collection of hedons and antihedons than a human does. In which case a human can presumably have more total hedons or more total antihedons, and given 1 human and 1 animal, any given hedon/antihedon among them is, a priori, more likely to be in the human.

(I agree.)

Arepo wrote:
Hutch wrote:(Are there people who don't like scalar?)


I don’t know of any utilitarians who don’t claim to have always assumed it. Non-utilitarian philosophers often seem to characterise utilitarianism in a non-scalar way (eg a standard Ethics 101 textbook is likely to have a line like ‘Utilitarians believe that an act is right iff it maximises total happiness’).



True; I guess sometimes it's worth it to make things explicit enough that non-utilitarians understand them.

Arepo wrote:
There might be an issue where lots of immoral assholes don't think about morality, and when they do, think that what's moral is to do whatever makes themselves happiest...


I like to call them Randroids ;)


Hehe :) . I go back and forth on what annoys me more: Randroids, or people who claim to be moral in a more selfless way but don't actually take moral actions.


Re: The correct type of utilitarianism

Postby Hutch on 2012-06-12T14:43:00

Arepo wrote:
Agree wholeheartedly on all your criteria in principle, but this one's difficult to apply in practice. I envisage happiness in terms of hedons, where one hedon can be defined as one of two things (ideally we'd have different words for each):

A) The smallest/simplest material structure with which the material world can generate any positive experience.
B) The smallest amount of positive experience which can possibly exist in isolation given physical constraints.

In practice the two are quite similar, and might well be the same thing, though it gives me pause: B is conceptually more accurate, but whereas we might be able to scientifically pin down A, it's hard to imagine ever proving that some minimal amount of experienced happiness is greater or less than some other. Happiness itself can't communicate, and communication is about the only means we have of establishing what anything other than our current selves feels. So if we suspect that A and B can differ, it gives us a serious theoretical challenge - can we conceive of a way of establishing what B is even given perfect technology, such that we can theoretically hope to quantify happiness?

If the answer is 'no', I don't think it's a huge *practical* challenge - we just get on with doing the best we can based on the information we have - but it's quite unsatisfying, and might make util hard to sell as a scientific principle.

A bigger problem comes when you consider anti-hedons, which are the inverse of the above (call them -A and -B). I cannot think of any way in which we could show that -A and A actually sum to 0 experienced happiness, nor does it seem obvious that they should do so. But whereas considering hedons only we can treat B as an ideal which, even if we can't isolate, we can guesstimate in many ways, it's far from clear that B and -B (the ideal case) should sum to 0 happiness either.

It's a potentially disastrous aspect of what basically seems like at the very least the most scientific ethical view possible that it seems almost arbitrary even at a theoretical level. It's hard to think of any way we could say someone who thinks -B and B sum to -B, to B, or to anything in between is actually wrong. But without any way to reach such agreement, it seems like the whole practice is massively flawed – pure negative utilitarians and pure positive utilitarians would end up thwarting each other at every turn, with everyone in between, for no other reason than that they picked some point on the scale arbitrarily, and those in between could still have huge practical disagreements.

It bears saying that preference utilitarians don’t escape this conundrum – they still rely on some sort of preferilon and anti-preferilon concepts to make sense of their ideas.

I haven’t managed to think of a solution to this problem, except to suggest we just assume that B + -B does = 0. Or we could assert that B + -B = either B, -B or 0 (anything else being fully arbitrary), and trust that the first two are so obviously contrary to our experience that we cannot but treat them as false.

This might also help to clarify some terminology: anyone who thinks B + -B < 0 is a negative utilitarian in some sense, but obviously some powerfully different sense to someone who thinks B + -B = -B. Maybe we should create a different term for the latter to avoid confusion – ‘terminal utilitarian’ (neater than ‘pinprick negative utilitarian’, IMO :P), or something?


After thinking about this a bit more, I have no idea whether the happiness that animals like those currently alive feel is quantized, suspect that (A) probably is continuous (though I'm not sure), and think more strongly that (B) is continuous; all stated with absolutely no justification.

(Man, I suck at formatting. Anyway, back to incompetently writing a python program because I don't like any of the online todo list apps...)


Re: The correct type of utilitarianism

Postby Arepo on 2012-06-12T18:24:00

Hutch wrote:Wobegon.

(My sleepless brain is too absent to parse the question, or whether or not the question is rhetorical, so instead I opted for what I convinced myself was a clever reply.)


As rhetorical as you wanted it to be. It was a garbled reference to pop-Arthurian legend :P

Yeah, in the end the universe does not have a preference for a moral theory; in the end, scientifically, the word "should" is meaningless.


Except that the 'universe doesn't have a preference' claim doesn't obviously imply the 'no answer is more correct' conclusion. We can say that rights are irrelevant because any claim about their existence that invokes the material world is clearly false, for example. So I don't want to totally give up on the possibility of the sum B + -B having some kind of 'right' answer, since we can imagine getting to a total impasse if it doesn't, based on nothing but our inconstant prejudices.

I totally agree. I've been really really unimpressed with academic philosophers; in the end I think that if there's a utilitarian intellectual revolution it's much more likely to come from a place like this forum than a university. It's kind of a weird thought, that some guys on an internet forum are the ones who understand philosophy, instead of the people that really smart people and institutions have decided are the leading experts on the subject.


I don't think it's a universal rule. I think Alastair Norcross is very good, as pure theorists go, and Toby Ord has done a lot for practical ethics. Are you familiar with these guys? Come to think of it, can I persuade you to start a proper intro thread in the top forum, so we have some idea which topics you'll be familiar with?

Yeah. I remember encountering this philosophy paper written by a Stanford philosopher that tried to tackle the "paradox" that if someone comes to your door with a gun, asks you a question, and will shoot you if you tell the truth, it's correct to lie. Her brilliant solution was that the other person wasn't a moral actor because he was doing something immoral and so you didn't have an obligation to tell them the truth.


There's a lot of invoking of - let's say - extra-material principles outside utilitarianism (less so within, thankfully). Agency, responsibility, justice, causality etc.

After thinking about this a bit more, I have no idea whether the happiness that animals like those currently alive feel is quantized, suspect that (A) probably is continuous (though I'm not sure), and think more strongly that (B) is continuous; all stated with absolutely no justification.


By 'continuous' I'm guessing you mean non-atomic? If so I don't really agree about A - it seems that to think there's no smallest physical vessel for consciousness, you have to believe in some sort of universal consciousness, which I don't (I don't completely reject the idea; it just seems less elegant than the alternatives); otherwise you obviously need some number of fundamental particles arranged in some particular way for it to emerge. B I don't really have a view on, for much the same reasoning as above; I don't know how to infer anything at all about it, given that it can't communicate with me.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: The correct type of utilitarianism

Postby Brian Tomasik on 2012-06-13T14:38:00

Great conversation, Hutch. Here's a partial reply. I hope to return to the rest eventually...

Hutch wrote:Yeah, I think that's right; in the end it's sometimes hard to quantify utility.

Cool -- I'm glad we've managed to communicate successfully here.

Hutch wrote:There might be an issue where lots of immoral assholes don't think about morality, and when they do, think that what's moral is to do whatever makes themselves happiest...

Yes. Psychopaths are an extreme illustration. Thinking about this accentuates my already-strong intuition that no one really "deserves" punishment, although punishment may be a necessary evil in some cases.

Hutch wrote:If they "know it" in that they're pretty sure because they're good at reading facial expressions or something like that, then you should just do whatever it takes to make your facial expression that of a rule utilitarian, even if that means convincing yourself that you are one.

Unless you're a master of temporary self-delusion, in order to convince yourself that you are one, you need to actually be one.

Hutch wrote:If "knowing it" means something like "they have lots of previous data", then it doesn't matter what you do with your facial expressions--he's already decided what he's going to do, so you might as well take both boxes. (Unless some of that previous data is about you, in which case, according to act utilitarianism, you should have lived your life so as to convince people you're a rule utilitarian, even if that means convincing yourself of it.)

Agree.

Hutch wrote:But if it's neither of those things--if he absolutely knows what you're going to do

Yes, this is the main scenario for Newcomb. Omega simulates an atom-for-atom copy of your brain in a virtual world and observes what choice it makes. This is necessarily the same choice that your brain will make in the real case, because it's atom-for-atom identical. It's the exact same program running with the exact same inputs.

Hutch wrote:The first is that you're treating the universe as deterministic

Yes, I think it probably is.

Hutch wrote:in which case *the question of what to do or what to believe is irrelevant because your actions are already determined anyway*.

Here we disagree. :) In fact, I used to believe the same, but I changed my mind: See the top part of "If Free Will Were Coherent, We Ought to Believe in It."

What do you find objectionable about compatibilism?

Re: The correct type of utilitarianism

Postby DanielLC on 2012-06-13T21:52:00

I have three problems with punishment:
  • It's self-referential. Suppose Alice and Bob constantly hurt each other. Is Alice evil because she hurts Bob, who is good because he hurts Alice, or vice versa?
  • If you get around that (by defining guilt separately from utility), then you end up with the idea that a universe full of evil people being tortured is a utopia.
  • I see no reason to believe that pain feels different if you're evil, and I find the idea of an ethical system based on anything else to be bizarre.

Re: The correct type of utilitarianism

Postby Brian Tomasik on 2012-06-15T09:48:00

Arepo wrote:
What if you've already let on to him that you're an act-utilitarian by accident? Or what if he asks you directly if you're really going to keep your promise, so that if you answer you'll have to lie, and he'll see it?

Point is, being an act-utilitarian can have bad side-effects in some situations, and in possible worlds where these side-effects happen enough, you should stop being one altogether.

I don’t understand what course of action this reasoning is supposed to persuade anyone to take. Is it that one just shouldn’t ‘be an act utilitarian’ at all, just in case they hit situations like this? Is it that they can be an act utilitarian, but if they do hit situations like this they should instantly switch to being a promise-keeper? (Isn’t that still making the act-utilitarian decision?) Is it the same, but they should switch iff they know the driver can read them near-perfectly?

Any of those would help, especially #2 and #3. Abandoning act utilitarianism entirely is a drastic step and should only be done if these situations happen enough. But I do think following the general rule of "always keep your promises" may not be a bad idea. If you can instantaneously turn yourself into a promise-keeper at the right times, that's great.

A more difficult case is something like counterfactual mugging. Is it good to become the kind of person who takes that deal? I think it likely is, although it depends on how frequent these kinds of scenarios are. Let's assume that Omega is completely trustworthy for the thought experiment; this is not Pascal's mugging.

Arepo wrote:To be honest I suspect that we’ve made more headway on and around this forum than real academic philosophers have in the same amount of time (and possibly a lot longer)

Quite possibly. Academics have different incentives, and as you say, those incentives often lead to arguments about things that are obvious to us.

Arepo wrote:
For example: "If you make a promise, always keep it."

The trouble is, no rule utilitarian believes this. They always allow exceptions to their rules when you probe them enough.

I would seriously consider following it. Can you think of a counter-example? The key here is if you make a promise. This means you should make solid promises very sparingly.

Arepo wrote:But as far as I can tell they never offer an algorithm for when you should break a rule, just a few example cases. Presumably such a rule would be based around optimising total utility, but then unsurprisingly enough, it lapses back into global/act utilitarianism.

Yes, of course. The only reason to ever be a rule utilitarian (or an act utilitarian) is because of global utilitarianism.

Arepo wrote:
In fact, the version I usually hear is that Omega simulates your brain beforehand to see what you do.

It seems like the inherent randomness of quantum physics would make this impossible.

I've generally heard that quantum "randomness" isn't sufficient to affect the brain at the level of neurons, synapses, and neurotransmitters. Maybe you would say that the randomness could have butterfly effects, such that it would ultimately change which neurons fire. That might be, but I think the property of a brain as to whether it will one-box or two-box is sufficiently high-level and static that this is unlikely to matter. The same person will tend to make the same decision about one-boxing or two-boxing from one day to the next -- even one year to the next -- even though the atoms in her brain have totally rearranged themselves by far more than quantum fluctuations in the intervening period.

Even if you don't buy that -- and I think you should :) -- consider the following twist on the problem. You are told by a completely trustworthy Omega that your brain is in fact digital and contains error-correcting codes to guard against oddities introduced by quantum fluctuations. Omega has your source code and has run another copy of you to see if you pick one box or two. This scenario is sensical, no?

Arepo wrote:
I only care about emotion that is consciously perceived.

By whom? The homunculus sitting inside your brain? Why don’t utilons and antiutilons get to perceive themselves?

Perceived by the person who has the emotions, roughly along the lines that people mean when they say they were "conscious of" some experience. I happen not to care about the utilons that didn't bubble up to consciousness. For example, most people don't care about nociception that occurs when they're under general anesthesia.

This is a value choice on my part that could change with convincing thought experiments otherwise. For example, some have said that "even though you don't care about unconscious utilons, that doesn't mean they don't matter, because lots of people don't care about even the conscious utilons of other organisms, and yet those still matter." This is a decent intuition pump, but my reply is just that I care about any utilons that matter to their own conscious mind. There isn't any "conscious person" who cares about my unconscious utilons. (Well, maybe you do, but not in the visceral sense that I'm talking about here.)

Arepo wrote:But ‘your choice of decision theory’ is in itself a decision. How do you determine how to make it?

Presumably global utilitarianism. You have to start somewhere, and that seems intuitively the best way to try to begin.

Arepo wrote:This still sounds the same once you disregard essential identity as a meaningful consideration; you’re basically saying an animal comprises a smaller collection of hedons and antihedons than a human does. In which case a human can presumably have more total hedons or more total antihedons, and given 1 human and 1 animal, any given hedon/antihedon among them is, a priori, more likely to be in the human.

Not quite. Suppose the universe consists of two people, Alice and Bob. Alice lives for 2 days at 1 util per day. Bob lives for 1 day at 100 utils per day. You're still twice as likely to find yourself being Alice.

Anthropic self-sampling is over experience-moments, not experience-intensities.
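Spelled out with the numbers above (a trivial sketch; the person-day bookkeeping is just for illustration):

# Self-sampling is over experience-moments (person-days here), not intensities.
alice_days, alice_utils_per_day = 2, 1
bob_days, bob_utils_per_day = 1, 100

total_moments = alice_days + bob_days
print(alice_days / total_moments, bob_days / total_moments)  # ~0.67 vs ~0.33 -> twice as likely to be Alice

# Intensity only enters the utility totals, not the sampling weights:
print(alice_days * alice_utils_per_day, bob_days * bob_utils_per_day)  # 2 vs 100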

Hutch wrote:Wobegon.

I guess this is why you're above average. :)

Hutch wrote:a lot of the appearance of meaningfulness of decision theories comes from contradictory assumptions about free will: many of the "paradoxes" are of the form, 'assume some being already knows what you're going to do and acts accordingly; what do you do?', which of course is contradictory because you're first assuming that your actions have already been decided (they must have been in order for them to be known and predicted), and then assuming that you have control over which decision you make.

I used to think this was problematic, but I don't any more. Consider this: Decision theory is relevant to artificially intelligent computer programs, yes? But AI programs are clearly deterministic.

Hutch wrote:But then again, I guess it's kind of weird that people aren't fazed by subjecting a chicken to five weeks of agony so that their dinner tastes a bit better (let alone by the massive amount of animal suffering in the wild).

:)

Hutch wrote:I remember encountering this philosophy paper written by a Stanford philosopher that tried to tackle the "paradox" that if someone comes to your door with a gun, asks you a question, and will shoot you if you tell the truth, it's correct to lie.

I think lying is a different case from keeping promises. I agree that you should lie here, just as you should lie to the Nazis who ask if you're hiding Jews in your attic.

Perhaps a similar example where you shouldn't keep your promises is if you make a deal with someone and then that person tricks you or fails to uphold the bargain. But in that case, it's not really breaking your promise, because the contract has already been violated.


DanielLC wrote:
  • It's self-referential. Suppose Alice and Bob constantly hurt each other. Is Alice evil because she hurts Bob, who is good because he hurts Alice, or vice versa?
  • If you get around that (by defining guilt separately from utility), then you end up with the idea that a universe full of evil people being tortured is a utopia.
  • I see no reason to believe that pain feels different if you're evil, and I find the idea of an ethical system based on anything else to be bizarre.

Heh, you are a true utilitarian, DanielLC. These questions are not so mysterious to people who have other ways of thinking about the matter. :)

Here's how I believe ordinary people would answer. Note: I don't endorse these statements!
  • Both Alice and Bob are evil because they hurt each other. Punishment is not "good" unless it's done by a legitimate authority for the right reasons.
  • Not a utopia -- just "the right thing to do." The right thing could involve lots of pain and suffering, so the religious fundamentalists claim.
  • No, pain doesn't feel different if you're evil. But how pain feels isn't the issue at hand.

Re: The correct type of utilitarianism

Postby Arepo on 2012-06-15T13:59:00

Alan Dawrst wrote:A more difficult case is something like counterfactual mugging. Is it good to become the kind of person who takes that deal? I think it likely is, although it depends on how frequent these kinds of scenarios are. Let's assume that Omega is completely trustworthy for the thought experiment; this is not Pascal's mugging.


As with Newcomb’s paradox, I just think the situation is poorly defined. I don’t believe anyone is completely trustworthy, and I’m certainly not willing to assume it without being given specific evidence to show why I might believe it. And as with the hitchhiker, my benefit, such as it is, depends on my having already persuaded the mugger that I’d accept his offer, not on actually *being* the type of person who’d accept it now with no expectation of subsequent gain. Obviously if I have an expectation that taking the deal now will benefit me in future for whatever reason, I have a non-RU reason to accept.

I would seriously consider following it. Can you think of a counter-example? The key here is if you make a promise. This means you should make solid promises very sparingly.


It sounds like you should give the example, since you could always claim I’m not being restrictive enough. Sure, if you make promises sparingly enough you might never have to break them, but that might entail never making any promises at all, or only so rarely that they have no real world import. What’s the most frequent/likely type of promise you might suggest one makes as an RU?

Yes, of course. The only reason to ever be a rule utilitarian (or an act utilitarian) is because of global utilitarianism.


As I said above, I don’t really think of AU and GU as different; I think the idea that they differ comes primarily from uncharitable reading of early utils by subsequent RUs.

I've generally heard that quantum "randomness" isn't sufficient to affect the brain at the level of neurons, synapses, and neurotransmitters. Maybe you would say that the randomness could have butterfly effects, such that it would ultimately change which neurons fire. That might be, but I think the property of a brain as to whether it will one-box or two-box is sufficiently high-level and static that this is unlikely to matter. The same person will tend to make the same decision about one-boxing or two-boxing from one day to the next -- even one year to the next -- even though the atoms in her brain have totally rearranged themselves by far more than quantum fluctuations in the intervening period.

Even if you don't buy that -- and I think you should :) -- consider the following twist on the problem. You are told by a completely trustworthy Omega that your brain is in fact digital and contains error-correcting codes to guard against oddities introduced by quantum fluctuations. Omega has your source code and has run another copy of you to see if you pick one box or two. This scenario is sensical, no?


Ok, I believe the first sentence of the first paragraph (or at least I’m willing to presume it), but I still say no to the question – I’m still expected to trust Omega with no explanation of why. The more plausible version I’ve heard is where you imagine a gifted real world psychologist, who has a more plausible predictive success rate of something like 70% (and offers a favourable enough ratio that if you’re reasonably confident of that probability it doesn’t make sense to two-box on simple expectation alone), but now it seems much clearer that what matters is how I’ve conducted myself up to this point rather than how I conduct myself right now. If we had any serious belief that such a form of NP would ever actually occur, our best strategy would surely be to claim to be a one-boxer in all public venues, then take both. Maybe the difference between ‘one-boxers’ and ‘two-boxers’ is really just their belief in Omega ;)
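
(For concreteness, here's a minimal sketch in Python of that 'simple expectation' comparison against an imperfect predictor. The $1,000,000/$1,000 payoffs are the standard Newcomb amounts, assumed here rather than taken from anything above.)

# Expected payoff against a predictor who guesses your choice with probability p,
# assuming the usual payoffs: $1,000,000 in the opaque box, $1,000 in the clear one.
def expected_value(one_box, p, big=1_000_000, small=1_000):
    if one_box:
        # The opaque box is filled iff the predictor correctly foresaw one-boxing.
        return p * big
    # Two-boxing always gets the small amount; the big box is filled only if
    # the predictor wrongly expected one-boxing.
    return small + (1 - p) * big

for p in (0.5, 0.7, 0.99):
    print(p, expected_value(True, p), expected_value(False, p))
# At p = 0.7, one-boxing expects 700,000 vs. 301,000 for two-boxing, which is
# why the 70% psychologist case already favours one-boxing on expectation alone.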

If I happen to have ended up in a room with such a guy, and he’s rumbled me (or I didn’t follow the prevarication strategy), then that’s just tough. People who maximise their expected utility according to empirical evidence will miss out on the benefits of wishing wells that actually work, or fairy godmothers that turn you into a princess iff you stay at home and minimise your utility by slaving for your ugly stepsisters. They’ll also miss out when Omega shows up.

Also, it occurs to me now (and irritates me that it hadn’t before) that Omega’s assertion that he’s ‘perfectly simulated me’ is impossible. In his original simulation, did he factor in himself telling me that he’s simulated me while knowing what the outcome of the simulation was, as he’s doing now? If not, the situation is different. If so, it can’t have been the original simulation, since in the original simulation (by definition) he had not yet simulated me to know what the outcome was.

Perceived by the person who has the emotions, roughly along the lines that people mean when they say they were "conscious of" some experience. I happen not to care about the utilons that didn't bubble up to consciousness. For example, most people don't care about nociception that occurs when they're under general anesthesia.


That’s fine. It still seems totally conceivable that at each waking moment you’re experiencing some combination of positive and negative experiences that you can’t precisely identify, as with so many of our other sensory functions. Can you tell me exactly how many hedons you’re experiencing right now (for any definition of hedon you like)? If not, I don’t see how you can claim to know that you aren’t experiencing a single antihedon.

It sounds like you think ‘a mind’ is a much more coherent unit than I do. Conceptually I think we both agree that it’s basically a collection of hedons and antihedons, but maybe I think of it in that way more literally than you do?

Presumably global utilitarianism. You have to start somewhere, and that seems intuitively the best way to try to begin.


But GU is either solely a theory of value, in which case it can’t in itself tell you how to make such a decision as ‘which decision theory to pick’, or it’s a decision theory, in which case you’ve already picked one. It seems like this logic will recurse as far as you want to push it, so you can’t escape this problem. I claim I can, by denying a difference between ‘theory of value’ and ‘decision theory’, or at least the sort of difference that means one does not entail the other. I just seek (subject to irrationality) to maximise utility.

Arepo wrote:This still sounds the same once you disregard essential identity as a meaningful consideration; you’re basically saying an animal comprises a smaller collection of hedons and antihedons than a human does. In which case a human can presumably have more total hedons or more total antihedons, and given 1 human and 1 animal, any given hedon/antihedon among them is, a priori, more likely to be in the human.

Not quite. Suppose the universe consists of two people, Alice and Bob. Alice lives for 2 days at 1 util per day. Bob lives for 1 day at 100 utils per day. You're still twice as likely to find yourself being Alice.

Anthropic self-sampling is over experience-moments, not experience-intensities.
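
(A tiny Python sketch of the arithmetic being claimed here, using only the numbers already given in the example:)

# Self-sampling over experience-moments (days, here), not over intensities.
people = {"Alice": {"days": 2, "utils_per_day": 1},
          "Bob":   {"days": 1, "utils_per_day": 100}}

total_moments = sum(p["days"] for p in people.values())

for name, p in people.items():
    prob = p["days"] / total_moments        # chance of "finding yourself" as this person
    total_utils = p["days"] * p["utils_per_day"]
    print(name, prob, total_utils)
# Alice: probability 2/3 with only 2 utils in total; Bob: 1/3 with 100 utils.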


I think this is begging the question. The phrase ‘I am likely to be Bob’ is incoherent if there is no essential ‘I’; ‘Bob’ is just the name we’ve given to the collection of utils that we’re semiarbitrarily recognising as him. I feel like I’m taking your position in the tableness discussion here. Bobness is tableness.

In other words, I just don’t see reason to distinguish between experience-moments and experience-intensities the way you want to. It seems like an unnecessary distinction, subject to Occam’s Razor.


I used to think this was problematic, but I don't any more. Consider this: Decision theory is relevant to artificially intelligent computer programs, yes? But AI programs are clearly deterministic.


You’re asking us to endorse a statement that ‘[thing we think is too poorly defined to be relevant to the real world] is clearly relevant to the real world in situation x’. I have a problem with this :P Try this: ‘the outcome of algorithms is relevant to us when we run computer programs iff we have an interest in one outcome over another, and insofar as we do have such an interest, it’s the expression of our underlying value/motivational system’. I don’t see a need for something called ‘decision theory’ in that statement, if its remit is something other than ‘knowing what variable to maximise’ and ‘knowing how to maximise that variable’. If it’s one of those things, then it already has a name.

I think lying is a different case from keeping promises. I agree that you should lie here, just as you should lie to the Nazis who ask if you're hiding Jews in your attic.


What if they’re satisfied with nothing less than a promise from you that you’re telling the truth? (If they’re unsatisfied, they shoot you/check your attic, respectively – maybe both, possibly in both ;))
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.

Re: The correct type of utilitarianism

Postby Brian Tomasik on 2012-06-16T12:23:00

Arepo wrote:It sounds like you should give the example, since you could always claim I’m not being restrictive enough.

Well, I haven't thought of a good one yet. :) One that comes close is the scenario in which you're engaged in a nuclear arms race, and you've threatened to blow up the other side if they attack first. Now they have attacked first. Do you still blow up the other side and kill millions more people? I really don't know. (Dr. Strangelove's doomsday machine is relevant here.)

Arepo wrote:What’s the most frequent/likely type of promise you might suggest one makes as an RU?

Lots of small things, like "I promise to return this book to you" or "I promise to pay you back for dinner." I suppose that by social convention, these things include implicit disclaimers that the promise will be voided if, say, the person to whom you were going to return the book vanishes mysteriously. You haven't committed yourself to spending your entire life on a search mission to find the person just so you can return the book.

What I'm basically trying to suggest is that fitting in with social conventions about proper behavior can be important, even when the specific instance of proper behavior may be suboptimal when considered in isolation. We don't disagree on this point -- we're just using slightly different words.

Arepo wrote:If we had any serious belief that such a form of NP would ever actually occur, our best strategy would surely be to claim to be a one-boxer in all public venues, then take both.

Such as on a public forum like this one? Oops. :) But you don't have "any serious belief that such a form of NP would ever actually occur," so you're okay.

Arepo wrote:Also, it occurs to me now (and irritates me that it hadn’t before) that Omega’s assertion that he’s ‘perfectly simulated me’ is impossible.

Yes, this is a great point. I was actually thinking about it earlier today. :) I need to ponder more about how the Newcomb-ers would reply.

Arepo wrote:If not, I don’t see how you can claim to know that you aren’t experiencing a single antihedon.

I probably am experiencing some antihedons, though I'm not sure. In any event, I'm not defending the NU position, so I'm not fazed either way.

Arepo wrote:It sounds like you think ‘a mind’ is a much more coherent unit than I do. Conceptually I think we both agree that it’s basically a collection of hedons and antihedons, but maybe I think of it in that way more literally than you do?

I agree that a mind can be mildly non-coherent. Consciousness is not a single, centralized Cartesian theater.

But yes, I don't take hedons/antihedons as literally as you.

Arepo wrote:But GU is either solely a theory of value, in which case it can’t in itself tell you how to make such a decision as ‘which decision theory to pick’, or it’s a decision theory, in which case you’ve already picked one.

I meant the latter -- regarding it as a decision theory.

Arepo wrote:In other words, I just don’t see reason to distinguish between experience-moments and experience-intensities the way you want to. It seems like an unnecessary distinction, subject to Occam’s Razor.

There are certainly experiences that are not very emotional but that last a long time. I'm at a loss for what more to say...

Arepo wrote:What if they’re satisfied with nothing less than a promise from you that you’re telling the truth?

When I use the word "promise," I mean a mutually voluntary contract into which two parties enter.

Re: The correct type of utilitarianism

Postby DanielLC on 2012-06-17T00:09:00

When I use the word "promise," I mean a mutually voluntary contract into which two parties enter.


Suppose someone sells you a house, and you promise to pay the cost over a period of time. Can you then back out of it, on the basis that since you'd be homeless if you didn't enter the contract, it wasn't voluntary?


Re: The correct type of utilitarianism

Postby Brian Tomasik on 2012-06-17T11:05:00

DanielLC wrote:Suppose someone sells you a house, and you promise to pay the cost over a period of time. Can you then back out of it, on the basis that since you'd be homeless if you didn't enter the contract, it wasn't voluntary?

No, it was still voluntary at the time it was entered into, so in that regard you shouldn't back out.

That said, society has an established culture that permits bankruptcy, so it should be implied in an agreement like the one you suggest that the purchaser of the house may resort to desperate measures to prevent her kids from starving. In a society that didn't allow bankruptcy, I would definitely want some escape clauses before I signed the contract!

Re: The correct type of utilitarianism

Postby Hutch on 2012-06-17T17:20:00

Alan Dawrst wrote:Great conversation, Hutch. Here's a partial reply. I hope to return to the rest eventually...

Hutch wrote:Yeah, I think that's right; in the end it's sometimes hard to quantify utility.

Cool -- I'm glad we've managed to communicate successfully here.

Hutch wrote:There might be an issue where lots of immoral assholes don't think about morality, and when they do, they think that what's moral is to do whatever makes them happiest...

Yes. Psychopaths are an extreme illustration. Thinking about this accentuates my already-strong intuition that no one really "deserves" punishment, although punishment may be a necessary evil in some cases.

Hutch wrote:If they "know it" in that they're pretty sure because they're good at reading facial expressions or something like that, then you should just do whatever it takes to make your facial expression that of a rule utilitarian, even if that means convincing yourself that you are one.

Unless you're a master of temporary self-delusion, in order to convince yourself that you are one, you need to actually be one.

Hutch wrote:If "knowing it" means something like "they have lots of previous data", then it doesn't matter what you do with your facial expressions--he's already decided what he's going to do, so you might as well take both boxes. (Unless some of that previous data is about you, in which case, according to act utilitarianism, you should have lived your life so as to convince people you're a rule utilitarian, even if that means convincing yourself of it.)

Agree.

Hutch wrote:But if it's neither of those things--if he absolutely knows what you're going to do

Yes, this is the main scenario for Newcomb. Omega simulates an atom-for-atom copy of your brain in a virtual world and observes what choice it makes. This is necessarily the same choice that your brain will make in the real case, because it's atom-for-atom identical. It's the exact same program running with the exact same inputs.
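
(A toy illustration of that point, sketched in Python; the names are invented purely for illustration, not anyone's actual formulation. It only shows that a deterministic decision procedure run twice on the same input cannot diverge.)

def agent(situation):
    # A fixed, deterministic decision procedure: same input, same output, every time.
    return "one-box" if situation["trusts_predictor"] else "two-box"

def omega_fills_opaque_box(situation):
    # Omega runs an identical copy of the agent and fills the opaque box
    # iff the copy one-boxes.
    return agent(situation) == "one-box"

situation = {"trusts_predictor": True}
prediction = omega_fills_opaque_box(situation)   # the simulated run
actual_choice = agent(situation)                 # the "real" run
assert (actual_choice == "one-box") == prediction  # they necessarily agree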

Hutch wrote:The first is that you're treating the universe as deterministic

Yes, I think it probably is.

Hutch wrote:in which case *the question of what to do or what to believe is irrelevant because your actions are already determined anyway*.

Here we disagree. :) In fact, I used to believe the same, but I changed my mind: See the top part of "If Free Will Were Coherent, We Ought to Believe in It."

What do you find objectionable about compatibilism?



So, first of all I don't buy that there is any (finite) chance of free will existing. But let's say I did. Then what you're saying is that you're best off assuming that there is free will. I kind of buy it, with the caveat that free will isn't well enough defined for that statement to be meaningful, but let's take that as the truth for now. What you're saying now is the following:

1) The super smart guy can predict what you're doing absolutely.
2) You might have free will, so you might as well assume you do.
3) So you should pick one box, because if there is free will then he'll put the money in it, and if not you didn't have a choice anyway.

The thing is, *statements 1 and 2 are contradictory*. Even if you think both have some chance of being true, there is no possible universe in which both are true. And only if both are true does it make sense to take one box.


Re: The correct type of utilitarianism

Postby Brian Tomasik on 2012-06-18T04:11:00

Hutch wrote:
Alan Dawrst wrote:See the top part of "If Free Will Were Coherent, We Ought to Believe in It."

So, first of all I don't buy that there is any (finite) chance of free will existing. But let's say I did. Then what you're saying is that you're best off assuming that there is free will.

Sorry -- I should have known that would be confusing. :)

What I meant to say was that I used to think that if free will was false then our actions wouldn't matter, but I don't anymore, and I tried to refer to the top section of my old essay where I discuss my change of views. The bottom section of that essay is what I used to think, but it's not what I think anymore.

Hutch wrote:free will isn't well enough defined for that statement to be meaningful

Yes, this is my reply to AlanDawrst2007, who wrote the original piece.

Re: The correct type of utilitarianism

Postby Arepo on 2012-06-18T10:56:00

Alan Dawrst wrote:Well, I haven't thought of a good one yet. :) One that comes close is the scenario in which you're engaged in a nuclear arms race, and you've threatened to blow up the other side if they attack first. Now they have attacked first. Do you still blow up the other side and kill millions more people?


In a game of chicken that I think it's sufficiently important to win, I follow the 'pull out my steering wheel and brandish it at the other driver' strategy. If they beat me to it, I don't follow the strategy. So while I might build a doomsday device if I were persuaded nuclear deterrence were a sound strategy, had I any way of averting a counterstrike once they launched, it would seem obviously sensible to take it.
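
(For what it's worth, a toy payoff table for chicken in Python; the numeric payoffs are invented purely for illustration. It just shows why visible commitment works, and why it stops working once the other side has committed first.)

# (my payoff, their payoff), indexed by (my move, their move).
payoffs = {
    ("swerve",   "swerve"):   (0,   0),
    ("swerve",   "straight"): (-1,  1),
    ("straight", "swerve"):   (1,  -1),
    ("straight", "straight"): (-10, -10),
}

def their_best_reply(my_move):
    # Given my visibly committed move, which reply maximises their payoff?
    return max(("swerve", "straight"), key=lambda theirs: payoffs[(my_move, theirs)][1])

print(their_best_reply("straight"))  # -> "swerve": commit to going straight and they back down
print(their_best_reply("swerve"))    # -> "straight": if I'm known to back down, they go straight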

Lots of small things, like "I promise to return this book to you" or "I promise to pay you back for dinner." I suppose that by social convention, these things include implicit disclaimers that the promise will be voided if, say, the person to whom you were going to return the book vanishes mysteriously.


I think you're cheating - you're trying to give an example of an unbreakable promise, with the caveat that you might sometimes break it if conditions that you don't define but 'will know when you see' are met.

What I'm basically trying to suggest is that fitting in with social conventions about proper behavior can be important, even when the specific instance of proper behavior may be suboptimal when considered in isolation. We don't disagree on this point -- we're just using slightly different words.


Sure.

Arepo wrote:If not, I don’t see how you can claim to know that you aren’t experiencing a single antihedon.

I probably am experiencing some antihedons, though I'm not sure. In any event, I'm not defending the NU position, so I'm not fazed either way.


Ok. I think NU really suffers (no pun intended) from this challenge though.

Arepo wrote:
But GU is either solely a theory of value ... or it’s a decision theory, in which case you’ve already picked one.

I meant the latter -- regarding it as a decision theory.


So why do you need another?

There are certainly experiences that are not very emotional but that last a long time. I'm at a loss for what more to say...


A discussion for a new thread on a day when I have more free time? :P

Arepo wrote:What if they’re satisfied with nothing less than a promise from you that you’re telling the truth?

When I use the word "promise," I mean a mutually voluntary contract into which two parties enter.

What fundamentally distinguishes an action you voluntarily take because you have an aversion to being shot from an action you voluntarily take because you have an aversion to being hungry/not having read a book etc? They sound ultimately equivalent to me, before one invokes dubious metaphysics.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.

Re: The correct type of utilitarianism

Postby Brian Tomasik on 2012-06-18T14:46:00

Arepo wrote:I think you're cheating - you're trying to give an example of an unbreakable promise, with the caveat that you might sometimes break it if conditions that you don't define but 'will know when you see' are met.

Heh, yeah, I suppose. I think following social conventions is a pretty good guideline much of the time, because (a) those conventions often evolved for moderately good reasons, and (b) those conventions are the standards to which people will hold you when they judge your character.

Arepo wrote:So why do you need another?

Global utilitarianism might not be specific enough to advise on particular cases. But I haven't studied global utilitarianism (only read a few pages of Toby's thesis), so I can't really comment further.

Arepo wrote:What fundamentally distinguishes an action you voluntarily take because you have an aversion to being shot from an action you voluntarily take because you have an aversion to being hungry/not having read a book etc?

Ha, that's a very nice point. I don't have a good answer except again to rely on social convention ("you know it when you see it").

I don't have strongly developed views in this area, so I'm definitely open to changing my definitions and positions in response to arguments like the one you just made.

Re: The correct type of utilitarianism

Postby Pat on 2012-10-25T20:04:00

Off-topic and a few months late:
Hutch wrote:(Man, I suck at formatting. Anyway, back to incompetently writing a python program because I don't like any of the online todo list apps...)

A comment on an article critical of to-do lists reminded me of Hutch's post:
We have reached the TODO list app event horizon, where it actually takes longer to evaluate all the existing TODO list apps than it does to write your own TODO list app.


