Average vs Total

Postby Lord Tonberry on 2008-11-03T22:33:00

Hi first post, I'd like to ask where other utilitarians stand on the Average Utilitarianism vs Total Utilitarianism debate.

Is it better to work for the average well-being or the total sum well-being?

I think most would lean to the average as it's the emotionally 'right' answer, since total utilitarianism would lead to what many would consider unsavoury ethics. Most famous is the 'repugnant conclusion'. Maybe instead of trying to improve the lives of citizens, governments should just simply encourage them to reproduce, possibly even mandate it? Also, despite the most famous utilitarians being against meat-eating, it could be argued from a TU view that if there wasn't a meat industry those animals would never have been born.

Despite those ethical dilemmas, I'm a total utilitarian (and indeed a vegetarian). I don't understand what's so special about an individual being happy as opposed to that happiness being spread across many individuals. Many of the arguments for AU just strike me as similar to those arguments against utilitarianism in general - they appeal to the gut. To me the conclusion of AU, where it would be better to have one extremely happy person alive as opposed to many simply very happy people, is far more repugnant.
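
A rough numerical sketch in Python of the two conclusions being contrasted here (the happiness values are invented purely for illustration):

    # Illustrative only: all happiness numbers are made up for the example.

    def total(pop):
        return sum(pop)

    def average(pop):
        return sum(pop) / len(pop)

    # The 'repugnant conclusion': total utilitarianism ranks a huge population of
    # lives barely worth living above a small population of very happy lives.
    very_happy = [90] * 100             # 100 people at happiness 90 -> total 9000
    barely_worth_living = [1] * 10000   # 10,000 people at happiness 1 -> total 10000
    assert total(barely_worth_living) > total(very_happy)

    # The mirror-image objection to average utilitarianism: one ecstatic person
    # scores higher on average than many very happy people.
    one_ecstatic = [100]
    many_very_happy = [90] * 1000
    assert average(one_ecstatic) > average(many_very_happy)
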
Lord Tonberry
 
Posts: 1
Joined: Mon Nov 03, 2008 9:02 pm
Location: Leicester, UK

Re: Average vs Total

Postby Arepo on 2008-11-03T23:17:00

I'd be surprised if you found more average than total utils here. Average just gives too many obviously counter-productive (rather than counter-intuitive, though probably that too) conclusions. If 5 joyful people of happiness 50 are living with one pleasantly content person of happiness 10 (average 43.3 happiness), then they should murder him even if a) he wants to continue living and b) they all like him, so that killing him permanently reduces their happiness by five apiece, because then the average rises to 45.

Everyone loses, and yet AU has it that this is a good thing.
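
The arithmetic of this example, written out as a small Python sketch (the numbers are those given above; the code just checks them):

    # Arepo's example: 5 people at happiness 50, one at happiness 10.
    before = [50, 50, 50, 50, 50, 10]
    # After the murder: the sixth person is gone and the survivors each lose 5.
    after = [45, 45, 45, 45, 45]

    avg_before = sum(before) / len(before)   # 260 / 6 = ~43.3
    avg_after = sum(after) / len(after)      # 225 / 5 = 45.0
    total_before = sum(before)               # 260
    total_after = sum(after)                 # 225

    assert avg_after > avg_before      # average utilitarianism counts this as an improvement
    assert total_after < total_before  # total utilitarianism counts it as a loss
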

I'm not 100% convinced that TU is the only alternative to AU yet, and I still dislike the repugnant conclusion, which we've discussed briefly in this thread. I agree with everything Ryan said, but my objection isn't intuitive (well, I'm being slightly disingenuous - I do find the conclusion repugnant, but I feel that the logic of my objection is unrelated to that intuition). Rather, I can't parse the claim that (happy) existence is superior in any useful sense to nonexistence. This might be unusual, but I honestly feel that I'd have no serious qualms about someone erasing me from history if doing so didn't decrease the net happiness of the world (though I'd like to think it would...).

So since the 'better' comparison doesn't even seem to work introspectively, I can't easily come to terms with the idea that we can apply it to each other.

Still, I haven't found a coherent schema that rejects TU yet, so eventually I might just have to give up and assume it's accurate. For now, let's say I'm sympathetic to it in theory, but wary of it in practice.

As you say, TUs can (and I think Toby Ord does) claim that at least in theory a livestock industry is a good thing for exactly the reason you gave. I imagine Toby's strongly against any such industry that doesn't clearly give the animals in question good lives, though. I'm pretty sure he's also vegetarian (maybe even vegan?).

I'm unconvinced anyway - for one thing, if livestock industries were abolished, the land could (in most cases) be used to grow vegetables/cereals etc for direct consumption by humans. If we don't assume that humanity is at least potentially happier under the right governments than... bovinity... then conclusions about livestock farming will be the least of our concerns.

But overpopulation worries me more - it seems to be a major contributor to global suffering and to risks of global catastrophe, but it's hard to say at what point the increase in suffering actually would supersede (or has already superseded) the increase in happiness. I feel like we should at least take a precautionary stance and say we shouldn't worry about increasing population unless we're very confident the good will outweigh the suffering, but it's not easy to justify that view. The best I can think of is that in the current world, suffering seems to be a) much easier to generate, b) much harder to eliminate, and c) much more acute than happiness. Maybe that's a stronger argument than it feels like...

Anyway, welcome to the forum :) Please write an intro thread, if you feel so inclined.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Average vs Total

Postby RyanCarey on 2008-11-04T09:53:00

Hi Lord Tonberry,
I warmly welcome you to Felicifia! There aren't any forums with its utilitarian focus online, which is why it's so important that we give Felicifia.org a good chance.

Regarding the Average vs Total question, I regard myself as a total utilitarian with a fairly high level of confidence, just like I regard myself as a classical utilitarian. For me, it's about wellbeing. Anything that can feel good or bad, I'll include in my calculations. I believe there's such a thing as negative utility because it's possible to have negative feelings. So the repugnant conclusion is ethically undesirable when you create a large number of people with negative happiness. Discussing why people find that conclusion so repugnant is a fun psychological exercise.

If you want a really puzzling thought experiment, what happens when the neurons connecting the two halves of a person's brain are split? Must a total utilitarian admit that the world has just become a better place? Or, conversely, must an average utilitarian admit that the number of minds in the world has increased by one?
You can read my personal blog here: CareyRyan.com
RyanCarey
 
Posts: 682
Joined: Sun Oct 05, 2008 1:01 am
Location: Melbourne, Australia

Re: Average vs Total

Postby rob on 2008-11-09T19:04:00

I think any reasonable calculation has to consider that killing someone (who wants to avoid death) counts as an "unhappiness event". Just because they are no longer around to be happy or unhappy doesn't mean that we should just drop them out of the equation. Which would you prefer, a relatively unhappy life, or death? If you say they are about equal, then we've got something to plug into our equation.

How you count those who don't yet exist (would making more people cause greater happiness?) just gets kind of silly.

(btw, this is my first post, at some point I'll properly introduce myself :) )

rob
 
Posts: 20
Joined: Sun Nov 09, 2008 5:29 pm
Location: San Francisco

Re: Average vs Total

Postby Arepo on 2008-11-10T23:08:00

rob wrote:I think any reasonable calculation has to consider that killing someone (who wants to avoid death) counts as an "unhappiness event". Just because they are no longer around to be happy or unhappy doesn't mean that we should just drop them out of the equation. Which would you prefer, a relatively unhappy life, or death? If you say they are about equal, then we've got something to plug into our equation.

How you count those who don't yet exist (would making more people cause greater happiness?) just gets kind of silly.

(btw, this is my first post, at some point I'll properly introduce myself :) )


Welcome to the forum, rob :)

The total/average/person-affecting question is pretty abstract, but I don't think we can afford to call it silly if we hope to rigorously justify a consequentialist ethic. Any suggestion that we should increase a variable (as far as I can see) makes no sense unless we know what we're increasing it relative to. So if we try to increase happiness relative to the average, sometimes we can succeed by eliminating a happy agent. If we try to increase the total, we need to (I think) have some sort of baseline measure.

And if we deny a total sum of any kind, and only try to increase each person's welfare as much as possible, we end up with a really counterproductive conclusion: that we can substitute the birth of a really happy person with the birth of a really miserable person without having done anything contrary to our objective.

Ugh, I'm describing this atrociously... The basic point is that if you follow util to its logical conclusions, rather than treating it as more of a metaethical justification for following conventional/common sense ethics, these points are all more relevant to everyday life than you might think.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Average vs Total

Postby RyanCarey on 2008-11-10T23:35:00

Yeah, welcome to the forum, Rob! You're in good company (with Peter Singer) in considering that a death means a decrease in utility.
We'll all agree that death tends to involve suffering and, furthermore, it's bad for the family and the rest of the world to be deprived of this dying individual.

By the way, Arepo is right in saying that this average vs total debate can have practical implications. Take abortion for example. Or meat-eating.
You can read my personal blog here: CareyRyan.com
RyanCarey
 
Posts: 682
Joined: Sun Oct 05, 2008 1:01 am
Location: Melbourne, Australia

Re: Average vs Total

Postby rob on 2008-11-11T03:41:00

RyanCarey wrote:We'll all agree that death tends to involve suffering and, furthermore, it's bad for the family and the rest of the world to be deprived of this dying individual.

I would certainly hope you don't consider that suffering prior to death, or the sadness of those that miss the deceased, is the only thing that makes someone's death "bad". That would sort of go against common sense.

Say you've got an isolated village of people happily living their lives. No one else in the world knows of these people or will miss them if they are gone. Does that mean it is ok to painlessly murder all of them at once, with a shot to the head, while they are sleeping? No one will miss them, and no one will be made unhappy or suffer.

Only someone out of their mind, or supremely evil, would think so. (in my ever so humble opinion! ;) )

Unhappiness, in general, is our brain sensing that something which we are "programmed" to avoid has happened. Examples: pain is often caused by trauma to one's body, which we are predisposed to try to avoid. We also generally try to avoid the deaths of our offspring -- if it happens, it causes us unhappiness. We also try to avoid being too hot or too cold, or not having food or water or air. We try to avoid being without a sex partner. All of these make us unhappy, because we want to avoid them, for fairly obvious Darwinian reasons...each of them decreases the chances of our genes in future generations. Even those things that make us unhappy while not having an obvious Darwinian disadvantage can probably be traced to being a byproduct thereof...

We also want to avoid death (usually). Death may not make us unhappy per se (since we won't be around to *experience* said unhappiness), but by being something we want to avoid, it equates to unhappiness. Common sense tells us the same thing, the only time we miss this point is when we overanalyze it, and do so overly simplistically. Which is what can scare me about utilitarianism...there is lots of potential for that sort of "missing of common sense."

I suppose I see your point on abortion and animal rights. I would argue that reasonable people would see that there is a gray scale there....hurting a fly is "less wrong" than hurting, say, a dog. Killing a fetus soon after conception is less wrong than killing a teenager.

Some of this gets into some really tricky areas. Is it wrong to hurt a very realistic robot pet? I think we may find that our natural sense of right and wrong extends to those that we naturally feel empathy toward, whether those entities are actually alive or not. It is quite possible to feel empathy toward a non-biological object, just as we do toward fictional characters in movies and books.

rob
 
Posts: 20
Joined: Sun Nov 09, 2008 5:29 pm
Location: San Francisco

Re: Average vs Total

Postby Arepo on 2008-11-12T19:09:00

rob wrote:
RyanCarey wrote:We'll all agree that death tends to involve suffering and, furthermore, it's bad for the family and the rest of the world to be deprived of this dying individual.

I would certainly hope you don't consider that suffering prior to death, or the sadness of those that miss the deceased, is the only thing that makes someone's death "bad". That would sort of go against common sense.


This is one of the main criticisms people level at hedonistic utilitarianism. Assuming you're a totalising hedonistic util, you have the further reason that someone's death prevents them from experiencing any further happiness. So you have good reason not to murder happy villagers.

This also gives us equal reason (other things being equal) to kill someone whose future happiness total is likely to be negative. But only if you can't do anything else to improve their happiness (without harming other people).

It's still a weaker injunction than most people feel comfortable with, but there are several important caveats that make it (IMO) easier to accept:

1) Fear of death is intense - it can inflict huge psychic harm on people, so it's an excellent reason not to give people the impression they're at risk of death.

2) People go to extremes to avoid death (see 1). In a pseudo-utilitarian society where people thought they might be murdered every time someone thought it might slightly benefit someone else, people would look after themselves and their family by whatever means necessary. Such a society would surely either produce a ruthless militaristic government that forcibly kept them in line, or regress quickly to constant tribal warfare. Neither outcome is likely to generate much happiness.

3) Util precludes killing in many cases that more traditional moral systems don't. For example, it makes no sense for a utilitarian to claim that someone 'deserves' to die. What is 'desert', if not a deontic fiction?

4) As an alternative to HU, preference utilitarians sometimes claim that, since dying would thwart our preference to not die (and all the preferences we have for doing things that involve not being dead), PU precludes murder more strongly than TU.

But if we say 'if someone exists now, we should prioritise their current interests, even if their existential status changes', the same logic seems to imply that we should say 'if someone doesn't exist now, we should prioritise their current [non-existent] interests even if their existential status changes'. In which case it would be fine to ruin the lives of people who don't exist yet (eg. by smoking when pregnant, polluting the environment in a way that will harm future generations etc).

Another problem is that if someone's interests remain relevant after their death, then every time we assess an action, we have to consider not only people who are and will be alive, but also the long dead.

5) Most importantly (to CUs), the idea of CU maps parsimoniously onto the real universe, without any need for moral concepts. We can look at a group of people and see they're happy (or at least that their brains and behaviour are consistent with what we understand of happiness), and we can look at the empty set and say it's neither happy nor sad.

But we can't look at a dead person and see that they have preferences. Nor can we look at a living person and find a physical trait that corresponds to 'importance' or 'significance'. We can say something like 'they have lots of happiness and we find happiness good, so it's good that they're alive'. And we can substitute 'happiness' with 'satisfied preferences' - but if that's all you're saying, their preferences give you no more reason to keep them alive than their happiness.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Average vs Total. Death

Postby RyanCarey on 2008-11-12T21:16:00

rob wrote:
RyanCarey wrote:We'll all agree that death tends to involve suffering and, furthermore, it's bad for the family and the rest of the world to be deprived of this dying individual.

I would certainly hope you don't consider that suffering prior to death, or the sadness of those that miss the deceased, is the only thing that makes someone's death "bad". That would sort of go against common sense.

I confess! You've pinpointed the most controversial element of my ethical system. It seems that classical utilitarianism's treatment of death is so foreign and unintuitive that Arepo is experienced in defending it against others and himself.

Arepo's defences are all valid, of course. But I would put it more strongly. Creating ethics from intuition is bogus. Believing something doesn't make science true, so why should it make ethics true?

Why do we believe what we believe? Well I think we have valid and invalid reasons for believing things. Intuition is invalid. Evidence and reason alone are valid. Valid reasons are like a radio signal that we are trying to listen to, whereas invalid reasons are noise from the speakers of the car next to us that we are not interested in.

> When we discuss caring for our family, the signal is not hard to pick up. That's because the car next to us is playing the same tune as ours. Evolution wants us to care for our tribe. Ethics demands that we do what will maximise happiness.
> When we discuss death, the car next to us is playing noise that is very loud and that does not reinforce our signal. Evolution presents incredible biases. It tells us that we don't want to die. It tells us that we should value making new life. It tells us that we don't want our tribe to die, nor to be in a society in which people die regularly. We struggle with these ulterior motives. First, we need to acknowledge that they are there. We need to acknowledge that there are forces swaying us towards this pro-life stance. Second, we need to concentrate on the signal. And I think it tells us, like it always does, that wellbeing is what has been directly observed to be a good.
You can read my personal blog here: CareyRyan.com
RyanCarey
 
Posts: 682
Joined: Sun Oct 05, 2008 1:01 am
Location: Melbourne, Australia

Re: Average vs Total

Postby Arepo on 2008-11-12T22:50:00

Nicely put.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Average vs Total. Death

Postby rob on 2008-11-13T02:29:00

RyanCarey wrote:Well I think we have valid and invalid reasons for believing things. Intuition is invalid. Evidence and reason alone are valid.
Although in general I subscribe to this, I have yet to see a case made for utilitarianism that doesn't use intuition to justify its general "greatest happiness for all" concept. How is that better than having a "greatest suffering" goal, other than that the former seems intuitively obvious? Or, somewhat less silly, why not an Ayn Randian "greatest happiness for each individual"? How do you get there without intuition?

You can go some pretty scary and tragic places if you throw away intuition without replacing it with some damn solid logic.

rob
 
Posts: 20
Joined: Sun Nov 09, 2008 5:29 pm
Location: San Francisco

Re: Average vs Total. Death

Postby Arepo on 2008-11-13T13:03:00

rob wrote:Although in general I subscribe to this, I have yet to see a case made for utilitarianism that doesn't use intuition to justify its general "greatest happiness for all" concept. How is that better than having a "greatest suffering" goal, other than that the former seems intuitively obvious? Or, somewhat less silly, why not an Ayn Randian "greatest happiness for each individual"? How do you get there without intuition?

You can go some pretty scary and tragic places if you throw away intuition without replacing it with some damn solid logic.


This is part of what I'm hoping to answer in the utilitarianism: thread series. Two short answers, though:

1) If honest inquiry about ethics gives scary answers, then we need to get used to the scariness or give up inquiry altogether. To what end should we invent comforting fictions, if not the end prescribed by the very comforting fictions we're inventing?

2) One of the differences between HU and the alternatives is that, because HU is purely empirical, it doesn't need moral language. If we decide that 'morally good' means 'maximising happiness', we can simplify a sentence like 'You should do x' to something describing the universe: 'Doing x will maximise happiness.'

The difference between util and egoism is that egoism generally has a prescriptive component that HU lacks. HU ethics comes, IMO, from two premises:
i) Insofar as people wish to do something *unselfish, the only sense in which an action can be unselfish is HU.
ii) To some degree, most people wish to do unselfish things.

These two premises are both descriptions of the universe.

*(clarifying 'unselfish' is fiddly, but I'm trying to keep this short)

Egoism on the other hand, has only one premise (again, slightly simplified):
i) You should do selfish things.

This is sometimes placed together with
ii) People usually wish to do selfish things.

ii) is obviously true... but it's also got nothing to do with i). So i) has to stand alone. And like every other normative ethical system other than HU, with the possible exception of nihilism (which I don't believe is an exception either, but which I won't argue with here), i) adds something undefined, seemingly unnecessary, and completely unfalsifiable to our assumptions and related conclusions about the nature of the universe.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Average vs Total. Death

Postby rob on 2008-11-13T16:31:00

Arepo wrote:1) If honest inquiry about ethics gives scary answers, then we need to get used to the scariness or give up inquiry altogether. To what end should we invent comforting fictions, if not the end prescribed by the very comforting fictions we're inventing?

Well I am not concerned about "comfort". I'm concerned about people rationalizing away their sense of something being wrong because it doesn't fit a simplistic model. I gave an example of a genocidal act that could be justified by simplistic utilitarianism. I.e. killing a whole village is ok if it doesn't cause pain or suffering....since you are only counting "happiness" and "suffering" (which, unless you define them as I do aren't really scientific concepts), you might ignore the fact that the villagers really would like to avoid death.

This same sort of problem happened with "social darwinism". People started to get this scientific concept of "survival of the fittest", and then erroneously mixed it with their intuitive notion of "nature is always right", and found themselves justifying eugenics. I see a lot of similar logic here.

rob
 
Posts: 20
Joined: Sun Nov 09, 2008 5:29 pm
Location: San Francisco

Re: Average vs Total. Death

Postby Arepo on 2008-11-13T18:20:00

rob wrote:Well I am not concerned about "comfort". I'm concerned about people rationalizing away their sense of something being wrong because it doesn't fit a simplistic model.


Ok, but then why the concern? If everyone (including you) rationalises away their sense of things being wrong, then what's wrong with the subsequent apocalypse?

I gave an example of a genocidal act that could be justified by simplistic utilitarianism. I.e. killing a whole village is ok if it doesn't cause pain or suffering....since you are only counting "happiness" and "suffering" (which, unless you define them as I do aren't really scientific concepts), you might ignore the fact that the villagers really would like to avoid death.


Well no, not on the totalising hedonistic utilitarian account. Their total happiness is positive, so their existence is a good thing.

This same sort of problem happened with "social darwinism". People started to get this scientific concept of "survival of the fittest", and then erroneously mixed it with their intuitive notion of "nature is always right", and found themselves justifying eugenics. I see a lot of similar logic here.


I'm always wary of the phrase 'similar logic'. In classic predicate logic, which most arguments seem to be based in, there are only about 19 rules of inference ('about' in that IIRC sometimes conditional proof is also used, and in that they could probably be rotated a bit to provide a similar - and functionally identical - axiom list). Much argumentative logic is not only similar, but identical.

I think 'similar logic' really means 'similar premises'. But what you're talking about here couldn't be more different - one of the conclusions of Darwinism is that organisms change over time to suit their environment. Social Darwinism interprets this change as morally significant (again, adding something to the physical description of the universe), and then claims that we should get rid of organisms that we think are maladjusted.

But this is just bad logic - a straightforward error. If we allow people the starting premise that they can add 'shoulds' into discussions, they can reach any conclusion they please from any starting point. 'If a tennis ball falls to the floor when I drop it then tennis balls should fall to the floor when dropped' is similar logic to 'if tennis balls fall to the floor when dropped I should kill everyone in my neighbourhood' - it isn't similar logic to 'if gravity pulls on a ball and no hand is supporting it, it will fall to the floor'.

Condemning the principle of utility for the possibility of people using it to justify inflicting atrocities is similar logic to condemning people for dropping tennis balls for the same possibility.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Average vs Total. Death

Postby rob on 2008-11-14T04:01:00

Arepo wrote:
rob wrote:Well I am not concerned about "comfort". I'm concerned about people rationalizing away their sense of something being wrong because it doesn't fit a simplistic model.

Ok, but then why the concern? If everyone (including you) rationalises away their sense of things being wrong, then what's wrong with the subsequent apocalypse?
You'd have a point if it did include me. It doesn't though.
I gave an example of a genocidal act that could be justified by simplistic utilitarianism. I.e. killing a whole village is ok if it doesn't cause pain or suffering....since you are only counting "happiness" and "suffering" (which, unless you define them as I do aren't really scientific concepts), you might ignore the fact that the villagers really would like to avoid death.

Well no, not on the totalising hedonistic utilitarian account. Their total happiness is positive, so their existence is a good thing.

Assuming you calculate it the way you do. Others don't.
Condemning the principle of utility for the possibility of people using it to justify inflicting atrocities is similar logic to condemning people for dropping tennis balls for the same possibility.

I'm not condemning the principle of utility, I am saying "be careful". What I see here, a LOT, is inappropriate mixing of intuitive notions and scientific ones. Your insistence that consciousness is an important factor is a perfect example. You can't even define what it is, but you are mixing it into a supposedly scientific equation.

Another place I have seen huge errors in (black and white, overly simplistic) utilitarian calculations is trying to calculate animal happiness as equal to human (which I've seen here). That's just insane. I love my dog and all, but you know, a mosquito is an animal too. You sure you want to stick with that? My intuition tells me to prioritize the well being of a human more than a mosquito. If your logic says otherwise, thanks, but I'll stick with my intuition.

rob
 
Posts: 20
Joined: Sun Nov 09, 2008 5:29 pm
Location: San Francisco

Re: Average vs Total. Death

Postby TraderJoe on 2008-11-15T20:34:00

rob wrote:Another place I have seen huge errors in (black and white, overly simplistic) utilitarian calculations is trying to calculate animal happiness as equal to human (which I've seen here). That's just insane. I love my dog and all, but you know, a mosquito is an animal too. You sure you want to stick with that? My intuition tells me to prioritize the well being of a human more than a mosquito. If your logic says otherwise, thanks, but I'll stick with my intuition.

Any sentient being capable of suffering or experiencing discomfort ought to be considered equal to any other under utilitarianism, imo. However you determine its utility or happiness, I see no reason to award preference to a human over an animal which meets the criteria of the previous sentence. Were it not for strong societal pressure on me to choose otherwise, I would consider the life of a loved family pet to be more valuable than a hypothetical human in a vacuum [that is to say, one with nobody to miss him] provided both had equal life expectancies.
I want to believe in free will. Unfortunately, that's not my choice to make.
TraderJoe
 
Posts: 54
Joined: Mon Oct 06, 2008 10:05 pm

Re: Average vs Total

Postby TraderJoe on 2008-11-15T21:41:00

rob wrote:Say you've got an isolated village of people happily living their lives. No one else in the world knows of these people or will miss them if they are gone. Does that mean it is ok to painlessly murder all of them at once, with a shot to the head, while they are sleeping? No one will miss them, and no one will be made unhappy or suffer.

Only someone out of their mind, or supremely evil, would think so. (in my ever so humble opinion! ;) )


*Gnashes teeth*
I wrote a long, well-written [imo ;)] reply to this explaining how you are depriving them of their future utility, and their descendants of any happiness they might have. Simply put, provided several trivial criteria are met, then the only question is whether their and their descendants' future utility outweighs the happiness you will gain from killing them. Normally, this would be expected to be the case. If not, most utilitarian theories would say you ought to kill them, even though this goes against our common sense.
I want to believe in free will. Unfortunately, that's not my choice to make.
TraderJoe
 
Posts: 54
Joined: Mon Oct 06, 2008 10:05 pm

Re: Average vs Total. Death

Postby Arepo on 2008-11-16T00:42:00

rob wrote:You'd have a point if it did include me. It doesn't though.


The point is, people won't 'rationalise away' such concerns, unless someone somehow managed to prove that an apocalypse was fine and dandy. No-one has proved anything of the sort, and it's pretty much impossible to imagine how they might. I'm willing to say that if they did, I'd sooner accept the incontrovertible proof than offer my intuition as a rebuttal. Who cares? It's just not going to happen.

Assuming you calculate it the way you do. Others don't.


I don't know of any utilitarian who'd claim that it's not contrary to their ethical views that you kill a village of happy people. In fact, my views are the closest to that conclusion I've ever seen, and I certainly don't think so. Can you name someone who does?

I'm not condemning the principle of utility, I am saying "be careful". What I see here, a LOT, is inappropriate mixing of intuitive notions and scientific ones. Your insistence that consciousness is an important factor is a perfect example. You can't even define what it is, but you are mixing it into a supposedly scientific equation.


Using extreme caution to deal with extreme events seems perfectly sensible. Again, I don't really know of anyone who - in the real world - would disagree. When people pose extreme thought experiments, they usually do so to (try to) remove all the complicating factors of the real world that give us something to be careful about.

I don't think I've used the word 'scientific' to describe utilitarianism, incidentally (I have a very short memory so I could be wrong). As for consciousness, I've defined it as emotion. I'm willing to be persuaded that that's a counterproductive definition, but no-one's tried to persuade me of that yet. Emotion is something I don't see any desperate need to explain in order for my view to make sense - I clearly have it, other people clearly have it, and some members of the animal kingdom clearly have it, gradually becoming less clear as the animals in question become less complex.

If this is inappropriate, it's not more so than introducing the idea that unconscious/unemotional objects somehow lose out from failing to achieve something that we've anthropomorphised into a goal.

I really don't know what sort of prescription you're offering. How do we gauge the value of helping/harming a human vs the value of 'helping'/'harming' a robot if not via the emotion involved?

Another place I have seen huge errors in (black and white, overly simplistic) utilitarian calculations is trying to calculate animal happiness as equal to human (which I've seen here). That's just insane. I love my dog and all, but you know, a mosquito is an animal too. You sure you want to stick with that?


Stick with what? I don't think I've called animal happiness 'equal to' anything. If I have, I shouldn't have. But I still don't really know what you want to say here. Both dogs and mosquitoes are animals, sure. So are humans. So valuing the welfare of one kind of animal obviously doesn't entail valuing the welfare of all animals equally - if it did, you'd be the one defending mosquitoes.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Average vs Total

Postby Arepo on 2008-11-16T00:46:00

TraderJoe wrote:*Gnashes teeth*
I wrote a long, well-written [imo ;)] reply to this explaining how you are depriving them of their future utility, and their descendants of any happiness they might have.


Where is it? Did you manage to delete it somehow?

Simply put, provided several trivial criteria are met, then the only question is whether their and their descendants' future utility outweighs the happiness you will gain from killing them. Normally, this would be expected to be the case. If not, most utilitarian theories would say you ought to kill them, even though this goes against our common sense.


You also have to factor in how much suffering you cause by killing them. Maybe that's one of your trivial criteria, but in the real world it's certainly not trivial to kill large numbers of people without harming them! Oh, another thing is whether, using the resources that they would have, other sentient lifeforms will replace them, and if so how much happiness you'd expect them to have instead.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Average vs Total

Postby redcarded on 2008-11-16T12:20:00

Let's also not forget the unhappiness/fear/stress on society in general when people learn that there is a group of mad philosophers wiping out isolated villagers, or people with fewer than 50 'happiness points'. As a general rule, overall utility would be severely handicapped by constant fear as to whether you were happy enough, and by the effort of making sure that everyone in society knew you and would grieve at your death.
redcarded
 
Posts: 41
Joined: Thu Nov 13, 2008 11:34 pm
Location: Canberra, Australia

Re: Average vs Total

Postby faithlessgod on 2008-11-16T14:56:00

Hi Lord T, interesting discussion. Here is my two pennies' worth:

Lord Tonberry wrote:Hi first post, I'd like to ask where other utilitarians stand on the Average Utilitarianism vs Total Utilitarianism debate.

First, I am a Desire Utilitarian, which in one sentence says: "Morality is about using praise, condemnation, reward and punishment to increase desires that tend to fulfil desires and decrease desires that tend to thwart desires". This is a reductively natural and empirical ethical framework. Good, bad etc. are reduced to the already natural desires and their material effects on the fulfilment and thwarting of all desires.

Lord Tonberry wrote:Is it better to work for the average well-being or the total sum well-being?

I think I have issues with both AU and TU, but not the usual objections. Let's see...

Looking at Rob's example might help explain the DU approach to this. The desire in question here - however else it has been justified and whatever other desires exist - is a desire to painlessly kill all the members of this village. That is, the condition of fulfilment of this desire is that the state of affairs in which the village inhabitants are killed painlessly obtains. We evaluate a desire by its effect on all desires that could be affected. In this case the scenario is designed so that no desires other than those of the villagers are affected. We compare the material effects of the desire being fulfilled with those of it not being acted upon. In this case, when it is fulfilled, many other desires are thwarted - those of the soon-to-be-dead villagers. In the case of it not being enacted there is no ensuing desire thwarting. Therefore, specifically according to DU, holders of this desire are to be condemned and punished. To use the optional 'moral speak', one could say it is "morally bad" - which is the same as saying it increases the tendency for desires to be thwarted. (As an ethical naturalist I don't need to use 'moral speak'; it is a useful shorthand provided it does not cloud the issue.)

How does AU or TU affect this? In DU the numbers of holders of the relevant desires and the numbers of those who are affected are moot. The desire-desire interaction can be evaluated independently of demographic bias - an accident of history. By demographic bias I mean that allowing an action's status as right or wrong to vary according to the current distribution of desires, preferences, happiness or other utility renders such a system prone to such a bias; DU avoids this. Hence the use of AU or TU in those calculations does not eliminate such demographic bias - although one might ameliorate it more than the other.

So DU avoids the basic issue of AU vs TU (but, of course, in extreme and unusual circumstances the opposing numbers do matter; Rob's was not such an example).

Lord Tonberry wrote:I think most would lean to the average as it's the emotionally 'right' answer, since total utilitarianism would lead to what many would consider unsavoury ethics.

I am always dubious of intuitionist and emotivist arguments, as history has repeatedly shown that our intuitions and emotions have been mistaken far too often.

Lord Tonberry wrote: Most famous is the 'repugnant conclusion'. Maybe instead of trying to improve the lives of citizens, governments should just simply encourage them to reproduce, possibly even mandate it?

But not at the cost of tending to thwart other desires, even considering only welfare interests (which I use as a naturalistic basis for 'rights'); hence I do not need an AU argument or an emotive reaction to repel such a TU argument.

Lord Tonberry wrote: Also, despite the most famous utilitarians being against meat-eating, it could be argued from a TU view that if there wasn't a meat industry those animals would never have been born.

Hence this can be used neither for nor against TU or AU, so it is, I think, misleading or irrelevant.

Lord Tonberry wrote:Despite those ethical dilemmas, I'm a total utilitarian (and indeed a vegetarian). I don't understand what's so special about an individual being happy as opposed to that happiness being spread across many individuals. Many of the arguments for AU just strike me as similar to those arguments against utilitarianism in general - they appeal to the gut. To me the conclusion of AU, where it would be better to have one extremely happy person alive as opposed to many simply very happy people, is far more repugnant.

Whilst I am beginning to think the AU vs TU debate is a red herring - at least for DU - can I add that there is more than one way to aggregate/average - mean, mode, median etc. - so your conclusion does not follow from every version of AU, only from the one you have chosen based on a particular average function. How do we determine which one is correct?
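
A quick Python sketch of this point (the happiness distributions are invented for illustration): two populations can be ranked differently depending on whether 'average' means the mean, the median or the mode.

    from statistics import mean, median, mode

    # Invented happiness distributions, purely for illustration.
    pop_a = [1, 1, 1, 100]    # mostly miserable, one ecstatic outlier
    pop_b = [20, 20, 20, 20]  # uniformly moderately happy

    assert mean(pop_a) > mean(pop_b)       # 25.75 vs 20: mean-based AU prefers A
    assert median(pop_a) < median(pop_b)   # 1 vs 20: median-based AU prefers B
    assert mode(pop_a) < mode(pop_b)       # 1 vs 20: mode-based AU prefers B
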
Do not sacrifice truth on the altar of comfort
faithlessgod
 
Posts: 160
Joined: Fri Nov 07, 2008 2:04 am
Location: Brighton, UK

Re: Average vs Total. Death

Postby RyanCarey on 2008-11-17T11:59:00

Good point, redcarded. That's the crux of the matter as far as I'm concerned. Killing sad people isn't going to help our society to be happy. It reminds me of a horror episode of The Simpsons in which Ned Flanders is a godly, Big Brother kind of figure who punishes those who aren't happy. It's silly.

TraderJoe wrote:
rob wrote:Another place I have seen huge errors in (black and white, overly simplistic) utilitarian calculations is trying to calculate animal happiness as equal to human (which I've seen here). That's just insane. I love my dog and all, but you know, a mosquito is an animal too. You sure you want to stick with that? My intuition tells me to prioritize the well being of a human more than a mosquito. If your logic says otherwise, thanks, but I'll stick with my intuition.

Any sentient being capable of suffering or experiencing discomfort ought to be considered equal to any other under utilitarianism, imo. However you determine its utility or happiness, I see no reason to award preference to a human over an animal which meets the criteria of the previous sentence. Were it not for strong societal pressure on me to choose otherwise, I would consider the life of a loved family pet to be more valuable than a hypothetical human in a vacuum [that is to say, one with nobody to miss him] provided both had equal life expectancies.


I think you both write well, and I think you're both right and wrong. Utilitarianism doesn't apply only to some species and not others. But it doesn't disregard the differences in the ways different animals experience pain either. If we read a famous quote of Jeremy Bentham,
Jeremy Bentham wrote:... a full-grown horse or dog is beyond comparison a more rational, as well as a more conversable animal, than an infant of a day, or a week, or even a month, old. But suppose they were otherwise, what would it avail? The question is not, Can they reason? nor Can they talk? but, Can they suffer?


If I could add some writing from Singer's "Equality for animals" chapter,
Peter Singer wrote:If a being suffers, there can be no moral justification for refusing to take that suffering into consideration.

However, we need to temper this "can they suffer" mentality with science. How much do they suffer? To quote Singer again,
Peter Singer wrote:If, for instance, we decided to perform extremely painful or lethal scientific experiments on normal adult humans, kidnapped at random from public parks for this purpose, adults who entered parks would become fearful that they would be kidnapped. The resultant terror would be a form of suffering additional to the pain of the experiment. The same experiments performed on nonhuman animals would cause less suffering since the animals would not have the anticipatory dread of being kidnapped and experimented upon. This does not mean, of course, that it would be right to perform the experiment on animals, but only that there is a reason, which is not speciesist, for preferring to use animals rather than normal adult humans, if the experiment is to be done at all.

I agree that this must be the ethical system with which we approach animals. It's about what it feels like to be on the end of our actions.
You can read my personal blog here: CareyRyan.com
RyanCarey
 
Posts: 682
Joined: Sun Oct 05, 2008 1:01 am
Location: Melbourne, Australia

Re: Average vs Total

Postby TraderJoe on 2008-11-18T21:39:00

I agree heartily with the quotations from the post above. Also worth bearing in mind is that humans suffer more from losses of other humans than animals do - at least, I'd imagine that I would suffer more if my dad were to die than my hypothetical dog would if his mother was run over. So I tend to prefer the vacuum idea - a man and a dog living in a bubble somewhere, consuming zero resources, each with equal life expectancy...haven't actually seen it before, but it's one I personally use when trying to advocate animal rights.

Arepo wrote:
TraderJoe wrote:*Gnashes teeth*
I wrote a long, well-written [imo ;)] reply to this explaining how you are depriving them of their future utility, and their descendants of any happiness they might have.


Where is it? Did you manage to delete it somehow?

Yes. Smoothly does it. I should have learned by now to type long posts in Word, but mistakes are made every now and then...

Arepo wrote:
TraderJoe wrote:Simply put, provided several trivial criteria are met, then the only question is whether their and their descendants' future utility outweighs the happiness you will gain from killing them. Normally, this would be expected to be the case. If not, most utilitarian theories would say you ought to kill them, even though this goes against our common sense.


You also have to factor in how much suffering you cause by killing them. Maybe that's one of your trivial criteria, but in the real world it's certainly not trivial to kill large numbers of people without harming them! Oh, another thing is whether, using the resources that they would have, other sentient lifeforms will replace them, and if so how much happiness you'd expect them to have instead.

That was indeed one of my trivial criteria. I think it had already been said that they would all be shot in the head while asleep, thereby painlessly ending their lives. The issue of resources is also a criterion I'd considered implicit, though I think in retrospect that I should have spelled this bit out - my bad for failing to do so.
I want to believe in free will. Unfortunately, that's not my choice to make.
TraderJoe
 
Posts: 54
Joined: Mon Oct 06, 2008 10:05 pm

Re: Average vs Total

Postby DanielLC on 2009-01-06T20:13:00

I'm in favor of working for total well-being, though I'm not totally convinced. The reason I think AU has merit is related to the anthropic principle and to a premise of the Doomsday argument. It's rather confusing. I will attempt to explain it with a vague rant that is almost, but not quite, entirely nonsensical.

You are exactly one person. You cannot be two people, you cannot be nobody. We want to maximize your expected utility. Increasing the number of people would be pointless, as you'll be exactly one of them no matter how many there are. We can't just work on one person's utility, as you could be anyone. We must therefore work to increase the average.
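
A toy Python sketch of the intuition being described here (the populations and happiness numbers are invented): if 'you' are equally likely to be any one member of the population, your expected happiness is the population's average, so adding extra people doesn't raise it.

    def expected_utility_if_you_are_a_random_member(pop):
        # Each member is equally likely to be "you", so expected utility is the mean.
        return sum(pop) / len(pop)

    small_world = [60, 80]                  # total 140, average 70
    large_world = [50, 50, 50, 50, 50, 50]  # total 300, average 50

    # The large world wins on total utility, but a randomly placed person
    # expects less happiness there than in the small world.
    assert sum(large_world) > sum(small_world)
    assert (expected_utility_if_you_are_a_random_member(small_world)
            > expected_utility_if_you_are_a_random_member(large_world))
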
Consequentialism: The belief that doing the right thing makes the world a better place.

DanielLC
 
Posts: 703
Joined: Fri Oct 10, 2008 4:29 pm

Re: Average vs Total

Postby faithlessgod on 2009-01-06T23:11:00

Now that this thread has woken up (I hope the rest of the forum does too), I might as well clarify why I think desire utilitarianism (DU) does not suffer from the problems of total and average utility. DU is consequentialist in that it seeks to promote value, and utilitarian in that it counts everyone as one and no more than one in the promotion of value, but it does not do this by directly maximising utility (however defined). In DU, utility would be conceived as desire fulfilment, but DU is not desire-fulfilment act utilitarianism (which does maximise the utility of desire fulfilment). Instead the evaluation focus is on desires, not acts (or rules). It seeks to promote desires that tend to fulfil other desires and inhibit desires that tend to thwart other desires. This could be considered a first-order derivative of utility - showing the best direction to go to promote value, or a method of directly reducing friction between clashing desires, the reduction of friction thereby freeing up resources to help fulfil desires and so promoting value.

I have no problem considering such (extrinsic) value to be neither fungible nor commensurable nor even comparable. Indeed I would argue that explicitly measuring such a utility is an impractical challenge; it is indeterminate, as Mackie says. Hence the issue of average versus total utility does not apply to DU. Indeed I think the debate here assumes that utility is determinate and at least commensurable if not fungible (fungibility being a particular feature of average versus total util debates, I think?), and this I dispute (though maybe not in this thread) - it is one of the reasons I prefer DU.
Do not sacrifice truth on the altar of comfort
faithlessgod
 
Posts: 160
Joined: Fri Nov 07, 2008 2:04 am
Location: Brighton, UK

