Utilitarianism/consequentialism


Utilitarianism/consequentialism

Postby Ubuntu on 2010-09-21T16:36:00

Is utilitarianism synonymous with consequentialism? What other consequentialist theories are there?


Re: Utilitarianism/consequentialism

Postby RyanCarey on 2010-09-22T08:42:00

Utilitarianism is a type of consequentialism. Probably most consequentialists are utilitarians.

Consequentialism means judging acts by their consequences.

Utilitarianism means judging acts by their utility (i.e. usefulness). Utilitarians tend to restrict themselves to one type of consequence.
> Classical utilitarians believe in improving wellbeing.
> Preference utilitarians believe in satisfying people's preferences.
> Negative utilitarians believe in reducing suffering.

Consequentialism is a broader school of thought. Here are some varieties:
> Egoists believe in improving their own wellbeing and are uninterested in others' wellbeing.
> Prioritarians believe in promoting wellbeing, giving special consideration to worse off individuals. Prioritarians combine concern for wellbeing with concepts of justice.
> Other consequentialists may mix and match from a wide range of outcomes. They may value wellbeing. They may value justice. Perhaps they regard life as sacrosanct, to be preserved at all costs. Etc.

Re: Utilitarianism/consequentialism

Postby Ubuntu on 2010-09-22T18:09:00

What is the difference between saying that an act is moral according to its usefulness in achieving a desirable consequence and saying that an act is moral based on its consequences? I believe that happiness is the only thing that is intrinsically valuable and suffering is the only thing that is intrinsically disvaluable, so I view an action as ethical only if it increases happiness or minimizes stress. What am I?

I also believe that while increasing happiness is morally desirable, it isn't necessary; only minimizing suffering is morally necessary (something is only 'necessary' if it's needed to avoid a negative consequence; being unconscious is of neutral value, so even if it's not desirable or good like happiness is, a universe without sentient beings would not be bad or disvaluable). Am I a classical utilitarian or a negative utilitarian or what?

I don't think negative hedonism is a drastically different idea from the hedonism of classical utilitarians. I think all hedonist views are based on empathy, but stress clearly triggers a greater empathetic response than happiness does. Negative hedonism (I think this is a better term, since not all utilitarians are hedonists) is only really relevant when it comes to creating or ending sentient life; otherwise, what increases a person's happiness will necessarily minimize their stress and vice versa. Of course we should prioritize giving food to hungry children over giving toys to children who already have decent lives, but even a 'positive' hedonist could argue that doing so would cause the hungry child more happiness than giving a toy to the already happy child would.


Re: Utilitarianism/consequentialism

Postby DanielLC on 2010-09-22T20:44:00

I think the difference is that utilitarianism has additional axioms. According to utilitarianism:
An action is equally moral no matter who takes it. This gets rid of egoism and altruism, where it makes a difference whether the act helps you or someone else.
Morality adds linearly: if it's good to have a life barely worth living, and it's good to have extravagant wealth, then having one person in each state is exactly as good as the values of those two lives put together. This gets rid of prioritarianism.

That doesn't seem quite right, because average utilitarianism isn't exactly linear. Putting together someone whose life is barely worth living and someone with extravagant wealth gives a utility equal to the average of the two, not the sum.
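A rough numerical sketch of the contrast, with made-up welfare numbers (purely illustrative):

    # Purely illustrative: represent each life by a single welfare number.
    barely_worth_living = 1
    extravagant_wealth = 100

    world = [barely_worth_living, extravagant_wealth]

    total_view = sum(world)                 # 101: linear, just the two lives' values added together
    average_view = sum(world) / len(world)  # 50.5: not linear in that sense
    print(total_view, average_view)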

Re: Utilitarianism/consequentialism

Postby Arepo on 2010-09-22T21:36:00

Probably most consequentialists are utilitarians.


In my experience this isn't true. Most consequentialists who apply a consistent algorithm to their morality are utilitarians (I'd guess), but there are several who just describe themselves as being interested in consequences without wanting to commit to some of the more counterintuitive conclusions of util.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.

Re: Utilitarianism/consequentialism

Postby Ubuntu on 2010-09-22T23:54:00

An action is equally moral no matter who takes it: This gets rid of egoism and altruism, where it's different if you do something to help you or someone else does.


I don't understand.

Morality adds linearly: if it's good to have a life barely worth living, and it's good to have extravagant wealth, having a person with each would be as good as both of those put together. This gets rid of Prioritarianism.


I don't think I understand this either :(

That doesn't seem quite right because average utilitarianism isn't exactly linear. Having someone who's life is barely worth living on its own and having someone with extravagant wealth put together has a utility equal to the average of the two.


I strongly disagree with the idea that one person's happiness somehow 'balances' out another person's suffering.

In my experience this isn't true. Most consequentialists who apply a consistent algorithm to their morality are utilitarians (I'd guess), but there are several who just describe themselves as being interested in consequences without wanting to commit to some of the more counterintuitive conclusions of util.


What counter-intuitive conclusions does utilitarianism necessarily lead to that consequentialism does not?

Thanks for the replies, by the way.

edit : after researching a bit more on prioritarianism, I think it's the school I most identify with (prioritarians are hedonists, right?).


Re: Utilitarianism/consequentialism

Postby Arepo on 2010-09-23T19:27:00

Ubuntu wrote:I don't understand.

I don't think I understand this either :(


I found Daniel's phrasing confusing here, but he's just describing the claims utilitarianism makes that distinguish it from egoism and altruism in the first case, and prioritarianism in the second.

I strongly disagree with the idea that one person's happiness somehow 'balances' out another person's suffering.


As far as I know everyone here disagrees with the averaging method (which you could in theory apply to any type of consequentialism), though you might find the more popular summing method equally repugnant.

What counter-intuitive conclusions does utilitarianism necessarily lead to that consequentialism does not?


The point is that utilitarianism gives you, in theory, an algorithm. You feed data (eg circumstances) into the algorithm and it produces an answer. Consequentialism is just an umbrella name for any moral view that concerns itself exclusively with consequences, so it isn't necessarily algorithmic or even consistent.

E.g. while utilitarians must, all things being equal, support torturing a baby for eternity if it made the rest of humankind eternally happy (a classic thought-experiment supposed to make util sound unpalatable - though all things are inevitably not equal), a non-specific consequentialist might just assert 'that's an obviously awful consequence, which we should try to avoid'.
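To make the 'algorithm' point concrete, here is a minimal sketch of a totalising-utilitarian decision procedure; the action names and welfare numbers are invented purely for illustration:

    # Hypothetical actions mapped to the welfare change they'd cause each affected person.
    # The names and numbers are invented; the point is only that the procedure is mechanical.
    actions = {
        "donate": [+5, +5, -1],      # two people helped, a small cost to the agent
        "do_nothing": [0, 0, 0],
        "keep_money": [0, 0, +2],
    }

    def total_utility(consequences):
        return sum(consequences)

    best = max(actions, key=lambda a: total_utility(actions[a]))
    print(best)  # 'donate': the algorithm just returns whatever maximises the total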

edit : after researching a bit more on prioritarianism, I think it's the school I most identify with (prioritarians are hedonists, right?).


Like utilitarianism, prioritarianism describes what you want to do with utility, not (necessarily) what utility is. I would guess prioritarians are fairly split between hedonists and preferencists since they're similar to utilitarians in many other comparable respects.

Thanks for the replies, by the way.


No worries.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.

Re: Utilitarianism/consequentialism

Postby Ubuntu on 2010-09-23T23:52:00

As far as I know everyone here disagrees with the averaging method (which you could in theory apply to any type of consequentialism), though you might find the more popular summing method equally repugnant.


So what distinguishes prioritarianism from utilitarianism?

The point is that utilitarianism gives you, in theory, an algorithm.


You mean the idea of 'net happiness' or 'net suffering'?

E.g. while utilitarians must, all things being equal, support torturing a baby for eternity if it made the rest of humankind eternally happy (a classic thought-experiment supposed to make util sound unpalatable - though all things are inevitably not equal), a non-specific consequentialist might just assert 'that's an obviously awful consequence, which we should try to avoid'.


This is what has always turned me off from utilitarianism (this and the idea of adding or subtracting 'net pleasure/stress' rather than individual pleasure/stress). I believe that happiness and suffering are obviously asymmetrical in value and that causing someone to suffer is only necessary if it prevents them or other people from suffering to an even greater extent. I don't understand the idea that happiness is 'necessary' (not just desirable but a must) or that pursuing a positive is just as valuable as avoiding a negative. Hedonistic prioritarianism seems to be most in sync with my views, but I will continue to think of myself as a hedonist if anyone asks and I don't have time to go into detail.


Re: Utilitarianism/consequentialism

Postby DanielLC on 2010-09-24T04:31:00

So what distinguishes prioritarianism from utilitarianism?


In Prioritarianism, it's okay to cause a net decrease in happiness if you spread it out more.
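A toy sketch of one common way to model that trade-off; the square-root priority weighting and the numbers are just illustrative assumptions, not a canonical prioritarian formula:

    import math

    # Two distributions of wellbeing over the same two people; the second has a
    # slightly lower total but is spread out more evenly.
    unequal = [100, 1]
    equal = [49, 49]

    def total(welfares):
        return sum(welfares)

    def prioritarian(welfares):
        # Weight each person's wellbeing by a concave function so that gains to
        # the worse off count for more; sqrt is just one illustrative choice.
        return sum(math.sqrt(w) for w in welfares)

    print(total(unequal), total(equal))                # 101 vs 98: the plain total prefers the unequal world
    print(prioritarian(unequal), prioritarian(equal))  # 11.0 vs 14.0: the prioritarian score prefers the equal one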

causing someone to suffer is only necessary if it prevents them or other people from suffering to an even greater extent.


So, you'd let me punch you in the face to keep me from kicking you in the shins, but not to, uh, what's a generic example of positive utility?

Would you work to get money to go to a movie? The former causes suffering (assuming you don't particularly like your job), but the latter doesn't prevent it.

Also, if you think you need to minimize suffering before you work on happiness, you're essentially a negative utilitarian. It's always possible to sacrifice enough pleasure to decrease a little bit more pain. At least, until you take the idea to its logical conclusion.

Re: Utilitarianism/consequentialism

Postby Ubuntu on 2010-09-24T17:33:00

To my understanding, negative utilitarians argue that there is no moral reason to increase happiness for its own sake. I disagree with them. I think that maximizing as much happiness as possible is morally desirable, but only minimizing stress is morally necessary (stress is of negative value, happiness is of positive value, and being emotionless is of neutral value; it's not desirable or bad. Something is only necessary if it's needed to avoid a negative outcome; that's what 'necessary' means).

Let's say a magic genie came up to you and offered you a bet: if he flips a coin and it lands on heads, you suffer the worst, most unimaginable stress that the human nervous system is capable of, and you will feel this stress every waking moment for the rest of your life (think being clinically depressed mixed with what it must feel like to starve or be in the final stage of AIDS or cancer), but if it lands on tails, you will experience the greatest euphoria and joy that any drug could possibly produce. Are you willing to take this risk? You might want to have a billion dollars, but if you already have enough money to pay for rent/mortgage, food, clothes, entertainment etc., do you need a billion dollars?

Not only is happiness not necessary, but it's not as practical a moral goal as minimizing suffering is, since everyone is eventually going to die and all happiness will end. If maximizing happiness is necessary in the same way that avoiding stress is, we should be constantly grieving for every possible sentient being that could have come into existence but never will.

So, you'd let me punch you in the face to keep me from kicking you in the shins, but not to, uh, what's a generic example of positive utility?


I might. The more pleasure someone experiences, the less sensitive they are to stress (they actually did a study on this: being exposed to high levels of stress on a continual basis conditions the brain to respond more easily to stress). Pleasure is not only intrinsically valuable, it's also instrumentally valuable in minimizing stress, so whatever pleasure compensates for being punched in the face or working overtime at a job I dislike would prevent a lot of future stress even if it wasn't necessary to avoid a specific, negative outcome. I would not let you kick me in the shin or work overtime if the stress that it caused was unbearable. There is such a thing as 'unbearable' stress; there is no such thing as too little happiness unless the absence of happiness is stressful.


Re: Utilitarianism/consequentialism

Postby Arepo on 2010-09-24T23:58:00

Ubuntu wrote: If maximizing happiness is necessary in the same way that avoiding stress is, we should be constantly grieving for every possible sentient being that could have come into existence but never will.


A lot of people have this funny idea about util that it demands you get emotionally invested in it. But listen to that sentence - the greatest happiness principle demands that one should grieve constantly?

Surely that can't be right - better to say that (if one is a totalising utilitarian) we will try to maximise happiness, but also aim to be as sanguine as possible about whatever result we actually end up with.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.

Re: Utilitarianism/consequentialism

Postby DanielLC on 2010-09-25T02:08:00

If happiness doesn't have enough value to outweigh sadness, it will never be worth acting to maximize it. You can always do something that has some tiny chance of making a tiny decrease in sadness. Even the thought you put into the idea that happiness exists is a waste of energy.

That's just how economics works. It's like how you can't find two kinds of currency without an exchange rate, even if one's for some obscure third-world country and the other's for an MMORPG.

I figure the fact that people do respond to happiness to be proof that it does outweigh sadness.

Re: Utilitarianism/consequentialism

Postby Ubuntu on 2010-09-25T23:28:00

If happiness doesn't have enough value to outweigh sadness, it will never be worth acting to maximize it.


I'm not sure I understand.

You can always do something that has some tiny chance of making a tiny decrease in sadness.

Did you mean increase?


Even the thought you put into the idea that happiness exists is a waste of energy.


But thinking about happiness doesn't cause me distress.

That's just how economics works. It's like how you can't find two kinds of currency without an exchange rate, even if one's for some obscure third-world country and the other's for an MMORPG.


I don't follow (maybe because I'm clueless about economics).

I figure the fact that people do respond to happiness to be proof that it does outweigh sadness.


I don't understand the conflict in saying that happiness is desirable but unnecessary. Do you believe that we need to experience happiness? If so, why do we need to experience happiness any more than cars, laptops, stones etc. do? Surely you recognize that people in unimaginable agony need to be free from it.

To help put my world view in perspective, I'm an anti-natalist/abolitionist. I believe that it's morally irresponsible to bring beings into the world who might suffer great pain, but my hope is that science will (after coming up with a solution for global warming) develop the ability to genetically engineer human beings who are hyper-empathetic and in a constant state of euphoria and love, incapable of any form of non-trivial stress. I would much prefer a universe with many happy sentient beings, but there would be nothing wrong with a universe where no consciousness existed.

Considering its aggregate nature, why can't prioritarianism be considered a branch of utilitarianism, like NU is?


Re: Utilitarianism/consequentialism

Postby DanielLC on 2010-09-26T03:30:00

non-trivial stress


There are two possibilities here:
1. You believe that it just takes a whole lot of happiness to be better than sadness, so lives in euphoria that experience tiny stress would still be worth living, or
2. You believe that trivial stress is its own class, separate from non-trivial stress.

The first one essentially means that there was a misunderstanding and the entire argument was pointless.

With the second one, the argument still applies. If you have humans around, there's some chance they'll start causing stress again, or you may have just misunderstood stress from the beginning. And this is in addition to new paradoxes. You may also consider trivial probabilities of stress like trivial stress, which also has its own paradoxes.

If so, why do we need to experience happiness any more than cars, laptops, stones etc.

What makes you think they don't need to? I think it's just as important for a stone to experience happiness as for a human. It's just a lot harder to make the stone succeed, so I worry about it less.

Re: Utilitarianism/consequentialism

Postby Ubuntu on 2010-09-26T23:23:00

DanielLC wrote:
non-trivial stress


There are two possibilities here:
1. You believe that it just takes a whole lot of happiness to be better than sadness, so lives in euphoria that experience tiny stress would still be worth living, or
2. You believe that trivial stress is its own class, separate from non-trivial stress.

The first one essentially means that there was a misunderstanding and the entire argument was pointless.

With the second one, the argument still applies. If you have humans around, there's some chance they'll start causing stress again, or you may have just misunderstood stress from the beginning. And this is in addition to new paradoxes. You may also consider trivial probabilities of stress like trivial stress, which also has its own paradoxes.

If so, why do we need to experience happiness any more than cars, laptops, stones etc.

What makes you think they don't need to? I think it's just as important for a stone to experience happiness as for a human. It's just a lot harder to make the stone succeed, so I worry about it less.


I think I've realized how arbitrary my drawing a line between unbearable stress and manageable stress is (although I still think I can discount negligible stress); I go over this all the time in my mind, so who knows what I'll believe tomorrow. I can safely say that I will never accept the idea that any amount of happiness is worth the unbearable misery caused by Hitler's Holocaust or the suffering of non-human animals in factory farms and science laboratories, starving children, people with extreme clinical depression etc. Would it be consistent for me to say that no amount of happiness is worth even manageable, non-trivial stress, or could I get away with your first option (needing a high amount of happiness to outweigh a much lower amount of stress)? Either way, I think I could still accept being punched in the nose for a million dollars, or any pleasure that is instrumentally useful in decreasing stress even if its intrinsic value wouldn't be worth the stress it decreases. A million dollars would alleviate the stress of work, debt etc.

I don't see how you can believe that stones need to be happy, but I can't argue against it on the basis that it seems impractical or counter-intuitive. I truly believe that science can eliminate all non-trivial stress and that happiness will be the only emotional experience in the universe; I hope for this more than anything in the world. It may be less practical to completely eliminate the suffering of 'wild' animals, but I'm sure it can be done for all humans.


Re: Utilitarianism/consequentialism

Postby DanielLC on 2010-09-27T04:53:00

The problem is, unless you are totally certain, there is a chance that there will be another Holocaust. The only way to eliminate that chance is to eliminate life.

This reminds me of the view that negligible amounts of utility can never outweigh significant amounts. For instance, it won't be worth being tortured to save 3^^^3 people from getting a tiny speck of dust in their eye. http://lesswrong.com/lw/kn/torture_vs_dust_specks/ I believe this is an artifact of people's inability to comprehend that many people. It's believed you can only imagine about 150 people. You're not imagining 3^^^3 people, you're imagining 150, and it's not worth torture to save 150 people from dust specks.
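For anyone unfamiliar with the up-arrow notation, a small sketch of how fast it grows (only the tiny cases are actually computable):

    def up_arrow(a, n, b):
        # Knuth's up-arrow notation, a ^...^ b with n arrows; one arrow is ordinary exponentiation.
        if n == 1:
            return a ** b
        if b == 0:
            return 1
        return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

    print(up_arrow(3, 1, 3))  # 3^3  = 27
    print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
    # 3^^4 = 3^7625597484987 already has trillions of digits, and 3^^^3 = 3^^(3^^3)
    # is utterly beyond computing - which is the point of the thought experiment.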

I think it's not so much that I believe stones need to be happy as that I don't believe in "need". Something is either better or worse. "Need" implies its absence is unthinkable. But pain certainly can happen; it's just better if it doesn't.

Re: Utilitarianism/consequentialism

Postby Daniel Dorado on 2010-09-28T04:09:00

Hi Ubuntu. I think you will find this article about prioritarianism and egalitarianism useful: http://www.law.upenn.edu/academics/inst ... emkin2.pdf

IMO utilitarianism, prioritarianism and egalitarianism all have good points, and I'm pretty unsure about which of those possibilities to choose. So I'm just a hedonist. I think in most real and possible cases, the difference is small.

On the other hand, utilitarianism tends to be more counter-intuitive than prioritarianism and egalitarianism, so perhaps it's better to promote a prioritarian or egalitarian approach in order to get a world with more happiness and less suffering. Counter-intuitive ideas can lead a lot of people to reject consequentialism as a whole.

Re: Utilitarianism/consequentialism

Postby Ubuntu on 2010-09-28T19:33:00

DanielLC,

Maybe, but most people will continue to reproduce, and there is no way a project to eliminate all sentient life could be undertaken without causing severe stress, so I view paradise engineering as the best way to ensure that there won't be another Holocaust.

'Saving' 6 billion people from getting a speck in their eye would not justify torturing someone, since torturing someone causes far more stress than simply having a speck in your eye.


I agree that being happy is better than being unconscious but there's nothing wrong with being unconscious. I don't see how you can deny that avoiding a negative is a 'need' and achieving a positive is just desirable.

DanielD,

Thanks for the link. I'm still completely confused by prioritarianism.

To further sharpen the difference between utilitarianism and prioritarianism, imagine a two-person society: its only members are Jim and Pam. Jim has an extremely high level of well-being, is rich, and lives a blissed-out existence. Pam, by contrast, has an extremely low level of well-being, is in extreme poverty, living a hellish existence. Now imagine that we have some free resources (say, $10,000) that we may distribute to the members of this society as we see fit. Under normal circumstances, due to the diminishing marginal utility of money, the $10,000 will generate more well-being for Pam than it will for Jim.

Thus, under normal circumstances, a utilitarian would recommend giving the resources to Pam. However, imagine that Jim, for whatever reason, although already filthy rich and very well-off, would gain just as much well-being by receiving the $10,000 as would Pam. Now, since it makes no difference in terms of overall well-being who gets the $10,000, utilitarians would say it makes no difference at all who gets the $10,000. Prioritarians, by contrast, would say that it is better to benefit Pam, the worse off individual.
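A toy numerical rendering of the quoted Jim/Pam case; the $10,000 comes from the quote, but the log money-to-wellbeing curve, the square-root priority weighting and the other numbers are my own illustrative assumptions:

    import math

    jim_wealth, pam_wealth = 1_000_000, 1_000
    grant = 10_000

    def wellbeing(wealth):
        # Illustrative diminishing marginal utility of money.
        return math.log(wealth)

    # Under normal circumstances the $10,000 raises Pam's wellbeing far more than Jim's:
    print(wellbeing(jim_wealth + grant) - wellbeing(jim_wealth))  # ~0.01
    print(wellbeing(pam_wealth + grant) - wellbeing(pam_wealth))  # ~2.40

    # The quote then stipulates equal wellbeing gains. A prioritarian still favours Pam,
    # because a concave priority weighting makes a gain at a low level count for more:
    def priority(level):
        return math.sqrt(level)

    gain = 5                        # the stipulated equal gain in raw wellbeing (arbitrary units)
    jim_level, pam_level = 100, 10  # Jim is very well off, Pam is badly off
    print(priority(jim_level + gain) - priority(jim_level))  # ~0.25
    print(priority(pam_level + gain) - priority(pam_level))  # ~0.71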


If Pam is the worse off individual, how can giving her the money not be more beneficial than giving it to Jim? If Jim would benefit from it more, then how can Pam possibly be considered the worse off individual? How do prioritarians define 'benefit' or 'well-being'? Could it be possible that Jim will 'benefit' more despite Pam suffering more if she doesn't have it?


Re: Utilitarianism/consequentialism

Postby Ubuntu on 2010-10-15T00:36:00

Not to rehash an old thread but, in reply to the specks argument, the stress of having a speck in your eye would be no more or less intense based on how often you've dealt with it before, even if you've dealt with it every single day of your life prior. Most people would opt for one speck of dust in their eye every day for the rest of their life over being brutally tortured just one time. Maybe I'm missing something but, even if you do accept the idea of aggregate utility, I don't see how a speck of dust in the eye of 6 billion people compares to the brutal torture of one.


Re: Utilitarianism/consequentialism

Postby DanielLC on 2010-10-15T03:53:00

6 billion was someone misreading something. The figure I gave was 3^^^3. 6 billion people being brutally tortured for 6 billion years doesn't compare to the aggregate disutility of 3^^^3 dust specks.

Re: Utilitarianism/consequentialism

Postby Ubuntu on 2010-10-18T20:07:00

I still don't see how it follows if you're adding up trivial experiences that occur in isolation.

Edit: Can prioritarianism be considered a branch within negative utilitarianism? Does negative utilitarianism necessarily deny that happiness has any intrinsic, positive value, or does it only hold that our first obligation is to minimize suffering?


Re: Utilitarianism/consequentialism

Postby Richard Pearce on 2010-10-22T12:02:00

As far as I know, negative utilitarianism does deny that happiness has intrinsic value. Nevertheless, since mammals feel either pain or happiness, negative utilitarians do consider happiness. For example, negative utilitarianism would advocate letting farmed chickens forage, so that they would avoid boredom. It would also advocate encouraging people and other animals to spend time with family and friends or whatever brings them happiness, so that they would avoid loneliness or boredom. So the negative utilitarian ethic is not as morose as one would imagine. For as long as there are 'normal', genetically unmodified mammals, happiness will always be deeply tied to negative utilitarianism. It could not be otherwise, because if a mammal goes for long enough without feeling happiness, it will become depressed. So happiness must be promoted in order for suffering to be minimised.
It is important, though, that negative utilitarianism does not give value to happiness, because if it did, then negative utilitarians would advocate reproduction and 'saving' non-sentient entities into becoming living beings that feel both pleasure and pain. Even the utilitarian Peter Singer, however, thinks it an absurd notion to want to 'save' non-sentient entities into becoming living beings.
The question of whether negative utilitarians give happiness inherent value is a useful one for Ubuntu to bring up, since otherwise someone new to negative utilitarianism might think that negative utilitarians are grey cardigans who only think about suffering. (Nor do I give grey inherent negative value. It can be elegant in the evening.) No: negative utilitarians must consider happiness in their consideration of the living, because the absence of happiness in a mammal will result in suffering.


Re: Utilitarianism/consequentialism

Postby Arepo on 2010-10-22T16:51:00

Richard Pearce wrote:As far as I know, negative utilitarianism does deny that happiness has intrinsic value.


Strictly speaking it can grant happiness value, so long as that value is infinitesimal next to suffering's (negative) value. In practice, the only person I know of who self-identifies as an NU is David Pearce, who seems very much concerned with promoting happiness. I don't really know how he weighs it compared to suffering reduction. Most of what I've read by him fits your idea that removing suffering without just killing someone equates to making them happier - but I doubt he'd say that's necessarily true.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.

Re: Utilitarianism/consequentialism

Postby Ubuntu on 2010-10-25T18:23:00

Even the utilitarian Peter Singer, however, thinks it an absurd notion to want to 'save' non-sentient entities into becoming living beings.


If the idea of euthanizing someone in the final stage of some terminal illness is not absurd, despite the fact that they will never experience the relief of their suffering, why is the idea of preventing not-yet-existing beings (i.e. reproductive cells) from suffering silly? From a hedonistic utilitarian point of view, the identity of the beings who experience happiness and stress is morally irrelevant; it's the actual experience of happiness or stress that is valuable or disvaluable. Anti-natalists simply want to prevent more aversive experience from coming into existence.

I agree that negative utilitarians must acknowledge that happiness is valuable, if for no other reason than the fact that it is antithetical to stress, but that's different from claiming that happiness has intrinsic, positive value. Given the option of a universe where every sentient being was in a constant state of unimaginable joy and euphoria (no distress whatsoever) and a universe with no sentient beings, I would much prefer the former, and I don't see why anyone would not. If distress is of intrinsic, negative value then happiness has to be of intrinsic, positive value, because happiness (and not just the absence of distress) is the opposite of distress and there is a positive to every negative.

What does the word 'utilitarianism' mean? According to Wikipedia, a 'utilitarian' is someone who believes that the moral worth of an action is determined solely by its usefulness in maximizing utility, but that doesn't mean that 'positive' utilitarians deny that negative utility is just that, of negative value; it just means that their focus is on maximizing positive utility. What about the term 'negative utilitarian' entails a belief that happiness is not of positive value? The term simply implies that negative utilitarians are focused on minimizing negative utility rather than maximizing positive utility, not that they literally deny that the opposite of negative utility is positive utility.

I think I've finally figured out the difference between negative utilitarianism and prioritarianism.

-will edit

edit : I hate my computer


Re: Utilitarianism/consequentialism

Postby faithlessgod on 2010-11-04T09:28:00

Hi guys

An interesting thread. Someone said "As far as I know everyone here disagrees with the averaging method (which you could in theory apply to any type of consequentialism), though you might find the more popular summing method equally repugnant."
Well, I am a consequentialist, but the averaging method, as I understand it, does not apply to my approach. In order for it to apply there would have to be some form of intrinsic value to apply it to, but I have seen no strong, valid and sound argument in favour of any form of intrinsic value (cf. Mackie's argument from queerness), which leads me to reject it and hence any form of utility, however distributed (utilitarianism, egoism, altruism, prioritarianism, egalitarianism etc.). The point being that the averaging or totalising method can only apply if there is a utility to average or total. That does not prevent me from being a consequentialist (all value is extrinsic, coupled with value pluralism, specifically where these values are not commensurable nor reducible to one single dimension, scale ... or utility). There are other consequences apart from utility.

Re: Utilitarianism/consequentialism

Postby DanielLC on 2010-11-05T01:21:00

Would you agree that if A is better than B, and B is better than C, then A is better than C? If so, it is possible to give a value to each possible universe and compare universes using those values.

I'm not entirely sure what it would mean for the value to be intrinsic. The question isn't meaningless if it isn't. If you had to choose between universe A, universe B, or universe A+B which is both of them combined without any interaction, would you never choose A+B? This would be true if you worried about the average. Would you choose A+B over A for certain values of B, regardless of A? This would be true if you worried about the sum. That said, it's possible for those to be true for more complicated value systems.
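A minimal sketch of the A+B test, where a 'universe' is just a list of welfare levels and A+B is the two lists combined (the numbers are arbitrary):

    def total(u):
        return sum(u)

    def average(u):
        return sum(u) / len(u)

    A = [10, 8]        # a small, well-off universe (each number is one being's welfare)
    B = [1, 1, 1, 1]   # a larger universe of lives barely worth living

    combined = A + B   # both universes together, with no interaction

    print(total(A), total(combined))      # 18 vs 22: a summing view prefers A+B whenever total(B) > 0
    print(average(A), average(combined))  # 9.0 vs ~3.67: an averaging view can rank A+B below A alone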

Even the utilitarian Peter Singer, however, thinks it an absurd notion to want to 'save' non-sentient entities into becoming living beings.


So you found a philosopher that said that. There are plenty that disagree with that sentiment, for example: DanielLC and Ubuntu.

Re: Utilitarianism/consequentialism

Postby Will Crouch on 2010-11-09T10:24:00

In response to the original question, if you want the standard way that philosophers carve the conceptual distinctions, this is an excellent discussion: http://plato.stanford.edu/entries/consequentialism/

Sorry if this is already familiar to everyone. I think it makes vivid, though, that the motivation for utilitarianism is, to a great extent, the belief that a theory is more likely to be right if it is simple and elegant. I think that's why the mathematically inclined are often the most sympathetic to utilitarianism. Utilitarianism answers a lot of different questions at once, and always chooses the simplest answer.


Re: Utilitarianism/consequentialism

Postby faithlessgod on 2010-11-16T09:51:00

Will and Daniel

Hi Will, you say that "that the motivation for utilitarianism is, to a great extent, the belief that a theory is more likely to be right if it is simple and elegant. I think that's why the mathematically inclined are often the most sympathetic to utilitarianism. Utilitarianism answers a lot of different questions at once, and always chooses the simplest answer."

First, I would say this is a motivation for consequentialism, of which utilitarianism(s) was probably the first and certainly the best-known variant(s).

Secondly, this is only one of the abductive principles used to justify a theory, and on its own it is insufficient. It is rather that, I think most of us here would agree, consequentialism is the inference to the best explanation for normative ethics. Within that area, we can dispute amongst ourselves as to which is the better variant, which is what this forum is for.

And this brings me to Daniel's comment, of which only the first half was addressed to me.

You ask "Would you agree that if A is better than B, and B is better than C, than A is better than C? If so, it is possible to give a value to each possible universe and compare them using those." Again I have two responses.

So what, I say. Why? Well, whatever I think, you and others might think otherwise. That is, we can have different orderings of A, B and C, because what I value can be different from what you value; that is, value is incommensurate and plural. Even if we agree on the same ordering here, we can find some orderings where we would disagree. And I say that the issue is the insistence upon a utility to impose unilaterally on everyone, which is why I reject utilitarianism but not consequentialism - which, to return to my original point, does not make a utility necessary.

And that leads to my second response. It does not matter which universe we prefer; only beings within those universes matter, and what matters is only what matters to them. If a being in A prefers B, where they do not exist, over A, then B is of more value to them than A. Other beings in A might disagree or not.

Re: Utilitarianism/consequentialism

Postby DanielLC on 2010-11-16T18:16:00

The idea that different people have different moral systems and no one system is right sounds very nice, but when you actually have to make a decision it doesn't really help. If you don't have a preferred universe, how do you choose which to cause? Do you go with the universe where you go to work, the one where you just lie in bed, or the one where you go on a murderous rampage?

Re: Utilitarianism/consequentialism

Postby faithlessgod on 2010-11-17T20:28:00

Hi Daniel

"The idea that different people have different moral systems and no one system is right sounds very nice,"
I disagree: it does not sound nice, and indeed it is quite mistaken.

"but when you actually have to make a decision it doesn't really help."
And what has this to do with the topic at hand?

" If you don't have a preferred universe, how do you choose which to cause? Do you go with the universe where you go to work, the one where you just lie in bed, or the one where you go on a murderous rampage?"
On what basis do you think I do not have a preferred universe? I have never made such an argument. Or are you addressing someone else?

Re: Utilitarianism/consequentialism

Postby DanielLC on 2010-11-17T21:37:00

Perhaps I misunderstood you.

You seem to be saying that different people value different things, and that there's no way to combine this into one utility function. If so, replace "moral system" with "utility function" in my response. If not, can you rephrase it?

Re: Utilitarianism/consequentialism

Postby faithlessgod on 2010-11-18T17:45:00

DanielLC wrote:You seem to be saying that different people value different things,

Correct
and that there's no way to combine this into one utility function.

No, there are quite a variety of ways of combining these into various utility functions. The issue I raised, or at least implied, previously is that there is no rational justification for any of these utilities - none of them being intrinsic is one major criticism.

If so, replace "moral system" with "utility function" in my response. If not, can you rephrase it?

Why should I or anyone replace "moral system" with "utility function"? You appear to be assuming what you are trying to prove. You cannot presuppose that consequentialism requires a utility, when this is, or has become, the question at hand.

Note: I presume it was you who made the original comment that triggered my interest and that I quoted in this thread; if not, you appear to endorse it?

Re: Utilitarianism/consequentialism

Postby DanielLC on 2010-11-18T20:48:00

If there's no one utility function to use, how do you decide what to do?

What I meant was, take my comment and replace the words "moral system" with "utility function".

Re: Utilitarianism/consequentialism

Postby faithlessgod on 2010-11-23T14:22:00

If there's no one utility function to use, how do you decide what to do?

Like anyone else, of course. That is, everyone usually seeks to substitute a more fulfilling state of affairs for a lesser one, and they do that by seeking to fulfil the more and stronger of their desires, given their beliefs. Adding "utility functions" is either trivially empty and true or substantively false (if the implication is that everyone should have the same utility function).

What I meant was, take my comment and replace the words "moral system" with "utility function".

Yes, I understood that, and asked what that has to do with morality.

You mean like this?

"The idea that different people have different utility functions and no one system is right sounds very nice, but when you actually have to make a decision it doesn't really help."
But this no longer looks like a question of morality once I have made the substitution you suggested. Morality is concerned with normativity, that is, what people ought to do, not what they do do. Yet this sentence no longer addresses such issues. The other sentences I have already addressed in previous comments (as I had this one too).

Further, it is wrong, since it does help in making decisions. That is, it is usually of benefit if you are aware that different people can have different motivations, and the better you are at determining those motivations, the better you are at deciding how to successfully bring about what you desire.

