My version of consequentialism


Postby Mestroyer on 2012-03-25T02:38:00

I just found this board a few weeks ago, after becoming a vegan a few months ago, and I was pretty excited to see people talking rationally about the ethics of using animals (before that, most of the vegans I encountered said things like (not an exact quote): "I just can't believe that all that pain doesn't end up in the food.").

I have been lurking up until now, but I figured I would post my thoughts on ethics so far, in case anyone found them interesting.

First of all, I don't think morality is objective, and by that I mean that morality is not part of the fabric of the universe. It doesn't have any special place metaphysically (I hope I am using that word right because I have never studied metaphysics). I think it is basically software embedded in the minds of agents. Despite this, I put a lot of effort into trying to make my own software consistent. Instead of asking myself "What do I feel is right now?" I try to ask "What would I feel is right, given that I could reflect on it (with an interest in being consistent) for an unlimited amount of time?"

The other thing is that though I think every statement like "X is wrong" is an opinion, that does not mean that I think other people with their own codes of morality should be allowed to act on them. This is because I think the statement "You should always respect other people's opinions if they are not factually wrong" is itself an opinion, and it's one I disagree with.

This is an ongoing process. A lot of the specifics are not completely worked out, but here is what I have, as the best explicit description I can make of what I consider right and wrong.

The first part of my philosophy is a modified piece of scalar utilitarianism, which (as I understand it) avoids requiring us to spend every moment doing the most good we can by saying that actions are not right or wrong, they are just better or worse than other actions (instead of saying the only right action is the one that does the most good). I say that actions are right or wrong in degrees, but also that there is a 0-point where an action is neither right nor wrong. So things above that are not just better, they are right, and things below that are not just worse, they are wrong.

Actions are scored as more right or more wrong according to the amount of good or bad the actor would (probabilistically) expect them to bring about, given that they made the best calculation they could (or, in the case of moral situations they can reasonably expect to be of smaller importance than the effort of thinking them through in great detail, an amount of effort proportional to what's at stake). The 0-point (on the scale of how right an action is) is the rightness of noninterference, aside from fulfilling responsibilities already accrued.
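As a minimal sketch of this scale (the outcome probabilities and good/bad values below are hypothetical placeholders, since the post doesn't put numbers on anything):

```python
# A sketch of rightness as expected value relative to a noninterference
# baseline. The probabilities and values below are hypothetical.

def expected_value(outcomes):
    """Probability-weighted sum of the good/bad of each possible outcome."""
    return sum(p * value for p, value in outcomes)

def rightness(action_outcomes, baseline_outcomes):
    """Positive = right in degrees, negative = wrong, zero = neither.

    The 0-point is noninterference (the baseline), as described above.
    """
    return expected_value(action_outcomes) - expected_value(baseline_outcomes)

# Example: saving a drowning person vs. just walking by.
save = [(0.9, 100.0), (0.1, 0.0)]  # probably prevents a large detriment
walk_by = [(1.0, 0.0)]             # noninterference: neither right nor wrong
print(rightness(save, walk_by))    # 90.0 > 0, so saving is right
```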

Responsibilities are accrued only by bringing about situations where something bad will happen unless you intervene later. So if you have a child, you can't let it starve and say "Hey, that is inaction. Don't blame me," because you brought it into the situation where it would be in danger of starving in the first place (being alive).

However, if someone else's child is starving in the street (and you didn't, in some other way, put it in that situation), it is not wrong to just walk by.

How good or bad a consequence is is given by a weighted sum: for each individual affected, their priority multiplied by the detriment they receive, or are prevented from receiving. A detriment is when something happens that goes against a preference they hold for selfish reasons about what should not happen to themself, or when they are kept from fulfilling a selfish positive preference that they could already (before interference) fulfill on their own without negatively affecting others.

Weighing the importance of preferences of different individuals (of equal priority) is done by looking at which preference is a greater fraction of all of the things they selfishly prefer.

Preferences by one individual about another (or about inanimate objects) are also ignored. Things that are preferred just to get other things are ignored (only "end" preferences count, not "means" ones). In the case of conflicting preferences, all but the most specific are ignored.

Priority is a number between 0 and 1 that is decreased temporarily for intending to do wrong (lasting as long as the intention remains), or decreased more permanently by acting on that intention. It is restored by changing in such a manner that one would not repeat the wrong.
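Taken together, the last few paragraphs amount to a weighting rule. Here is a minimal sketch, assuming made-up class names and fields (the posts don't specify any representation):

```python
# A sketch of the goodness calculation: only selfish "end" preferences
# count, each weighted by the fraction it makes up of the individual's
# selfish preferences, then scaled by the individual's priority.
from dataclasses import dataclass

@dataclass
class Preference:
    selfish: bool   # about what happens to oneself, not to others/objects
    is_end: bool    # an "end" preference, not a "means" to something else
    weight: float   # fraction of everything this individual selfishly prefers

@dataclass
class Individual:
    priority: float  # in [0, 1]; lowered by intending or doing wrong

def badness(individual, thwarted):
    """Weighted badness of thwarting some of an individual's preferences."""
    counted = [p for p in thwarted if p.selfish and p.is_end]
    return individual.priority * sum(p.weight for p in counted)

def total_badness(affected):
    """A consequence's badness, summed over everyone affected.

    `affected` is a list of (Individual, [thwarted Preference]) pairs.
    """
    return sum(badness(ind, prefs) for ind, prefs in affected)
```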



Many of the aspects of this I'm not completely sure about, and I have thought of a few alternatives that I could come to favor at some time. I'm pretty comfortable with most of the results of thought experiments I can think of for these principles, including most of the ones I've thought of that are controversial, such as:

"Person A wants to die. They prefer not to live. They have no dependents. So they try to commit suicide. Person B intervenes, preventing them from doing so. Person A is then administered drugs that change their views on the matter, and after that they wish to live," where the conclusion is that what B did was wrong because any benefits A might experience later in life do not outweigh that their preference to die was violated.

"A footbridge runs over a trolley track, on which five people are tied down. A trolley will kill them unless you push a fat man off the bridge in front of it." where the conclusion is that it is right to push him off.

"You come across a person drowning, whom you could easily save." where the conclusion is that it is right to save them, but neither right or wrong to just walk by.

Some of the thought experiments I like the answers to from this philosophy more than utilitarianism's are:

"Person A is despised by N other people. A has done nothing to deserve their hatred. They would all be happy if A was killed. A wishes not to die. The N people cannot be deceived about whether A has died. No one but the N people and A will know about or be affected by what happens." where utilitarianism would conclude that for some large N, it was right to kill A, and my philosophy would conclude that no matter the size of N, the preferences of the crowd are all preferences about what should happen to another person, and thus discounted.

"Person A cannot stop thinking about philosophical questions that cause him great discomfort. If A was lobotomized, he would forget about all of them (and forget that he was lobotomized) and pursue (successfully) things that would bring him pleasure. A lives in isolation and the philosophical conclusions he reaches will never affect anyone else. A wishes not to be lobotomized. A could be lobotomized without his foreseeing it (and thus perhaps suffering from the fear of it) by performing the procedure as he was willingly sedated for what he thought was a different surgery" where ordinary utilitarianism concludes that he should be forcibly lobotomized, and my philosophy (or any kind of preference-based utilitarianism) says he should not be.

"Person A will have as much pleasure in the remainder of his life as he will pain (or, for utilitarianism that weights pleasure and pain differently, whatever ratio is necessary so that they balance out). A wishes not to die in spite of this. A lives in isolation and will not affect anyone else." where utilitarianism says that it is not bad that he be killed unexpectedly, instantly, and painlessly by a sniper, and my philosophy says that it is bad.

Sorry if I have misinterpreted utilitarianism in drawing any of these conclusions.


Re: My version of consequentialism

Postby Brian Tomasik on 2012-03-25T10:40:00

Welcome, Mestroyer!

Mestroyer wrote:I just found this board a few weeks ago, after becoming a vegan a few months ago

Cool. :P How did you find Felicifia? Via Google? And what prompted going vegan?

Mestroyer wrote:First of all, I don't think morality is objective, and by that I mean that morality is not part of the fabric of the universe. It doesn't have any special place metaphysically (I hope I am using that word right because I have never studied metaphysics).

Yes, I think that's an appropriate use of the word. (I've never studied metaphysics formally, either.)

Peter Singer said the following (which you might have read):
I am not defending the objectivity of ethics in the traditional sense. Ethical truths are not written into the fabric of the universe: to that extent the subjectivist is correct. If there were no beings with desires or preferences of any kind, nothing would be of value, and ethics would lack all content. On the other hand, once there are beings with desires, there are values that are not only the subjective values of each individual being.


Mestroyer wrote:though I think every statement like "X is wrong" is an opinion, that does not mean that I think other people with their own codes of morality should be allowed to act on them. This is because I think the statement "You should always respect other people's opinions if they are not factually wrong" is itself an opinion, and it's one I disagree with.

Yes. :)

I'm an emotivist as well, though not everyone on Felicifia is.

Mestroyer wrote:However, if someone else's child is starving in the street (and you didn't, in some other way, put it in that situation), it is not wrong to just walk by.

Interesting. I don't share the intuition at the level of ethical theory. In practice, yes, we usually should give more attention to things we're "responsible for" in the traditional sense, but this is because of practical considerations. I don't bake it into the foundations of utilitarianism itself.

I do find the drowning-child analogy compelling, although I think there are other causes that are better to fund than famine relief.

Mestroyer wrote:Priority is a number between 0 and 1 that is decreased temporarily for intending to do wrong (lasting as long as the intention remains), or decreased more permanently by acting on that intention. It is restored by changing in such a manner that one would not repeat the wrong.

Well that's interesting. :) That's one way to justify increased suffering by wrongdoers.

My own feeling is that suffering is equally bad regardless of who experiences it, but that punishment can sometimes be justified on grounds of prevention (or even in one-off cases for rule-utilitarian reasons).

Mestroyer wrote:Sorry if I have misinterpreted utilitarianism in drawing any of these conclusions.

Nope, I think you got it right, at least at a naive first glance. One might quibble about what utilitarianism would prescribe in reality, especially with rule-utilitarian considerations at play, but if we enter the antiseptic world of pure thought experiments, then your interpretations are likely correct.

For myself, I actually agree with what utilitarianism says in these instances. (However, I lean toward negative utilitarianism, which wouldn't endorse the N people killing the one person unless doing so prevented suffering by the N people, rather than just giving them happiness.)

Re: My version of consequentialism

Postby rehoot on 2012-03-25T23:07:00

Mestroyer wrote:where utilitarianism would conclude that for some large N, it was right to kill A, and my philosophy would conclude that no matter the size of N, the preferences of the crowd are all preferences about what should happen to another person, and thus discounted.

Utilitarians who believe in the precision of utilitarian calculus (I'm not one of them) would kill the person or make the person a slave or whatever they wanted. You might want to consider what you would do if that person in question was about to push a button to release poison pain-gas around the entire planet, thereby leading to immense suffering followed by death. You might also want to consider what you mean by "A has done nothing to deserve their hatred." Does that mean there is an omniscient being who declares that what person A has done is not bad? If so, then toss consequentialism out the window. There is no rational reason that I can find for people deriving pleasure from football, but they do. There is no rational reason for many types of hatred. I would like everybody to be rational, but humans are not biologically constructed to be judiciously rational.

Your conclusion could be reached in different ways. One way would be to say that you believe that the best way to maximize happiness would be for everybody to respect certain wishes of other people (like the wish to not be killed). Another way would be to say that there is some morality that is "part of the fabric of the universe" (your words), but then you would contradict yourself. You could also argue that people have, for thousands of years, killed or oppressed others based on unwarranted beliefs, and that therefore humans should adopt a cautious approach of avoiding harming others, so as not to harm them based on false information or irrational inferences from facts. If so, that would reveal your personal preference for rationality (which others may not accept, perhaps out of ignorance, but either way they reject rationality).

Mestroyer wrote:Sorry if I have misinterpreted utilitarianism in drawing any of these conclusions.

There is no perfect form of utilitarianism, and your version seems as good as anybody else's.


Re: My version of consequentialism

Postby Brian Tomasik on 2012-03-26T03:03:00

It's worth remembering that many of the edge cases where utilitarianism makes people feel uncomfortable don't show up a lot in practice, so when it comes time to do something in the real world, we can agree to disagree and move forward.

The drowning-child argument may be the most important exception, because there are "drowning children" all over the world every minute of the day, but it seems that most people around here work pretty hard to help reduce the suffering of other organisms regardless of their views on that intuition pump. :)

Re: My version of consequentialism

Postby RyanCarey on 2012-03-26T03:57:00

Welcome Mestroyer! That's my favourite first post I've seen so far, nice one! I'm just going to comment on some of the key bits.

Mestroyer wrote: Instead of asking myself "What do I feel is right now?" I try to ask "What would I feel is right, given that I could reflect on it (with an interest in being consistent) for an unlimited amount of time?"

This is John Rawls's idea of reflective equilibrium.

Mestroyer wrote:Weighing the importance of preferences of different individuals (of equal priority) is done by looking at which preference is a greater fraction of all of the things they selfishly prefer.

The key part here seems to be that you're only considering 'selfish' preferences. The reason for only considering 'selfish' preferences seems to be that preferences should be considered 'fairly'. It seems to solve Ronald Dworkin's double-counting objection. I'd be interested to hear if you could elaborate on your thinking re which preferences count and which don't.

Mestroyer wrote:Responsibilities are accrued only by bringing about situations where something bad will happen unless you intervene later. So if you have a child, you can't let it starve and say "Hey, that is inaction. Don't blame me," because you brought it into the situation where it would be in danger of starving in the first place (being alive).

I agree with Alan that the shallow pond analogy seems compelling. I listened to a recent talk by Peter Singer on the demands of morality where he presented a chapter from his upcoming book. If you're interested in having your views on this topic challenged, I would suggest you try to grab a copy, maybe later this year.

Mestroyer wrote:"Person A is despised by N other people. A has done nothing to deserve their hatred. They would all be happy if A was killed. A wishes not to die. The N people cannot be deceived about whether A has died. No one but the N people and A will know about or be affected by what happens." where utilitarianism would conclude that for some large N, it was right to kill A, and my philosophy would conclude that no matter the size of N, the preferences of the crowd are all preferences about what should happen to another person, and thus discounted.

"Person A cannot stop thinking about philosophical questions that cause him great discomfort. If A was lobotomized, he would forget about all of them (and forget that he was lobotomized) and pursue (successfully) things that would bring him pleasure. A lives in isolation and the philosophical conclusions he reaches will never affect anyone else. A wishes not to be lobotomized. A could be lobotomized without his foreseeing it (and thus perhaps suffering from the fear of it) by performing the procedure as he was willingly sedated for what he thought was a different surgery" where ordinary utilitarianism concludes that he should be forcibly lobotomized, and my philosophy (or any kind of preference-based utilitarianism) says he should not be.

"Person A will have as much pleasure in the remainder of his life as he will pain (or, for utilitarianism that weights pleasure and pain differently, whatever ratio is necessary so that they balance out). A wishes not to die in spite of this. A lives in isolation and will not affect anyone else." where utilitarianism says that it is not bad that he be killed unexpectedly, instantly, and painlessly by a sniper, and my philosophy says that it is bad.


In example 1, the answer is that it is probably not right to kill A, because this will cultivate discriminatory, hateful behaviour that will cause great future harm.
In example 2, the answer is that it is probably not right to lobotomise A because the doctor is entrenching a harmful habit of betraying his patients’ trust. Although A lives in isolation, his doctor still may be found out for performing this lobotomy. If A’s doctor is arrested, he will be imprisoned and lose his medical licence. If A’s doctor is not arrested, he will still endure a lifetime of guilt.
In example 3, the two main details are not described. 1. Who is the sniper? What are the consequences of this act for the sniper and the people who interact with him? 2. We need to provide a textured description of the life of Person A. It is very easy to say that someone's life has equal parts of pleasure and pain, but it is much more difficult to imagine it. Though we might say that we are all talking about Person A, we are probably thinking of things that greatly differ in value.

Of course, I didn't give any answers in the spirit of the examples provided. Ultimately, if you tweaked the examples, I would be forced to answer in the typical utilitarian fashion for all of them. The mob should lynch the individual, the doctor should lobotomise person A and the sniper should execute the innocent. But with each additional item of description, the absurdity of the thought experiment would be more and more apparent, and its relevance to the real world would dissolve. You can pin the utilitarian down to an answer, or you can provide a thought experiment relevant to the real world, but not both. R M Hare writes about this. What is happening is that the anti-utilitarian oversimplifies the utilitarian approach and then criticises the results. The key to answering these thought experiments is taking a global consequentialist approach, as per Toby Ord. Global consequentialism is a position on the rule utilitarianism vs act utilitarianism dispute. It's my favourite idea in consequentialism. (And you already know about scalar consequentialism, which is my second favourite, so I figure you might be interested!)

If you've got all the way through, thanks. Welcome to the site!

Re: My version of consequentialism

Postby Mestroyer on 2012-03-26T10:49:00

Alan Dawrst wrote:Cool. :P How did you find Felicifia? Via Google? And what prompted going vegan?


I followed a link from http://www.utilitarian-essays.com/
which I got to by following a link in a comment on a post on http://measureofdoubt.com/
which I got to from a youtube video by one of the authors of that blog.

I went vegan because I realized just how much animals kept in farms suffer. I was just a vegetarian for a while, but I eventually realized that this wasn't really defensible (in terms of my own ideas, as I settled on them) because the animals probably cared a lot more about not suffering than about not dying, and suffering results from buying not only meat, but dairy and eggs as well.

I think my objection to the idea of giving humans more priority than anything else just for being human may have something to do with all of the sci-fi and fantasy I have read, wherein there are a lot of non-human characters portrayed sympathetically, and sometimes even human speciesists (in the sense of thinking of other people of human-like species as inferior) being portrayed unsympathetically.

Alan Dawrst wrote:Peter Singer said the following (which you might have read):
I am not defending the objectivity of ethics in the traditional sense. Ethical truths are not written into the fabric of the universe: to that extent the subjectivist is correct. If there were no beings with desires or preferences of any kind, nothing would be of value, and ethics would lack all content. On the other hand, once there are beings with desires, there are values that are not only the subjective values of each individual being.


It's funny, I actually read just this snippet yesterday when I looked through the amazon preview of "How are we to live?" I remember he said something like it's more broad-minded (I don't remember if that's the exact word he used) to care about everyone instead of just to care about yourself.

I remember having similar thoughts a while ago, but I didn't care enough about broad-mindedness for it to motivate me to do much.

Alan Dawrst wrote:
Mestroyer wrote:However, if someone else's child is starving in the street (and you didn't, in some other way, put it in that situation), it is not wrong to just walk by.

Interesting. I don't share the intuition at the level of ethical theory. In practice, yes, we usually should give more attention to things we're "responsible for" in the traditional sense, but this is because of practical considerations. I don't bake it into the foundations of utilitarianism itself.

I do find the drowning-child analogy compelling, although I think there are other causes that are better to fund than famine relief.

I agree that it's a good analogy, but in my case it seems to have backfired.
My feeling that someone is not doing wrong by not helping distant strangers was stronger than my original feeling that they were bad for leaving the drowning child.

Alan Dawrst wrote:
Mestroyer wrote:Priority is a number between 0 and 1 that is decreased temporarily for intending to do wrong (lasting as long as the intention remains), or decreased more permanently by acting on that intention. It is restored by changing in such a manner that one would not repeat the wrong.

Well that's interesting. :) That's one way to justify increased suffering by wrongdoers.

My own feeling is that suffering is equally bad regardless of who experiences it, but that punishment can sometimes be justified on grounds of prevention (or even in one-off cases for rule-utilitarian reasons).


I don't find rule utilitarianism very agreeable, except as a sort of approximation to aid in calculus, because the best rule (the one that would bring about the most utility) is really "Do whatever an act utilitarian would do," isn't it?

But there are a lot of times when, with no way of making any sound judgement of the consequences of my one action, I will guess that they are probably about 1/M of the consequences of M people doing it, where M is a big enough number that I can guess what the consequences are.
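Numerically, that heuristic is just division; a minimal sketch, with made-up figures:

```python
# The 1/M heuristic: if one act's consequences are too small to judge,
# estimate the consequences of M people doing it and take 1/M of that.
# Both numbers below are hypothetical.
M = 1_000_000                  # a population large enough to judge
consequences_of_M = -50_000.0  # e.g. total harm if a million people did it
my_share = consequences_of_M / M
print(my_share)                # -0.05: the estimate for my single act
```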

Alan Dawrst wrote:For myself, I actually agree with what utilitarianism says in these instances. (However, I lean toward negative utilitarianism, which wouldn't endorse the N people killing the one person unless doing so prevented suffering by the N people, rather than just giving them happiness.)


I have thought about negative utilitarianism, but I don't like how it seems to say that if a person (living in isolation) will be enormously happy throughout their life, but for a brief headache one day, and you can kill them with no consequences to anyone else but them, without them foreseeing it or feeling it, you should do so.

I'll take a look at the other replies when I get a chance (hopefully tomorrow)


Re: My version of consequentialism

Postby Brian Tomasik on 2012-03-26T11:40:00

Mestroyer wrote:I followed a link from http://www.utilitarian-essays.com/
which I got to by following a link in a comment on a post on http://measureofdoubt.com/
which I got to from a youtube video by one of the authors of that blog.

Awesome. Yeah, Julia has started doing several YouTube video blogs. :)

Mestroyer wrote:suffering results from buying not only meat, but dairy and eggs as well

Yep. That said, there's a huge difference between dairy and eggs in terms of suffering per kg of food. I try hard to avoid eggs but am more relaxed about milk-based products.

Mestroyer wrote:may have something to do with all of the sci-fi and fantasy I have read

There does seem to be a close link between sci fi and anti-speciesism. Often, sci fi stories talk about "the welfare of all sentient beings" and such.

Mestroyer wrote:It's funny, I actually read just this snippet yesterday when I looked through the amazon preview of "How are we to live?"

In other words, this was a coincidence? That's impressive.

Mestroyer wrote:I don't find rule utilitarianism very agreeable, except as a sort of approximation to aid in calculus, because the best rule (the one that would bring about the most utility) is really "Do whatever an act utilitarian would do," isn't it?

I used to think so, but I changed my mind upon reading about Newcomb's problem, timeless decision theory, and credible threats in game theory. Parfit's hitchhiker is a nice example to grease the wheels of thinking about this topic. I believe Toby Ord's "global consequentialism" paper that RyanCarey cited also addresses this, though I haven't read more than the abstract of the piece. ;)

Mestroyer wrote:I have thought about negative utilitarianism, but I don't like how it seems to say that if a person (living in isolation) will be enormously happy throughout their life, but for a brief headache one day, and you can kill them with no consequences to anyone else but them, without them foreseeing it or feeling it, you should do so.

I'm probably not a real negative utilitarian. I waffle on my exact position, but on most days what I say is that I'm an ordinary total hedonistic utilitarian with a very extreme pleasure-pain exchange rate (e.g., it would take maybe 10 trillion years of eating yummy potato chips to equal one minute of burning at the stake). Also, I don't think small amounts of suffering are very bad (e.g., pinpricks, headaches, stubbing your toe, cutting your finger, etc.). I only start getting extreme when we're talking about really traumatic forms of pain (e.g., drowning, being swallowed alive by a snake, impaling, drawing and quartering, etc.).
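Taken literally, that exchange rate is just arithmetic (using the rough, self-described uncalibrated 10-trillion-year figure from the post):

```python
# How many minutes of potato-chip pleasure balance one minute of burning
# at the stake, given the admittedly rough 10-trillion-year figure?
years = 10e12                        # "maybe 10 trillion years"
minutes_per_year = 365.25 * 24 * 60  # ~525,960
chip_minutes = years * minutes_per_year
print(f"{chip_minutes:.2e}")         # ~5.26e+18 chip-minutes per stake-minute
```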

Re: My version of consequentialism

Postby Arepo on 2012-03-26T13:54:00

Mestroyer wrote:I don't find rule utilitarianism very agreeable, except as a sort of approximation to aid in calculus, because the best rule (the one that would bring about the most utility) is really "Do whatever an act utilitarian would do," isn't it?


I think the confusion lies (as is so often the case in philosophical problems) in definitions - of 'act'. Usually criticisms leveled at AU suppose quite a restrictive definition, along the lines of 'decision that you make and carry through after conscious deliberation of at least one alternative'. I see no reason to think that the few people I've read who actually self-identify as act-utilitarians (eg JJC Smart) would use this kind of definition, though. So I don't think it's a meaningful question whether act util is subject to such criticism. Rule util does seem more widely defined, since it was conceived as a fix for the supposed shortcomings of act util, so IMO it was always off to a bad start by trying to fix something that wasn't necessarily broken.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: My version of consequentialism

Postby Mestroyer on 2012-03-27T08:02:00

rehoot wrote:
Mestroyer wrote:where utilitarianism would conclude that for some large N, it was right to kill A, and my philosophy would conclude that no matter the size of N, the preferences of the crowd are all preferences about what should happen to another person, and thus discounted.

Utilitarians who believe in the precision of utilitarian calculus (I'm not one of them) would kill the person or make the person a slave or whatever they wanted. You might want to consider what you would do if that person in question was about to push a button to release poison pain-gas around the entire planet, thereby leading to immense suffering followed by death. You might also want to consider what you mean by "A has done nothing to deserve their hatred." Does that mean there is an omniscient being who declares that what person A has done is not bad? If so, then toss consequentialism out the window. There is no rational reason that I can find for people deriving pleasure from football, but they do. There is no rational reason for many types of hatred. I would like everybody to be rational, but humans are not biologically constructed to be judiciously rational.

Your conclusion could be reached in different ways. One way would be to say that you believe that the best way to maximize happiness would be for everybody to respect certain wishes of other people (like the wish to not be killed). Another way would be to say that there is some morality that is "part of the fabric of the universe" (your words), but then you would contradict yourself. You could also argue that people have, for thousands of years, killed or oppressed others based on unwarranted beliefs, and that therefore humans should adopt a cautious approach of avoiding harming others, so as not to harm them based on false information or irrational inferences from facts. If so, that would reveal your personal preference for rationality (which others may not accept, perhaps out of ignorance, but either way they reject rationality).

Mestroyer wrote:Sorry if I have misinterpreted utilitarianism in drawing any of these conclusions.

There is no perfect form of utilitarianism, and your version seems as good as anybody else's.


If he was about to inflict pain on a whole bunch of people, that would be a different story. Their desire not to feel pain is selfish and therefore significant. In that case I would say he should be killed. When I said A had done nothing to deserve their hatred, I was emphasizing that their wish for A to die was completely arbitrary. No omniscient being is necessary. If it is posited in the thought experiment, we can assume it to be certain. In real life, an abundance of evidence can approximate certainty well enough.

Alan Dawrst wrote:It's worth remembering that many of the edge cases where utilitarianism makes people feel uncomfortable don't show up a lot in practice, so when it comes time to do something in the real world, we can agree to disagree and move forward.

The drowning-child argument may be the most important exception, because there are "drowning children" all over the world every minute of the day, but it seems that most people around here work pretty hard to help reduce the suffering of other organisms regardless of their views on that intuition pump. :)


I agree. I want to point out that though I think it is not wrong to let a child drown, I think it is right (in the sense of being supererogatory) to save them. I have not donated any significant amount of my resources to saving dying children, but maybe I will once I finish with school and have more money to spend.

RyanCarey,
I'm not sure exactly what Dworkin means by external and personal preferences, but I agree with his conclusions about what is an important preference and what is not.

The main thing I want to assure by discounting preferences about another individual is that the amount of the good that is defined by what happens to an individual is always in line with what they themselves wish.

The point of excluding things that are preferred just as means to get other things is so that I can answer "yes" to thought experiments like:
"You see a person opening a box that they think contains cake. You know that it is actually full of anthrax. They wish to eat cake but wish not to be exposed to anthrax. You are able to forcibly prevent them from opening the box but you cannot convince them that it is dangerous. They insist that they wish to open it. Should you prevent them from opening it?"
The point of excluding desires about inanimate objects is to limit the scope of what events someone can make good by wishing for them.

Most of the rest is inspired by negative utilitarianism, except instead of looking at just suffering and ignoring happiness, it looks at just a negative change in (preference based) utility on a personal scale. So basically, making someone worse off than they were already is bad, and stopping a course of events that would have done that is good, but giving someone satisfaction of positive preferences that wouldn't already be satisfied is not good (nor bad).

You have recommended a lot of reading. I bookmarked the essays, but probably won't be able to get to them until at least this next weekend. I have been thinking about reading some Peter Singer stuff, but I won't have time to read a book in the near future (and if I do, I will be too mentally exhausted from school to want to do much but lounge around and play video games). So yeah, maybe later this year.

Arepo,
that's interesting. What is a better definition of "act" then?


Re: My version of consequentialism

Postby Brian Tomasik on 2012-03-27T10:14:00

Mestroyer wrote:I have not donated any significant amount of my resources to saving dying children, but maybe I will once I finish with school and have more money to spend.

I put "drowning children" in quotes because I actually prefer to donate toward animals. However, others on this forum have different choices for favorite charities.

Re: My version of consequentialism

Postby RyanCarey on 2012-03-27T10:20:00

Mestroyer, I feel bad for being 'that guy who just posts long essays', but it's not usually in my nature! I'm just trying to help you take the next step with your ideas, and since you've expressed a pretty diverse range of ideas in this post, I responded accordingly!

I'd say, as per Toby's global consequentialism, that we should not 'act' so as to make the world happier, but rather that the world should just 'be' happier, no matter who or what makes it that way. When you take this point of view, we can evaluate which human rights we ought to respect, which rules we ought to enforce, who we ought to praise, which habits we ought to cultivate, etc. So anything at all ought to be evaluated. I've not yet met anyone to whom I've explained global utilitarianism who continued to believe in act or rule. It just seems obvious.

Re: My version of consequentialism

Postby Arepo on 2012-03-27T11:34:00

I basically share Eliezer Yudkowsky's view on this. The questions of 1) whether act util necessarily implies counterproductive behaviour and 2) if it doesn't (necessarily), what the most appropriate definition is aren't really questions until you've broken them down a lot further. Unlike Yudkowsky, I don't think one has much cause to spend any further time on them once you've realised this.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: My version of consequentialism

Postby rehoot on 2012-03-27T21:49:00

Mestroyer wrote:Most of the rest is inspired by negative utilitarianism, except instead of looking at just suffering and ignoring happiness, it looks at just a negative change in (preference based) utility on a personal scale. So basically, making someone worse off than they were already is bad, and stopping a course of events that would have done that is good, but giving someone satisfaction of positive preferences that wouldn't already be satisfied is not good (nor bad).


I am mostly leaning toward negative utilitarianism. You'll find some negative utilitarians in this forum and some positive. I am generally more certain that, on a large scale, removing suffering would produce more benefit for more people than adding some degree of happiness in a world where the suffering continues to exist. I use this as a personal ethic and a political philosophy. I often find it difficult to determine the best public policy that requires taking from one person to give to another.


Re: My version of consequentialism

Postby Hedonic Treader on 2012-05-27T12:33:00

rehoot wrote:I am mostly leaning toward negative utilitarianism. You'll find some negative utilitarians in this forum and some positive. I am generally more certain that, on a large scale, removing suffering would produce more benefit for more people than adding some degree of happiness in a world where the suffering continues to exist. I use this as a personal ethic and a political philosophy. I often find it difficult to determine the best public policy that requires taking from one person to give to another.

Bad is stronger than good for evolutionary reasons. People compete with each other over limited resources and reproductive options and have associated hard-wired psychological adaptations. Therefore, public policies that require making someone suffer for some benefit to others are generally to be distrusted. However, there is a difference between having practical reservations against them and having fundamental philosophical objections against such trade-offs, regardless of scope. There are two steps that usually convince me I'm not a negative utilitarian:

1) If solipsism were true, and I had a good life with only a modest amount of suffering, I would not kill myself.
2) The self as a consciousness-bearing time-stable unit is an illusion, therefore utilitarianism trumps egoism.

This combination usually gets me. Otoh, I don't see the total distribution of suffering vs. pleasure so far as net-positive. I'm always surprised that people assume it is.

Alan wrote:it would take maybe 10 trillion years of eating yummy potato chips to equal one minute of burning at the stake

I can relate to the intuition that burning at the stake is much worse than eating yummy potato chips is good. But why 10 trillion and not 15 trillion? Are 20 trillion worth two minutes? Considering the numerical mismatch is so huge, and other people give such hugely different answers, it seems unlikely that these numbers are "correct" in any meaningful sense of the word. We are judging this on a gut-level, and the results are very fuzzy.

Which leads to two questions that I'd very much love to see answered:

1) Are we fooling ourselves by assuming there is such a thing as a correct answer? If yes, are there at least answers that are better than others? It can't be completely arbitrary, or else you would have to accept very weird conclusions in "least convenient world" thought experiments.
2) Assuming there is at least an approximation to a correct answer, can it be found as a form of neuroscientific realism? Could someone with superior knowledge of the human brain create a formalism that actually quantifies affect as an objectively measurable phenomenon, like mass or height? Could hedons be an actual scientific unit like kg or joule?
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: My version of consequentialism

Postby DanielLC on 2012-05-27T21:40:00

Hedonic Treader wrote:2) Assuming there is at least an approximation to a correct answer, can it be found as a form of neuroscientific realism? Could someone with superior knowledge of the human brain create a formalism that actually quantifies affect as an objectively measurable phenomenon, like mass or height? Could hedons be an actual scientific unit like kg or joule?


I think pleasure/pain can be measured by operant conditioning. Whenever someone does a certain action, feed them potato chips for a certain amount of time, and burn them at the stake for a certain amount of time. If the frequency of the action doesn't change, the net utility is zero.
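As a toy sketch of that procedure (the response model below is a made-up placeholder, not real psychophysics), one could search for the chip-feeding duration at which the bundled reward and punishment leave behavior unchanged:

```python
# Bisect for the indifference point: the chip duration at which bundling
# the chips with one minute at the stake leaves action frequency unchanged.
def net_utility(chip_minutes, stake_minutes):
    # Hypothetical linear model; both constants are placeholders.
    CHIP_PER_MIN = 1.0
    STAKE_PER_MIN = -5.26e18
    return chip_minutes * CHIP_PER_MIN + stake_minutes * STAKE_PER_MIN

def indifference_point(stake_minutes, lo=0.0, hi=1e20, tol=1.0):
    """Find chip_minutes where net utility ~ 0 (frequency unchanged)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if net_utility(mid, stake_minutes) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(indifference_point(stake_minutes=1.0))  # ~5.26e18 chip-minutes
```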

Re: My version of consequentialism

Postby Hedonic Treader on 2012-05-28T08:21:00

DanielLC wrote:I think pleasure/pain can be measured by operant conditioning. Whenever someone does a certain action, feed them potato chips for a certain amount of time, and burn them at the stake for a certain amount of time. If the frequency of the action doesn't change, the net utility is zero.

I can see practical problems with this approach. Especially for deals like potato chips + burning at the stake. But it could make some sense to analyze people's behavior systematically; e.g. are there voluntary activities that lead people to get burns frequently, but that are otherwise fun? Masochism has been mentioned in another thread. Maybe there are dangerous sports activities where the average suffering from injury rate per week roughly amounts to x seconds burning at the stake? Then you could analyze what kind of dangers people accept, and for what reward, and maybe what kind of people (individual differences). If someone knows examples of such analyses, I'd be interested.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: My version of consequentialism

Postby Brian Tomasik on 2012-05-28T12:08:00

Hedonic Treader wrote:Otoh, I don't see the total distribution of suffering vs. pleasure so far as net-positive. I'm always surprised that people assume it is.

Yes, me too. :) I'm probably not a negative utilitarian, and yet I still think the amount of suffering in the multiverse outweighs happiness. This is true even for humanity's future light cone from an expected-value point of view.

Hedonic Treader wrote:I can relate to the intuition that burning at the stake is much worse than eating yummy potato chips is good. But why 10 trillion and not 15 trillion?

It might be 15 trillion. I just picked a number that sounded reasonable. With better calibration, maybe I could have picked a better number.

Hedonic Treader wrote:Are 20 trillion worth two minutes?

Yes. :)

Hedonic Treader wrote:Considering the numerical mismatch is so huge, and other people give such hugely different answers, it seems unlikely that these numbers are "correct" in any meaningful sense of the word. We are judging this on a gut-level, and the results are very fuzzy.

In view of moral non-realism, there is no absolutely "correct" answer, but I think what you mean is just "the answer I would be most happy with upon full reflection and with more knowledge/experience." That's certainly the case, but my estimate is for now better than nothing.

Hedonic Treader wrote:If yes, are there at least answers that are better than others?

Yes, of course. 1 second of eating potato chips vs. 1 minute burning at the stake is not a good deal. And I think 10 trillion years is better than 1 year or 10 years. I'm not extremely concerned if other people don't agree. If I met a suffering-maximizer, I wouldn't thereby fear that I was wrong all along about trying to prevent suffering.

Hedonic Treader wrote:Assuming there is at least an approximation to a correct answer, can it be found as a form of neuroscientific realism? Could someone with superior knowledge of the human brain create a formalism that actually quantifies affect as an objectively measurable phenomenon, like mass or height? Could hedons be an actual scientific unit like kg or joule?

No, I don't think it will ever be that precise, because there will always be lots of judgment calls when determining what counts as consciousness, how much different brain sizes and brain mechanisms count relative to one another, and so on. Science will definitely help us get better intuitions about how to make these judgment calls, but at the end of the day, they will still be judgment calls.

DanielLC wrote:Whenever someone does a certain action, feed them potato chips for a certain amount of time, and burn them at the stake for a certain amount of time. If the frequency of the action doesn't change, the net utility is zero.

I don't think that necessarily works. The obvious problem is time discounting, but even if that were overcome, I would still be troubled because I think a lot of human behavior is hard-wired and doesn't arise from maximizing subjective hedonic experience. Some human brain systems do some extremely utility-harming things, like when people start taking addictive drugs or have unprotected sex with people they don't know. These can't be justified even by time discounting unless you use an insanely high discount rate.

