Metaethics: Utilitarianism as a set of personal goals

Whether it's pushpin, poetry or neither, you can discuss it here.

Metaethics: Utilitarianism as a set of personal goals

Postby Brent on 2011-06-11T13:51:00

Here are some of my ideas about meta-ethics for utilitarianism - I appreciate any feedback or arguments against any positions I have taken!

Utilitarianism is, for me, first and foremost a set of personal goals. These goals can be summarized as one goal: to maximize welfare (for prior-existent people/animals). What do I mean by goals? I have utilitarian goals if I desire that welfare is maximized and also decide to increase welfare as much as I can. In other words, such goals have two components: First, I value a specific state of affairs more than another (more welfare over less welfare). Second, I make at least some decisions based on that value – this involves first deliberating about what actions are most likely to maximize welfare, and then choosing actions based on that deliberation. I am drawn to these goals for mainly emotional reasons such as empathy, but also because of my emotional and psychological reactions to the logical implications of utilitarianism.

I have these goals whether or not anyone else has them, or even has any reason to have them; morality for me is essentially individual. This doesn’t mean that I don’t want to share these goals with others, both as a way to achieve them and also because I just enjoy sharing my ideas and values. But it does mean that what morality is to other people can be completely different from what morality is to me, and that poses no problem.

Utilitarianism and Morality

Is this form of utilitarianism a system of morality? That depends on the definition of “system of morality.” I’m not a moral realist – I don’t feel that the values underlying my utilitarian goals are in any sense more “correct” than other values, and I don’t believe that anything is inherently valuable independent of anyone valuing it. Nor do I take any kind of cognitivist approach – I don’t think my values can be true or false. A statement about my values can be true or false, but a value isn’t a proposition or a belief about the world – it is ultimately based on a desire. I wouldn’t even say that utilitarian goals are things which rational/fully informed people must have. Logical argument and evidence can help convince someone to be a utilitarian, but logic alone is not enough to make someone adopt utilitarian values or goals.

Replacement Morality

Yet there are a number of important senses in which my utilitarianism can be called an ethical system. First, utilitarianism replaces my existing internal system of morality (a system put there by biology, social influence, etc.). This means that (a) my adoption of the goals of utilitarianism makes me feel that I do not need to heed certain aspects of my prior moral system, and (b) utilitarianism takes over the mechanisms of that prior system to some degree (e.g. moral emotions such as guilt and pride). This replacement is rather incomplete – I can’t easily change my ingrained attitudes and habits of thinking – but in many ways it is quite robust.

So for example, before I had any utilitarian goals I might have felt guilty if I was asked to donate to a random charity and did not, and pride for donating to that charity. I still have those feelings a little bit in this situation, but they are lessened because I know I should only give my money to the most effective charities. Instead, I might feel more guilt now than before I had utilitarian goals when I buy something I don’t need, knowing that the money could have helped children in extreme poverty.

Utilitarian goals also create moral rules which function in the same way as the moral rules I had before. Sometimes I might use utilitarianism to guide my decision making without necessarily feeling compelled by the ultimate goal of maximizing happiness. Yet I have a psychological desire to “do the right thing”, and I have decided that the moral rules I follow should be determined by which ones maximize happiness.

I don’t think utilitarianism should take over all the mechanisms of the prior morality. For example, if utilitarianism means you feel guilty every time you buy something you don’t need, it is not going to be an easy morality to stick to. Or we might retain existing moral feelings about the wrongness of killing in cold blood, while revising our moral feelings about the difference between killing and letting die in, say, the case of euthanasia.

Finally, a lot of the existing morality can and should stay intact, first because a lot of it overlaps, and second because I may have utilitarian reasons to preserve some of my existing moral beliefs.

Quasi-realism

The second sense in which my utilitarianism can be called an ethical system is that it replaces my existing “public” system of morality, meaning that I make assertions about normative or applied ethics based on utilitarian standards instead of the moral standards I had before. (Of course discretion is important here – in some cases we don’t need to argue for utilitarianism to make an argument for what we value. For example, see many of Peter Singer’s arguments here.)

I find quasi-realism to be an interesting theory which can apply here. Here is a brief explanation of this point of view:

“The quasi-realist says things which sound like what the realist says, but they are to be interpreted differently – in the moral case, as expressions of attitudes, rather than as committing to moral properties. Expressing an attitude requires neither belief in moral properties (realism) or pretense that moral properties exist (making believe that realism is true)… the quasi-realist differs from the realist in adopting a different account of the meaning [of sentences about morality] while continuing to accept those sentences (different content, same – or at least similar – attitude).” (Source)


Basically, what quasi-realism means to me is this: We can speak about morality as if we were talking about propositions that can be true or false, even though we are really talking about our own values, which can be neither true nor false. In this sense, we are stating our values, exploring the logical implications of these values, explaining why these logical implications lead us to adopt the moral views we do, and trying to convince others to adopt these views based on these logical implications. So for example, if we agree that killing one to save five is the morally correct thing to do in the trolley problem, then, barring other considerations, the principle of killing one to save five must be the morally correct principle to use in other situations. You can also find a couple of other short explanations of quasi-realism here and here.

I wrote a separate post for why I am not a realist, error theorist, or emotivist in the meta-ethics thread.


Re: Metaethics: Utilitarianism as a set of personal goals

Postby rehoot on 2011-07-21T19:19:00

Brent said
I have these goals whether or not anyone else has them, or even has any reason to have them.


My goals are similar.

Is this form of utilitarianism a system of morality?


Technically, if your goal is "welfare" then your philosophy might not be utilitarianism in the strict sense, but would fall under the broader category of consequentialism. I am working on a detailed summary of a book by Brad Hooker that seems consistent with this goal: http://felicifia.com/index.php?title=Id ... orld_(book)

I’m not a moral realist


You might want to explore moral realism. I am going to post a question that is related to Sam Harris's book on moral realism--I see many problems with the details in his book, but I also see some possibilities. By reconsidering WHY you believe what you believe, you might find that observations in the physical universe DO explain your beliefs. If so, you would become a moral realist.

I don’t think my values can be true or false


If you had asked me two weeks ago, I would have agreed. I think that some of my preferences are arbitrary, but my preference to avoid shooting people with a shotgun is consistent with a scientific observation that shooting people in this way adversely affects their life-span and self-reported quality of life. If I consider secondary and tertiary effects of my actions, it could be argued that setting the example (right action) helps me to objectively support the well-being of both myself and my community...

Sometimes I might use utilitarianism to guide my decision making without necessarily feeling compelled by the ultimate goal of maximizing happiness.


See the Brad Hooker link above for his allowance of some preference for self and family (read the book to get the full explanation and his detailed justification).

Finally, a lot of the existing morality can and should stay intact, first because a lot of it overlaps, and second because I may have utilitarian reasons to preserve some of my existing moral beliefs.


ditto

For the quasi-realism thing, we'll see what happens after the post that I hope to make today.


Re: Metaethics: Utilitarianism as a set of personal goals

Postby Brent on 2011-07-22T18:33:00

rehoot wrote: Technically, if your goal is "welfare" then your philosophy might not be utilitarianism in the strict sense, but would fall under the broader category of consequentialism.


Maybe, but the Stanford Encyclopedia of Philosophy implies that any kind of welfarist consequentialism is usually called utilitarianism:

SEP wrote: When such pluralist versions of consequentialism are not welfarist, some philosophers would not call them utilitarian. However, this usage is not uniform, since even non-welfarist views are sometimes called utilitarian.


If even non-welfarist consequentialisms are sometimes called utilitarian, it seems that welfarist consequentialisms can (according to convention) be called utilitarian, even if they are not classical utilitarianism or preference utilitarianism.


I may check out Brad Hooker's book at some point. I'm not initially sympathetic to rule-utilitarianism, and it seems like Hooker's rule utilitarianism is based on a completely different motivation than my utilitarianism (I'm not saying I shouldn't read his book because of this - it can be useful to read things by people with different opinions). From his book (quote found on the Brad Hooker Wikipedia page):

Hooker wrote: …the best argument for rule-consequentialism is not that it derives from an overarching commitment to maximise the good. The best argument for rule-consequentialism is that it does a better job than its rivals of matching and tying together our moral convictions, as well as offering us help with our moral disagreements and uncertainties.


I can see the logic in embracing and/or promoting a moral code which diverges from act utilitarianism if doing so will in the end maximize expected welfare (that is an empirical question). But it would be hard to convince me that human welfare isn't what ultimately matters morally.

rehoot wrote: You might want to explore moral realism. I am going to post a question that is related to Sam Harris's book on moral realism--I see many problems with the details in his book, but I also see some possibilities. By reconsidering WHY you believe what you believe, you might find that observations in the physical universe DO explain your beliefs. If so, you would become a moral realist.


I'll check out your post, but one thing I was trying to emphasize in this post is that utilitarianism is for me a set of goals based on values - so it is not a belief or group of beliefs at all.

rehoot wrote:
Brent wrote: I don’t think my values can be true or false.


If you asked me two weeks ago, I would have agreed. I think that some of my preferences are arbitrary, but my preference to avoid shooting people with a shotgun is consistent with a scientific observation that shooting people in this way adversely affects their life-span and self-reported quality of life.


It is of course empirically true that shooting people hurts or kills them (on a side note, I don't think you need science to know that - it is pretty obvious). But my desire that people aren't hurt or killed is not true or false (or to say it another way, my valuing human wellbeing is not true or false).

Brent wrote: Utilitarian goals also create moral rules which function in the same way as the moral rules I had before. Sometimes I might use utilitarianism to guide my decision making without necessarily feeling compelled by the ultimate goal of maximizing happiness. Yet I have a psychological desire to “do the right thing”, and I have decided that the moral rules I follow should be determined by which ones maximize happiness.


What I meant here (and I could have been clearer) is that sometimes I might follow utilitarianism as a rule instead of a goal. That is, sometimes my action is not motivated by this goal of maximizing welfare; I am instead thinking about following the utilitarian rule that says you should maximize welfare. In the first case I am motivated by maximizing welfare in itself, and in the second case I am motivated by a desire to do what is right, which happens to be maximizing welfare.


