Utilitarians : how can I/we counter this argument?

Whether it's pushpin, poetry or neither, you can discuss it here.

Utilitarians : how can I/we counter this argument?

Postby Ubuntu on 2010-10-13T22:25:00

Jim’s last post on Sam Harris addresses a particular example of a more general problem that I see recurring in the skeptical/scientific community. There seems to be a trend among skeptics to endorse a very naive version of utilitarianism as though it were not merely a theory about moral value but an objective principle similar to empirical theories. This trend is worrisome because many of the people endorsing it do not seem to be aware that they are doing so, or worse, they don’t see why it is a problem. For this reason, I’m going to take a few minutes to explain why it is a problem, so that none of our skeptical readers will make a similar mistake.

The basic assumption of every utilitarian ethical theory is that happiness (the definition varies, of course) is intrinsically valuable. Insofar as the definition of “intrinsic value” is understood in contrast to “instrumental value,” this observation is not controversial. We do not seek happiness as a means to some other end, we seek it as an end in itself. The value of happiness is also universal in the sense that nearly every person seems to value it. But there is a trick in moving from this accurate description of the intrinsic and universal value of happiness to the objective value of happiness that is necessary in order to make utilitarianism into an empirical moral principle.

Here’s the trick: It’s not really happiness qua happiness* that is intrinsically and universally valuable. It’s my happiness that I pursue as an end in itself, and it’s your happiness that you pursue as an end in itself. Utilitarians want to take the empirical fact that we each value our own happiness and derive a prescriptive imperative from it: “we ought to promote happiness universally.” Unfortunately, it just does not follow that simply because I value my own happiness I ought to promote the happiness of others. In order to make that step, the utilitarian must argue that I value happiness itself (not particular manifestations of it), so that my failure to promote universal happiness constitutes a mistake in my moral reasoning. And this argument fails because it is based upon a ludicrous premise: The overwhelming evidence is that we value human happiness selectively and with huge variation of intensity. I may strongly value the happiness of those I love, somewhat value the happiness of those I know, and slightly prefer the happiness of innocent strangers, but this does not mean that I value happiness independently of who manifests it. If I really valued happiness universally, I would easily relinquish the money I spend on comforts for myself and for people I love, because that money could make so much more of a difference to the happiness of people I do not know who are starving and suffering somewhere else.

Inevitably, when I point out to a naive utilitarian that his theory does not seem to accurately describe his own moral values, let alone those of others, he will respond by saying something along the lines of, “Yes, but if I were a better person it would.” No doubt, utilitarianism is appealing as a moral theory because it discourages selfishness, clannishness, racism, and all other manner of discriminatory practices. But unfortunately, this is irrelevant to its meta-ethical foundation. If utilitarianism were a truly empirical moral principle, then we wouldn’t have to explain away discrepancies between what we actually value and what we ought to value. Since those discrepancies exist, utilitarianism either hasn’t described the world accurately, or it is a moral postulation no more grounded in empirical science than any other theory of ethics. (Or, both. I think it’s both.) Either way, the utilitarians have failed to bridge the gap between actual moral sentiments (“is”s) and prescriptions about the way we ought to feel/act (“ought”s).

Of course, there is another method of bridging the is/ought gap that many utilitarians favor as well. It has the advantage of meaningfully distinguishing between empirical descriptions and practical imperatives, but with one rather unfortunate caveat: It takes out morality altogether. The move is to say that prescriptive language only refers to prudential advice, not moral imperatives. In other words, the utilitarian would say, “you ought to promote universal happiness because that will be likely to promote something you do value (a peaceful world, being seen as a good person, cooperation with others, personal fulfillment, etc.).” This move is problematic for two reasons. First, the premise that acting as a utilitarian is likely to promote personal value-satisfaction will frequently be false (there are lots of times when selfishness, or even hurting others, is the best strategy for promoting personal values). Second, and more importantly, it entirely misses the point. As soon as we move from moral oughts to prudential oughts, utilitarianism goes from being an ostensibly defensible theory of moral foundations to a delusional program of self-help. There is no reason to take advice from utilitarians unless it is moral advice, so the move from morality to prudence is just silly.

All of that being said, I don’t want to give the impression that I have some sort of a personal vendetta against utilitarianism. I don’t think it’s absurd to postulate that happiness qua happiness is intrinsically valuable. It’s a perfectly defensible axiom, but it is not derived from empirical observation. This puts utilitarianism in exactly the same meta-ethical position as every other theory of ethics. You can’t bridge the is/ought gap, and the scientists and skeptics who don’t get this need a philosophy lesson.

*In the interest of clarity, the phrase “x qua x” is used to refer to any thing in the capacity or character of itself. So, “happiness qua happiness” means “happiness as itself,” in contrast to “happiness for some particular person” or “happiness as it is seen by some particular person.”




http://theappleeaters.wordpress.com/201 ... omment-940

Ubuntu
 
Posts: 162
Joined: Tue Sep 07, 2010 1:30 am

Re: Utilitarians : how can I/we counter this argument?

Postby DanielLC on 2010-10-14T03:51:00

The first part isn't really an argument against Utilitarianism. It's an observation that he has never heard a good argument for Utilitarianism. If you want an argument, here's mine:

Each of us intuitively tries to maximize our own happiness. There's no way to tell which of us has the most accurate intuition. The best we can do is assume they all have the same expected accuracy, and just try to maximize everyone's happiness.

Alternately: I intuitively try to maximize my own happiness, but I recognize there's probably nothing special about me, and any reason to maximize my happiness would apply just as well to everyone else.

The second part is just talking about a misnomer. People are calling it Utilitarianism when it's not. For the record, it's enlightened self-interest.

Also, this probably should have been posted in the Common objections to consequentialism thread. Perhaps someone can move it?
Consequentialism: The belief that doing the right thing makes the world a better place.

DanielLC
 
Posts: 703
Joined: Fri Oct 10, 2008 4:29 pm

Re: Utilitarians : how can I/we counter this argument?

Postby RyanCarey on 2010-10-14T07:12:00

The main thrust of this article is that utilitarianism does not correctly describe people's morals. This is not a serious criticism to me, because I do not believe utilitarianism is supposed to describe people's morals. It's a moral standard for us to aim for.

Of course there are discrepancies between what we value and what we ought to value, because there are discrepancies between how we are and how we should be.

Sam Harris's version of utilitarianism may be simpler than classical utilitarianism. However, even it doesn't deserve to be called naive. In essence, what Sam Harris says is that morality cannot be dissociated from wellbeing, and I think this is true. Religious morality "piggybacks" off concern for wellbeing by instructing us that should we follow God's commands, we will go to heaven rather than hell. Justice is desirable because observing injustice makes us feel uneasy, and unwell. Should someone devise a moral precept that was entirely independent of human experience, it would be the least interesting thing in the world. We would be entirely incapable of being interested in such a moral principle because we would be unable to even apprehend it. So, to the extent that a principle such as justice is not relevant to our wellbeing, we cannot attend to it, and nor should we! I don't think this article addresses this central argument of Sam Harris's.
You can read my personal blog here: CareyRyan.com
RyanCarey
 
Posts: 682
Joined: Sun Oct 05, 2008 1:01 am
Location: Melbourne, Australia

Re: Utilitarians : how can I/we counter this argument?

Postby redcarded on 2010-10-27T13:19:00

Well, murder is bad in every form of ethics, yet it happens everywhere in the world, and that doesn't disprove ethics. Utilitarianism is a system of ethics, not a straight-out description of reality; if it were, the world would be a lot happier.
redcarded
 
Posts: 41
Joined: Thu Nov 13, 2008 11:34 pm
Location: Canberra, Australia

Re: Utilitarians : how can I/we counter this argument?

Postby Snow Leopard on 2010-11-16T20:38:00

DanielLC wrote: . . . Alternately: I intuitively try to maximize my own happiness, but I recognize there's probably nothing special about me, and any reason to maximize my happiness would apply just as well to everyone else. . .

Yes, we generalize. What feels true emotionally, and what has seemed to "work" on a number of occasions, we generalize intellectually. Or to put it more poetically, from the wisdom of the heart we generalize with the head.

The article does bring up the valid objection that utilitarianism sets an unrealistically high standard.

And the point the article doesn't bring up: Utilitarianism, in most of its formulations, is clumsy. We "should" be interested. We "should" donate time, money, effort, etc. Most of the time that's going to be dry as dust and feel like an obligation and a burden. A more sophisticated version is going to talk about rolling with current interests, dancing an interest as it were (yes, dancing an interest: lightly playing with it, trying it on in different ways and seeing how it works). And since we humans are social animals (although sometimes, given how we treat others, you'd wonder!), it seems to me a truly sophisticated version would talk about forming loose teams, helping other people, helping other teams, etc.

Snow Leopard
 
Posts: 40
Joined: Tue Nov 16, 2010 8:04 pm

Re: Utilitarians : how can I/we counter this argument?

Postby DanielLC on 2010-11-17T03:29:00

That's not utilitarianism. That's something you'd do to try to be a good utilitarian. Finding a way to get rid of akrasia produces a lot of utility.

As an analogy, consider a paperclip company. It will organize. It'll have managers, engineers, assembly workers, etc. That's not because organizing is what it's trying to do. It's trying to make paperclips. It organizes because doing so is helpful in making paperclips.
Consequentialism: The belief that doing the right thing makes the world a better place.

DanielLC
 
Posts: 703
Joined: Fri Oct 10, 2008 4:29 pm

Re: Utilitarians : how can I/we counter this argument?

Postby Snow Leopard on 2010-11-18T17:19:00

DanielLC wrote:That's not utilitarianism. That's something you'd do to try to be a good utilitarian. Finding a way to get rid of akrasia produces a lot of utility. . .

I agree that once we're outside the realm of what our goal should be we're outside the realm of utilitarianism proper. But I also think these kind of interdisciplinary approaches are often the most helpful.

PS What is 'akrasia'?

Snow Leopard
 
Posts: 40
Joined: Tue Nov 16, 2010 8:04 pm

Re: Utilitarians : how can I/we counter this argument?

Postby RyanCarey on 2010-11-22T06:21:00

Ok so DanielLC's post, as I interpret it, is one I completely agree with. Allow me to explain it.

Firstly, for context, akrasia is weakness of will. Weakness of will means wanting to do something but not being able to muster up the motivation to do it. We all want to help the third world, but few of us are so self-sacrificing. A way to deal with weakness of will is to try to be as strong-willed as possible, without making oneself feel guilty.

So what I interpret DanielLC to be saying is that utilitarianism only tells us what goal we should aim for: a world with more happiness and less suffering. And this is all utilitarianism should do. This is the purest distillation of utilitarianism. By analogy, a paperclip company aims to make paperclips, which attach papers to each other. Now, a paperclip company will do many other things on the way to achieving this objective. It'll hire managers, engineers, assembly workers, etc. But that doesn't lead us to say something silly like "the goal of a paperclip company is to hire staff". The goal of a paperclip company is still to make paperclips. By analogy, just because utilitarians have to deal with problems like weakness of will (also known as akrasia) doesn't mean that their ultimate goal is to "try their best to improve the world despite weakness of will". Our ultimate goal is still to improve the world. Weakness of will is a fact of the world, so it is something we will have to deal with in improving the world. But that doesn't mean it should be incorporated into our pure, ultimate aim.

I'm sure even my extended, clarified version of DanielLC's post is a little confusing, but I hope I've helped somewhat!
You can read my personal blog here: CareyRyan.com
RyanCarey
 
Posts: 682
Joined: Sun Oct 05, 2008 1:01 am
Location: Melbourne, Australia
