Deciding How Altruistic to Be


Postby Argothair on 2011-11-01T07:48:00

Hi! This is my first post on Felicifia; I apologize if I accidentally violate any community norms.

So, consequentialism seems pretty self-evident to me. To the extent that I pay attention to deontology or virtues or revelations or anything like that it's only because I expect I'll like the consequences of doing so. I figure anyone who says otherwise is just confused. So far, so good.

Altruism seems less obvious. I mean, sure, it's arbitrary for me to privilege my welfare over yours. I'm aware that, viewed objectively, I'm no more special than you are -- we're each pretty much the same type of entity, and so it seems wrong, at some level, for me to do something that helps me only a little bit and hurts you a lot.

And yet...when I say that it seems wrong to, e.g., smack you upside the head and steal your home-made cookie and eat it myself, I mean that I predict that I will feel slightly guilty afterward. I don't mean that I will earnestly regret it in the sense that, given an identical opportunity, I would take a different course of action. In fact, I know from experience that I do selfish things, feel guilty about them, apologize, meditate on the 'wrongness' of those actions, and then act pretty much the same way the next time an opportunity to be selfish comes along.

I don't mean to glorify selfishness -- I'm not a Randian Objectivist or anything like that. All else being equal, I think it'd be pretty neat if I suddenly woke up in a self-reflective altruistic equilibrium where I cared a whole hell of a lot about others and I liked myself that way. Right now, though, I don't care very much about others, except for a few people who happen to be close friends or relatives, and, for the most part, I like myself *this* way. I might aspire to a bit of slowly increasing altruism around the edges, and I might feel intermittently guilty that I'm not doing more to help (insects, the third world, the distant future, etc.), but it's unlikely to motivate any big changes in my lifestyle. I listen patiently to my friends' sob stories, donate 5% of my income to Deworm the World, don't cheat at tennis, and call it a day.

This doesn't seem very thoughtful. It feels like an equilibrium that I just happened to land in, without consciously choosing it. For whatever reason, *that* bothers me more than the fact that I'm allowing hundreds of people to die for the sake of my movies and hamburgers and over-sized apartment.

So, at long last, my questions: did you decide how altruistic you wanted to be? When? How did you arrive at your decision? What factors did you consider, and what factors do you think I should consider? Are you comfortable with your decision? Proud of your decision? How will you know if you made an appropriate decision? How did you decide when to stop pondering the question and carry on with the rest of your life?

Note that I am not much interested in stories about how learning new facts about the world made you realize that upholding your pre-existing moral code required a change in your lifestyle or habits. Nor am I much interested in stories about how your attachment to your pre-existing lifestyle made you re-engineer your moral code to match your habits. Rather, I want to know how you went about (or how one might go about) making a conscious decision as to what relative weights to assign to your utility, your acquaintances' utility, and your close friends' utility. What sort of data would count as good evidence that a weighting scheme was appropriate? What sort of data might suggest a potentially useful weighting scheme to try out for a while?

Thanks in advance to everyone who read this whole thing. Bonus thanks if you offer a thoughtful comment. :-)

Argothair
 
Posts: 7
Joined: Tue Nov 01, 2011 7:26 am

Re: Deciding How Altruistic to Be

Postby utilitymonster on 2011-11-01T17:18:00

I have thought about this, with views changing over time. My main conclusion is that it is possible to devote considerable resources to promoting the well-being of others without making any significant sacrifice in terms of personal life satisfaction. In fact, I derive considerable satisfaction in knowing that my efforts will make a large positive impact on the world, and I enjoy altruistic activities more than most other activities. The enjoyment is best achieved by working with others toward altruistic aims.

I've been giving about 25% of my income away for about a year now, putting it in a donor-advised fund. My intention is to give 50% in the future. I spend about 20 hours a week trying to do as much good as possible. I have noticed no decrease in quality of life since I started doing this, though my income has not increased during that time. I have had a considerable increase in life satisfaction following this decision, and have created many friendships which I value greatly.

Things that are important to me, where I'm not willing to be optimally altruistic: saving for my retirement, eating out with friends, changing my career to something I would not enjoy, breaking off relationships with people I care about, attending friends' weddings, visiting my family during holiday seasons. Areas where I am willing to sacrifice: consuming lots of alcohol at bars, having a nicer car, eating at expensive restaurants, having an expensive apartment/house, having expensive vacations, eating high-suffering per kg animal products, espousing weird views in public, working on lower-prestige but more important academic projects, choosing not to have children.

Anyway, I basically think that I don't have to make much (if any net) sacrifice here, and it is a huge win altruistically speaking. I think I once thought that this would involve considerable sacrifice, but it was only because of commitment/consistency effects and hyperbolic discounting. I believe these things would be true of many people who might consider making similar choices.

utilitymonster
 
Posts: 54
Joined: Wed Jan 20, 2010 12:57 am

Re: Deciding How Altruistic to Be

Postby DanielLC on 2011-11-01T19:11:00

I decided to be an all-out altruist, at least to the extent that I can overcome my akrasia. For as long as I can remember, I've known this was the choice that made the most logical sense, and after reading a list of paradoxes and fallacies on Wikipedia, and internalizing that logic always wins, I decided that if it makes the most logical sense, it's what I'll do. It doesn't matter if I don't want to.

I am comfortable with my decision. I'm not proud of my decision, in the same way that I'm not proud of my belief that the sky is blue. It's obvious. There's nothing to be proud of. I know I made the appropriate decision because there's clearly no reason to value myself over others. Even if there were, I can only do things for future iterations of me, who are not the same person. I am certain enough about my answer that I don't see any need to continue pondering the question.
Consequentialism: The belief that doing the right thing makes the world a better place.

DanielLC
 
Posts: 703
Joined: Fri Oct 10, 2008 4:29 pm

Re: Deciding How Altruistic to Be

Postby yboris on 2011-11-02T05:29:00

I may be the least qualified to speak of my altruism, as it may have only just begun. I've been a member of Giving What We Can for almost 2 years, and so have been giving 10% of my pre-tax income to the best charity I could find (GiveWell's top recommendations, mostly). I quit my Ph.D. program in May 2011, and once I start a full-time job I plan to start giving 50%. Once my loans are paid off, I hope to give everything I earn above some amount (maybe $20,000; we'll see how low I can go).

I seem to have been a utilitarian for a long time without any specific starting point; my beliefs crystallized when I read Peter Singer's "Famine, Affluence, and Morality". I'm pretty convinced that his argument should go all the way - give until you become worse off than others.

There are countervailing considerations: I'd like to encourage others to give, and if I appear to never spend on things, I may be (wrongfully) deemed un-fun and thus not worth emulating even in part. There are also considerations about what to endorse with my money: currently I am focusing on foreign-aid charities; in the future I may do something else.

I'm very happy to do what I do now and especially happy about where I'm heading (giving more). I'm not sure I'll ever stop pondering the question (about the appropriateness of my decision), though my re-evaluation of the decision will surely come up less and less frequently as I age. I'm now 26 and have been an altruist only a few years.
yboris
 
Posts: 96
Joined: Mon May 30, 2011 4:13 am
Location: Morganville, NJ

Re: Deciding How Altruistic to Be

Postby Hedonic Treader on 2011-11-02T15:01:00

yboris wrote:I'd like to encourage others to give, and if I appear to never spend on things I may be (wrongfully) deemed un-fun and thus not worth emulating even in part.

I think there's another aspect to it: If you buy things that create hedons in your own life, and their production+use doesn't consume overly much energy and limited resources, then you're benefitting yourself and creating jobs at the same time. Furthermore, this has to be addressed as well.

I'm currently donating toward a fixed donation goal for New Harvest, but after that, I plan to pre-finance my own future life in order not to rely on the welfare state. I think the concept of giving so much that you later need to ask for help is problematic, especially if charities and/or the welfare state operate inefficiently.

One other constraint on my altruism is that I would not accept a net-negative utility life for myself, even if it benefitted others. Whenever I have the feeling that my own life isn't worth living, I address that first instead. For instance, I wouldn't accept a job if I dreaded the average work week, even if it paid well.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Deciding How Altruistic to Be

Postby Pat on 2011-11-02T21:57:00

I agree with utilitymonster that it's possible to do a lot of good without sacrificing much. Additional income beyond a certain level, which has long been surpassed in developed countries, does little to make people happier. For me, having a purpose in life, being part of this community, and the satisfaction of (perhaps) making the world a better place probably make up for having less stuff than I otherwise would.
Argothair wrote:I listen patiently to my friends' sob stories, donate 5% of my income to Deworm the World, don't cheat at tennis, and call it a day.

Without knowing the details of your circumstances, I'd say that you could probably donate a bit more than that without any corresponding decrease in your well-being. The five percent of your income that you give is equal to about two years' GDP growth, so it's as though you're enjoying the standard of living you would have enjoyed two years ago. Given that wealth has increased dramatically over the past 50 years while self-reported happiness has stayed roughly flat, I'd say the decision to give away most of your discretionary income instead of spending it on yourself is a no-brainer. What organization to give it to, however, is a different story…
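That "two years' GDP growth" figure is easy to sanity-check. The sketch below assumes an annual real growth rate of about 2.5 percent; the rate is my own round-number assumption for illustration, not a figure from the thread:

```python
# Hypothetical check: at an assumed ~2.5% annual growth rate, how much
# richer is the economy after two years of compounding?
growth_rate = 0.025
cumulative = (1 + growth_rate) ** 2 - 1
print(round(cumulative, 4))  # 0.0506 -- roughly the 5% being donated
```

So giving 5% away does put you at about the standard of living of two years earlier, under that assumed rate.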
Hedonic Treader wrote:I'm currenty donating a fixed donation goal to New Harvest, but after that, I plan to pre-finance my own future life in order to not rely on the welfare state. I think the concept of giving so much that you later need to ask for help is problematic, especially if charities and/or the welfare state operate inefficiently.

Wouldn't the inefficiency of government be a reason to take whatever money you can get from it? If you give money to New Harvest instead of donating it to the government, it seems that you believe that New Harvest makes better use of the money than the government does. So if the government sent you a check, you should redirect it to New Harvest, not send it back to the government. Right?

Pat
 
Posts: 111
Joined: Sun Jan 16, 2011 10:12 pm
Location: Bethel, Alaska

Re: Deciding How Altruistic to Be

Postby DanielLC on 2011-11-02T22:50:00

If you buy things that create hedons in your own life, and their production+use doesn't consume overly much energy and limited resources, then you're benefitting yourself and creating jobs at the same time.


If you donate to charity, you help the people they help, and create jobs at the same time. If anything, the jobs this way are more important, as they all go to poor countries. Here there's a bunch of stuff that just gives jobs to Americans.

In any case, there isn't really a finite pool of jobs. In the 1800s (or was it 1900s? I forgot) the agriculture industry made up around 50% of the jobs. Now it's less than 1%. This did not translate into 49% unemployment.

Furthermore, this has to be addressed as well.


This is what happens if you don't even try to get a good charity. If you try to find the best, it will be helpful.


Re: Deciding How Altruistic to Be

Postby Argothair on 2011-11-05T08:17:00

Interesting answers, thank you! As might be expected on a site about utilitarianism, most of you responded with some kind of cost-benefit calculation about how altruistic to be -- utilitymonster (and Pat) write that one can increase altruism without suffering any personal distress, yboris is careful to limit personal altruism at the point where it would appear so un-fun as to discourage others' altruism, and Hedonic Treader wants to increase personal consumption without decreasing altruism by buying goods that create jobs without destroying the environment.

I agree that there are 'Pareto-improving' trades to be found between selfish pleasures and altruistic pleasures. There are actions I could take that would increase both my happiness and others' happiness -- it is not always a trade-off.

Still, there comes a point at which people *do* face a trade-off. At some point, you could give more, but it would hurt you. At some point, you could spend more, but it would harm others. How does one analyze those trade-offs when one *is* forced to choose? DanielLC comes closest to answering this question by saying that altruism is logical, and logic always wins. While interesting, this answer was unsatisfying for me, because I am not sure that logic is relevant here. If I take the position that in the Monty Hall problem, the player can maximize her financial rewards by -not- changing doors, you can use logic to show that I am wrong. You can point to the step in my reasoning that is inaccurate. But if I instead take the position that I will be happiest if I strongly favor my own pleasures relative to those of arbitrary strangers, how can you use logic to attack my position? Perhaps I really do have the kind of psychological makeup that obtains maximum happiness by focusing on selfish pleasures. You might argue that I should strive to change such a makeup -- but that would be a moral argument, not a logical one. It has been a long time since I heard a convincing moral argument.
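For the record, the Monty Hall half of that comparison really is settled by logic, or failing that by brute force; a minimal simulation, as one might sketch it in Python:

```python
import random

def monty_hall(trials=100_000):
    """Simulate the Monty Hall game; return (stay win rate, switch win rate)."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the prize
        pick = random.randrange(3)  # player's initial choice
        # The host opens a door that is neither the player's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        # Switching means taking the one remaining closed door.
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    return stay_wins / trials, switch_wins / trials

stay, switch = monty_hall()
print(stay, switch)  # stay converges on 1/3, switch on 2/3
```

The point of the contrast stands: no comparable simulation can refute "I will be happiest favoring my own pleasures," because that is a claim about one person's psychology, not about probabilities.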


Re: Deciding How Altruistic to Be

Postby RyanCarey on 2011-11-05T09:53:00

Hi Argothair, It's an interesting question. As you say, I think DanielLC comes closest to answering in the spirit of your question...

I think selfishness is one type of weakness of will (or akrasia). Weakness of will is the inability to do what you want to do. As a utilitarian, I want maximal happiness. So my weakness of will is the gap between what I do, and what I could do that would maximise utility. Every time I'm tempted towards an ice cream, every time I procrastinate, it's an instance of weakness of will. Unless my every neuron is wired in perfect alignment with utilitarianism, weakness of will is something that will remain with me. I think DanielLC's failure to acknowledge this leaves his answer incomplete.

Obviously, we have to come to terms with this. We're not perfect humans and that's ok. This is probably easier in scalar utilitarianism rather than maximising utilitarianism. In scalar utilitarianism, acts are only better or worse, never right or wrong. But there's more to it.

I've always said that utilitarianism simplifies ethics into two steps:
1. Decide what is valuable
2. Maximise that utility.

On this view, once we decide what's important, all that remains is mere logistics. But let's introduce selfishness into this picture. This will complicate things:
1. Decide what is valuable
2a. Try to maximise utility
2b. In the end, you'll increase others' happiness somewhat, but you'll increase your own happiness more

Or:
1. "
2a. Try to maximise utility
2b. Maximise utility as much as you can given your limitations

Perhaps we can do better:
1. "
2. We are only so altruistic. So let's treat our altruism as a resource that we should 'spend' as cost-effectively as possible.

Here's an example of this type of thinking: Organ donation might be beneficial. But it will surely cause us to suffer a lot. Surely we can suffer less for more benefit by donating a thousand dollars to charity. Surely that would be a better way to spend our altruism.
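This "spend your altruism cost-effectively" idea can be sketched as a ranking by benefit per unit of willpower spent; the options and all the numbers below are hypothetical, chosen only to mirror the organ-donation example:

```python
# Hypothetical options: (action, altruistic benefit in arbitrary utils,
# willpower/suffering cost in arbitrary units)
options = [
    ("donate an organ", 50.0, 40.0),
    ("donate $1000 to charity", 80.0, 5.0),
]

# If altruism is a scarce resource, spend it where each unit buys the most:
# rank the options by benefit per unit of willpower.
ranked = sorted(options, key=lambda o: o[1] / o[2], reverse=True)
print(ranked[0][0])  # the $1000 donation: 16 utils/unit versus 1.25
```

On these made-up numbers, the donation dominates; whether willpower really works like a spendable budget is exactly the question raised below.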

This view supposes that altruism is a limited resource. Or at least being altruistic once makes it more difficult to be altruistic again later. This is unclear. Perhaps the exact opposite is true: perhaps altruism takes practice. But this seems doubtful. If I can bring myself to donate an organ but I cannot bring myself to donate a thousand dollars to charity, surely it does not matter which "spends" my altruism more effectively. Surely I should just do whatever I can to maximise wellbeing.

You can substitute selfishness for weakness of will and the discussion runs a similar course. Anyway, what do you all think of my approach to the topic?
You can read my personal blog here: CareyRyan.com
RyanCarey
 
Posts: 682
Joined: Sun Oct 05, 2008 1:01 am
Location: Melbourne, Australia

Re: Deciding How Altruistic to Be

Postby DanielLC on 2011-11-05T18:14:00

But if I instead take the position that I will be happiest if I strongly favor my own pleasures relative to those of arbitrary strangers, how can you use logic to attack my position?


Easy. There's nothing you can do to help yourself, as all actions pay off in the future, and you are in the present.

If you decide to value all people who are connected to you by a strong worldline (you can connect yourself to anyone with some path of universes, but they largely will have to be crazy low probability), it's a bit harder.

First, I don't see why my happiness is different from another person's in any important way. Second, everyone's intuitions are about equally strong evidence. If my intuition is that it's good for me to be happy, and yours is that it's good for you to be happy, there's no clear reason to pick one over the other.

You can substitute selfishness for weakness of will and the discussion runs a similar course


They don't come out quite the same though. For example, I'd have more fun if I spent more time working and spent some money on myself, but it takes me significantly less willpower to refrain from buying stuff than it does to work.


Re: Deciding How Altruistic to Be

Postby Hedonic Treader on 2011-11-06T02:07:00

DanielLC wrote:Easy. There's nothing you can do to help yourself, as all actions pay off in the future, and you are in the present.

I think that argument is essentially correct and spot-on, but our daily intuitions and emotions don't usually align with it easily. You can also arbitrarily decide that you mostly want to care about the wellbeing of the total set of your future copies that have a causal origin in the current you and are still similar enough to you. It's a messy, inelegant patch for intuitive egoists, but I don't see an easy logical attack against it.

Also, not everyone shows solidarity with their own future selves either. :)

If you donate to charity, you help the people they help, and create jobs at the same time. If anything, the jobs this way are more important, as they all go to poor countries.

Yes, of course. If the charity is efficient enough in creating utils for resource use, and more so than your personal consumption, this argument holds. For instance, if you donate to New Harvest, you create jobs for researchers, their suppliers, university staff etc.

I'd have more fun if I spent more time working and spent some money on myself, but it takes me significantly less willpower to refrain from buying stuff than it does to work.

Pat wrote:Additional income beyond a certain level, which has long been surpassed in developed countries, does little to make people happier.

The crux here is that you can trade income for time, rather than for consumer goods. There is a huge subjective quality-of-life difference between working 6, 5, or 4 days a week, unless you're a workaholic -- which I am not. For me, a 5-day week is on average not worth experiencing, and a 6-day week carries strong disutility. I currently feel that I'd rather opt out of existence than work 5 or more days per week in the long run, even though I'm probably a statistical outlier there.

Wouldn't the inefficiency of government be a reason to take whatever money you can get from it? If you give money to New Harvest instead of donating it to the government, it seems that you believe that New Harvest makes better use of the money than the government does. So if the government sent you a check, you should redirect it to New Harvest, not send it back to the government. Right?

Yes, I was more thinking of the bureaucratic overhead and watering down of incentives. It makes intuitive sense to me that people should care for their own baseline financial security first. If the welfare state is stable and general government spending is very inefficient (from the POV of utilitarianism), then your point is correct.

Re: Deciding How Altruistic to Be

Postby utilitymonster on 2011-11-06T18:54:00

Don't know if you saw this Argothair, but I gave you a list of where I draw the line:

Things that are important to me, where I'm not willing to be optimally altruistic: saving for my retirement, eating out with friends, changing my career to something I would not enjoy, breaking off relationships with people I care about, attending friends' weddings, visiting my family during holiday seasons. Areas where I am willing to sacrifice: consuming lots of alcohol at bars, having a nicer car, eating at expensive restaurants, having an expensive apartment/house, having expensive vacations, eating high-suffering per kg animal products, espousing weird views in public, working on lower-prestige but more important academic projects, choosing not to have children.


Maybe you were looking for some kind of general theory of how altruistic to be, rather than concrete examples. I don't really have one of those. As a rule of thumb, I might be following the rule:
    if being altruistic in some domain only requires losing 10% of the personal benefits, and the altruistic benefits are more than 100x the personal benefits, then follow the altruistic policy. If that's not true, then use some intuitive procedure.
Not very principled, but it seems to be working out pretty well for me.
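That rule of thumb can be written down as a small decision procedure; the 10% and 100x thresholds are the ones from the post, while the function name and example numbers are my own invention:

```python
def altruistic_policy_applies(personal_loss_frac, benefit_ratio,
                              loss_cap=0.10, ratio_floor=100.0):
    """utilitymonster's rule of thumb: follow the altruistic policy in a
    domain when it costs at most 10% of the personal benefits and the
    altruistic benefits exceed 100x the personal benefits; otherwise the
    rule is silent and you fall back to intuition (returned as None)."""
    if personal_loss_frac <= loss_cap and benefit_ratio > ratio_floor:
        return True
    return None  # rule doesn't apply: use some intuitive procedure

print(altruistic_policy_applies(0.05, 500))  # True -- cheap and high-impact
print(altruistic_policy_applies(0.30, 500))  # None -- too costly personally
```

Writing it out this way makes Argothair's follow-up question vivid: the thresholds are free parameters, and nothing in the rule itself says why 10% and 100x rather than 20% and 50x.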


Re: Deciding How Altruistic to Be

Postby Argothair on 2011-11-07T02:20:00

We are only so altruistic. So let's treat our altruism as a resource that we should 'spend' as cost-effectively as possible.


That's an interesting idea. It captures the sense I have of myself as a mental Parliament, with an altruistic superego representing a seriously outnumbered minority party, and a majority coalition with a variety of other parties that will only occasionally take input from the opposition. If you only have a few seats in Parliament, you have to be careful to propose laws that have big, reliable effects and small costs.

On the other hand, it still begs the question of why "I" (meaning the part of me that acts like the Speaker of the House and sets the agenda and sorts out all the incoming traffic of proposals for what to do next) should be rooting for the altruist faction in the first place, rather than just going with the flow and enabling my (internal) median voter.

Maybe you were looking for some kind of general theory of how altruistic to be, rather than concrete examples. I don't really have one of those. As a rule of thumb, I might be following the rule:
if being altruistic in some domain only requires losing 10% of the personal benefits, and the altruistic benefits are more than 100x the personal benefits, then follow the altruistic policy. If that's not true, then use some intuitive procedure.


I liked your concrete examples, and your inductive rule here is perfectly sensible. What fascinates me, though, at both the concrete and the abstract level, is: why? Why give up meals but not vacations? Why give up 10% of the personal benefits but not 20% of the personal benefits? Why refuse to make a sacrifice at 50x the personal benefits but allow a sacrifice at 100x the personal benefits? How do you pick those numbers over some other set of numbers? Assuming you picked them more or less at random and then found that they worked for you, how do you justify them? What is it about your life that moves you to think that you've got a decent set of numbers, rather than moving you to experiment with other numbers?

Easy. There's nothing you can do to help yourself, as all actions pay off in the future, and you are in the present.


I've read some process philosophy, and I'd be happy to agree with you on a literal level, but ultimately I don't think the distinction holds much water. Because of operant conditioning, my present self assigns a large value to the welfare of my future selves. Because of sympathy and mirror neurons, my present self assigns a small but positive value to the welfare of others' future selves. There are different neurological mechanisms at play here, and there's no grounds for equating my future selves with others' future selves simply because neither category is ontologically identical with my present self.

¬((A ≠ B ∧ A ≠ C) → B = C).

I think that argument is essentially correct and spot-on, but our daily intuitions and emotions don't usually align with it easily. You can also arbitrarily decide that you mostly want to care about the wellbeing of the total set of your future copies that have a causal origin in the current you and are still similar enough to you.


That also sounds persuasive.


Re: Deciding How Altruistic to Be

Postby Pat on 2011-11-27T05:23:00

If you're looking for an argument that impels everybody who hears it to become totally altruistic, you probably won't find it. There aren't any moral arguments "that will stop them in their tracks when they come to take you away." But there are arguments less grand, such as those in Living High and Letting Die, by Peter Unger, and The Limits of Morality, by Shelly Kagan. It took me a few tries to get through Unger's book. The ideas aren't that hard, but his writing is so rambling that I'm afraid the casual reader will be deterred. And a hundred pages into Kagan's book I was totally lost. (That was a couple of years ago—maybe I'll try again.)

The main argument of these books (I think) is that moderately demanding accounts of morality are inconsistent. It's sort of consistent to value only yourself (if you believe that you are a unified self that continues over time), and it's consistent to value everybody equally. But wishy-washy commonsense morality is inconsistent. For example, valuing somebody's interests above your own when she's nearby, but not when she's far away, entails a contradiction unless you think that physical distance is morally relevant. Your mirror neurons might be turned on when she's close and turned off when she's far, but this isn't relevant unless your account of morality depends on what your mirror neurons are doing. Certainly those neurons are important, but what about the neurons in your frontal lobe (or wherever) that let you do abstract reasoning? And what about the neurons in the head of the person who's suffering? I don't understand how this neuron-talk helps us decide how altruistic to be.


Re: Deciding How Altruistic to Be

Postby DanielLC on 2011-11-27T06:56:00

If you're looking for an argument that impels everybody who hears it ...


That reminds me of the article No Universally Compelling Arguments.


Re: Deciding How Altruistic to Be

Postby Hedonic Treader on 2011-11-27T14:00:00

Pat wrote:The main argument of these books (I think) is that moderately demanding accounts of morality are inconsistent. It's sort of consistent to value only yourself (if you believe that you are a unified self that continues over time), and it's consistent to value everybody equally. But wishy-washy commonsense morality is inconsistent.

I think it's a strategic mistake to expect humans to be consistent. It's a bad model of human motivational psychology to predict them to be consistent. And it's a counterproductive communication strategy to punish them for not being consistent. The result may well be aversive emotional reactions. Peter Singer recognizes this when he suggests people commit to a fairly low standard of altruism, to get them to actually follow through on their commitment. Of course, xkcd knows it, too.

For example, valuing a somebody's interests above your own when she's nearby, but not when she's far away, entails a contradiction unless you think that physical distance is morally relevant.

It could be rational in the same sense in which future discounting can be rational, if distance correlates inversely with the predictability of outcomes and the effectiveness of interventions. In our increasingly globalized world, this may no longer be as true as it used to be.

Re: Deciding How Altruistic to Be

Postby Brian Tomasik on 2011-11-27T16:30:00

Welcome Argothair.

You mentioned already that there can be many instances where selfishness doesn't conflict (too much) with altruism. I guess the obvious thought is to focus on those areas first. This blog post of mine gives some illustrations where structural changes that just require a bit of thought can outweigh lots of minor willpower dilemmas. Although "You get what you pay for" and "There's no free lunch" are generally valid slogans in efficient markets, there's no general rule like these for the tradeoff between good accomplished and willpower expended. Sometimes easy things can accomplish huge amounts of good.

I find that one of the most effortless ways to modify akrasia dynamics is to change my social environment. Being around people who care about frugality and donating toward reducing animal suffering can make it much easier to feel the same way; indeed, it might be hard not to feel this way. Of course, it can be tricky to find groups of people like this, but keep an eye out for opportunities to become more connected with them. (That's not quite a shameless plug for this site, but I suppose it's pretty close. :))

Best of luck with your altruism!
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: Deciding How Altruistic to Be

Postby Brian Tomasik on 2011-11-27T16:56:00

Argothair wrote:and I might feel intermittently guilty that I'm not doing more to help (insects, the third world, the distant future, etc.)

Since when have you cared about insects? :) Few people do, so it's great that they made it onto your list!

Argothair wrote:for the sake of my movies and hamburgers

Better hamburgers than chicken wings or fish sticks. ;)

Re: Deciding How Altruistic to Be

Postby Pat on 2011-11-28T00:48:00

Hedonic Treader wrote:I think it's a strategic mistake to expect humans to be consistent. It's a bad model of human motivational psychology to predict them to be consistent.

I agree that it's bad PR to tell people that they're bad unless they're 100% altruistic. Total altruism isn't a useful or reasonable standard to hold ourselves to; it's not even clear what it would mean. For example, it probably wouldn't require finding a cure for cancer. But what about working 100-hour weeks? That may be just as unachievable for some people. It's not helpful to make yourself feel guilty if you don't do these things. But altruism is still an ideal that we can use to create concrete and achievable rules and goals for our everyday lives.

Hedonic Treader wrote:It could be rational in the same sense in which future discounting can be rational, if distance correlates inversely with predictability of outcomes and effectiveness of interventions.

I should have added "other things being equal." I was thinking of a principle such as "moral responsibility decreases at a rate proportional to the distance between the moral agent and whoever needs help."


Re: Deciding How Altruistic to Be

Postby DanielLC on 2011-11-28T01:40:00

It could be rational in the same sense in which future discounting can be rational, if distance correlates inversely with predictability of outcomes and effectiveness of interventions.


But you already take that into account when you work out expected utility. If you then also discount by distance, you're counting it twice.


