I was introduced to Utilitarianism and Jeremy Bentham via the great Harvard lecture series on Justice from Michael Sandel:
http://itunes.apple.com/us/itunes-u/jus ... d379064095
While I found all of the moral frameworks interesting, I remained (and remain) unconvinced that any of them is fully "correct". Recently I ran across this Consequentialism FAQ, in which Scott Alexander Siskind mentioned this forum:
http://www.raikoth.net/consequentialism.html#objections
I find his arguments quite convincing, with one caveat: section 7.5. This is also the main counterpoint Sandel raises in his lecture on Utilitarianism.
Setting aside everything else, like the Hippocratic Oath or problems with transplant success rates: if a doctor has five patients who will die without organ transplants, and one otherwise healthy person who has the needed organs, Consequentialism (/ Utilitarianism) seems to compel that doctor to kill the one person to save the other five people's lives.
Siskind's least "weaselly" answer to this is:
But those answers, although true, don't really address the philosophical question here, which is whether you can just go around killing people willy-nilly to save other people's lives. I think that one important consideration here is the heuristic-related one mentioned in 6.3 above: having a rule against killing people is useful, and what any more complicated rule gained in flexibility, it might lose in sacrosanct-ness, making it more likely that immoral people or an immoral government would consider murder to be an option (see David Friedman on Schelling points).
This, too, seems weaselly to me. It seems to say that if we allow this murder in this case, though it may be just, we cause less utility overall because the anti-murder heuristic is violated, making an unjust murder more likely in the future by some person or entity unbound by these rules.
This is, to me, wholly unconvincing. If we're truly out to maximize utility, then we must judge based on the (hypothetical) facts at hand, not on what choices others *might* make in the future. If those future choices are also justly decided using the same principles as the current one, then no principle has been violated.
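To make the disagreement concrete, here is a toy expected-utility calculation (every number in it is invented purely for illustration, not taken from the FAQ or the lecture). Siskind's heuristic argument only makes the transplant murder come out negative if the expected future cost of eroding the anti-murder norm outweighs the four net lives saved, which is exactly the premise I'm questioning:

```python
# Toy sketch of the transplant dilemma under two utilitarian tallies.
# All numbers are made up for illustration.

def act_utility(lives_saved, lives_taken):
    # Naive act-utilitarian tally: each life counts equally.
    return lives_saved - lives_taken

def rule_adjusted_utility(lives_saved, lives_taken,
                          p_norm_erosion, future_unjust_killings):
    # Siskind-style heuristic tally: breaking the anti-murder rule adds
    # an expected cost from future abuses it supposedly makes more likely.
    expected_future_cost = p_norm_erosion * future_unjust_killings
    return lives_saved - lives_taken - expected_future_cost

naive = act_utility(5, 1)                        # +4: kill the one
adjusted = rule_adjusted_utility(5, 1, 0.5, 10)  # -1: don't kill
print(naive, adjusted)
```

The whole argument therefore hangs on the assumed erosion probability and the count of future killings it enables, and neither is something we can observe in the hypothetical.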
I apologize if this has been re-hashed over and over and I missed the thread, or if I'm not explaining myself well. I'm no philosopher, just an ordinary guy trying to find answers.
Is there a more convincing answer to this?