I had a few questions on Will's paper. I've only skimmed ~1/3 of it at various parts, so I apologize if these have been answered already.
Could it be actually wrong to be beneficent to drowning children if we took Ayn Rand seriously? (Note: I do not.) What probability should we assign to Objectivism? How likely is it that Rev. John Furniss was correct that child sinners should be tortured for eternity?
You are going to see again the child about which you read in the Terrible Judgement, that it was condemned to hell. See! It is a pitiful sight. The little child is in this red hot oven. Hear how it screams to come out. See how it turns and twists itself about in the fire. It beats its head against the roof of the oven. It stamps its little feet on the floor of the oven. You can see on the face of this little child what you see on the faces of all in hell - despair, desperate and horrible! (source)
More generally, how do we go about deciding these probabilities? It can't be based just on our intuitions, because the paper argues that we're biased to be overconfident in our own probability estimates. Rather, we need to use a modesty argument. But how do we know how much modesty to use and with whom? If we counted all humans alive today equally, then Catholics would get unduly high weight because they happen to use less birth control. And it seems that recently deceased people should count too. But how far back do we go? Back to the Pleistocene?
What about likely future people? What about animals with rudimentary moral views, or even just implicit moral views based on their desire not to suffer? What about alien civilizations? Paperclippers? Pebble sorters? Suffering-maximizing minds? What probability should we assign that sadists are a small group of truly enlightened thinkers who see past the stupidity of altruism?
These aren't necessarily refutations of the expected choice-worthiness framework. I'm just genuinely curious how these issues would be resolved. Several of these questions also arise in the epistemic modesty argument, although in that case the framework seems clearer to me: other people's beliefs are just evidence, no different from the result of a blood test. You adjust your hypotheses based on which ones would make it more likely that you see other people believing what they do. What's the corresponding overarching framework for adjudicating moral probabilities? Is it the same idea?
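To make that concrete (this is just my own gloss on the updating rule, not anything from Will's paper): if H is some moral hypothesis and B is the observation that another person believes it, the update I have in mind is ordinary Bayesian conditioning:

$$P(H \mid B) = \frac{P(B \mid H)\,P(H)}{P(B \mid H)\,P(H) + P(B \mid \neg H)\,P(\neg H)}$$

If H being true would make the person's belief more likely than H being false would, the observation raises my credence in H; otherwise it lowers it. The hard part, as above, is deciding whose beliefs get to count as observations in the first place.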
----
Further comments from a Facebook discussion, 31 May 2013.
As for the point about moral uncertainty, the most accurate way to explain my brain's reaction is the following (similar to what I said above).
Yes, it might be that I'm mistaken about introspection and what I would come to believe upon thinking and learning more.
There are many cases where I am genuinely curious to explore other ideas and give them some weight. For example:
* What types of computations are conscious?
* Should the badness of suffering depend on brain size?
* Can any amount of happiness outweigh a day in hell?
However, there are many other cases where I'm not interested in giving weight to other views, and in fact, if my future self changed his mind on these matters, I would regard that as a failure of goal preservation rather than a triumph of enlightenment. For example:
* Safe homosexuality, masturbation, and incest are wrong.
* Organisms now matter more than organisms later.
* It's good to torture kids for eternity when they don't obey religious rules. (viewtopic.php?t=614#p5516)
Each of these beliefs is held by huge numbers (billions) of people worldwide. Maybe support for the last one is only in the tens/hundreds of millions if you go by people's actual feelings rather than stated dogma.
My main reason for rejecting these seemingly absurd beliefs is overconfidence and the feeling that "I just don't care about being uncertain on these things." That said, it could also be rational in some sense to ignore these possibilities. Entertaining alternate viewpoints carries some risk of adopting them contrary to one's present wishes, because the mind is leaky and hard to control. When you're very sure you don't want to change your mind on something, it makes sense not to change your mind on it.
It's really that simple. If the feeling that you don't want to revise your opinion is stronger than your feeling that you should listen to what a changed version of you would feel, then you don't have to revise your opinion.
The above argument applies to non-realist "failure of introspection" arguments. For the realism argument, the claim is that ignoring these possibilities means making an actual epistemic error rather than just picking how much you want to care about something. I guess my present stance is (a) mostly to say I still don't care enough and to ignore it even if that's epistemically irrational, but also (b) to give a tiny sliver of credence that I'm wrong about the logic of realism and what it implies, though this has little practical impact on my conclusions. If I get pinned into a situation where it seems like I need to revise my views, given the tiny probability that realism is true combined with the small update it would require me to make, I may either make that update or else say I (irrationally) don't care enough.
William: Do the philosophers of moral realism you cited claim (a) only that moral truths exist or also (b) that it's somehow _factually_ incorrect not to care about these moral truths? If it's just (a), then at least I can understand the claim, and I would simply choose not to care about moral truths. If it's (b), I can't understand what this even means, but because I do care a little bit about not being factually incorrect, I would care a little bit about the implications of realism, unless I chose to be irrational by rejecting those implications.