Inspired by conversations with random moral realists and Will Crouch's latest paper...
Moral uncertainty seems mainly to apply to moral realists and certain types of subjectivists/non-cognitivists: on these views you can be wrong about your ethical beliefs, so you worry about the consequences of being wrong and try to take this uncertainty into account.
Some types of non-cognitivists (emotivists, probably?) don't have to care about this kind of stuff: on such views it seems pretty implausible that we can be wrong about our values at all. So if you assign high credence to a meta-ethical view of this sort, you should reject the argument for moral uncertainty.
But! Assigning absolute confidence to a meta-ethical theory of that sort seems wrong; you should have some uncertainty about it. And this leaves us with a problem: moral uncertainty starts to affect you after all. If you assign some credence to moral realism (hereafter shorthand for any meta-ethical theory vulnerable to moral uncertainty), then you're forced to take some account of moral uncertainty in your actions.
There are really two versions of this argument though - a strong one and a weak one. The strong version assumes that the only views moral uncertainty doesn't apply to are nihilistic ones - views on which there is no (moral) reason to do anything. Since nihilism gives you no reasons either way, it contributes nothing to your decisions, and whatever credence you have left in moral realism dominates. Therefore, even if you're pretty sure nihilism is true and pretty sure moral realism isn't, you should act as if moral realism were entirely true - and so take complete account of moral uncertainty.
The weak version allows for views which give us some sort of (moral) reason to act, yet are fairly nihilistic/non-realist. For example, I'm pretty sure that emotivism gives us some reason to do things, and that it even makes sense to talk about such reasons in a moral sense. I'm not sure how likely this view is, but if your confidence in such a theory is pretty high, and its reasons for action are as strong as realist ones, then the effect of moral realism (and so of moral uncertainty) is diluted. The exact dilution depends on the credence assigned to realism and the strength of the reasons for action - a rough sketch of the weighting is below.
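One toy way to make the dilution concrete (the parameters p and s are my own illustrative assumptions, not anything from Will's paper): let p be your credence in realism-like views, and let s be the relative strength of the reasons the quasi-nihilist view supplies (s = 0 for pure nihilism, s = 1 if its reasons are as strong as realist ones). Then the share of your deliberative weight that moral uncertainty gets is roughly:

```latex
% Toy weighting sketch (illustrative assumptions only):
% p = credence in realism-like views,
% s = relative strength of the quasi-nihilist view's reasons
%     (0 = pure nihilism, 1 = as strong as realist reasons).
W_{\text{realism}} \;=\; \frac{p \cdot 1}{p \cdot 1 + (1 - p)\, s}
```

With s = 0 this gives W = 1, which is just the strong version: even a small credence in realism dominates your decisions. With s = 1 and, say, p = 0.3, the weight falls to 0.3, so the pull of moral uncertainty is diluted in proportion to your credence in realism.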
I still don't know enough about meta-ethics obviously, and this is just a half-baked idea. I'd be happy for links to relevant literature.
Thoughts anyone?
(Will's quote explaining this)
I’ll assume a meta-ethical view compatible with the existence [of] moral uncertainty. As has been convincingly argued by Andrew Sepielli, this assumption only rules out a very small number of moral views. I’m also going to assume that nihilism is false: so all the probability judgments that I discuss are conditional on there existing positive moral facts.