Here are some of my ideas about meta-ethics for utilitarianism – I appreciate any feedback or arguments against any of the positions I have taken!
Utilitarianism is, for me, first and foremost a set of personal goals. These goals can be summarized as one goal: to maximize welfare (for prior-existent people/animals). What do I mean by goals? I have utilitarian goals if I desire that welfare be maximized and also decide to increase welfare as much as I can. In other words, such goals have two components: First, I value a specific state of affairs more than another (more welfare over less welfare). Second, I make at least some decisions based on that value – this involves first deliberating about which actions are most likely to maximize welfare, and then choosing actions based on that deliberation. I am drawn to these goals mainly for emotional reasons such as empathy, but also because of my emotional and psychological reactions to the logical implications of utilitarianism.
I have these goals whether or not anyone else has them, or even has any reason to have them; morality, for me, is essentially individual. This doesn’t mean that I don’t want to share these goals with others, both as a way to achieve them and because I simply enjoy sharing my ideas and values. But it does mean that what morality is to other people can be completely different from what morality is to me, and that is not a problem.
Utilitarianism and Morality
Is this form of utilitarianism a system of morality? That depends on the definition of “system of morality.” I’m not a moral realist – I don’t feel that the values underlying my utilitarian goals are in any sense more “correct” than other values, and I don’t believe that anything is inherently valuable independent of anyone valuing it. Nor do I take a cognitivist approach – I don’t think my values can be true or false. A statement about my values can be true or false, but a value isn’t a proposition or a belief about the world – it is ultimately based on a desire. I wouldn’t even say that utilitarian goals are things which rational, fully informed people must have. Logical argument and evidence can help convince someone to be a utilitarian, but logic alone is not enough to make someone adopt utilitarian values or goals.
Replacement Morality
Yet there are a number of important senses in which my utilitarianism can be called an ethical system. First, utilitarianism replaces my existing internal system of morality (a system put there by biology, social influence, etc.). This means that (a) my adoption of utilitarian goals makes me feel that I do not need to heed certain aspects of my prior moral system, and (b) utilitarianism takes over the mechanisms of that prior system to some degree (e.g. moral emotions such as guilt and pride). This replacement is rather incomplete – I can’t easily change my ingrained attitudes and habits of thinking – but in many ways it is quite robust.
For example, before I had any utilitarian goals I might have felt guilty if I was asked to donate to a random charity and did not, and pride if I did donate. I still have those feelings a little in this situation, but they are lessened because I know I should only give my money to the most effective charities. Instead, I might now feel more guilt than I did before when I buy something I don’t need, knowing that the money could have helped children in extreme poverty.
Utilitarian goals also create moral rules that function in the same way as the moral rules I had before. Sometimes I might use utilitarianism to guide my decision-making without necessarily feeling compelled by the ultimate goal of maximizing happiness. Yet I have a psychological desire to “do the right thing,” and I have decided that the moral rules I follow should be determined by which ones maximize happiness.
I don’t think utilitarianism should take over every mechanism of the prior morality. For example, if utilitarianism means you feel guilty every time you buy something you don’t need, it is not going to be an easy morality to stick to. Or we might retain existing moral feelings about the wrongness of killing in cold blood, while revising our moral feelings about the difference between killing and letting die in, say, the case of euthanasia.
Finally, much of the existing morality can and should stay intact, first because much of it overlaps with utilitarianism, and second because I may have utilitarian reasons to preserve some of my existing moral beliefs.
Quasi-realism
The second sense in which my utilitarianism can be called an ethical system is that it replaces my existing “public” system of morality, meaning that I make assertions about normative or applied ethics based on utilitarian standards instead of the moral standards I had before. (Of course discretion is important here – in some cases we don’t need to argue for utilitarianism to make an argument for what we value. For example see many of Peter Singer’s arguments here.)
I find quasi-realism to be an interesting theory that applies here. Here is a brief explanation of this point of view:
“The quasi-realist says things which sound like what the realist says, but they are to be interpreted differently – in the moral case, as expressions of attitudes, rather than as committing to moral properties. Expressing an attitude requires neither belief in moral properties (realism) or pretense that moral properties exist (making believe that realism is true)… the quasi-realist differs from the realist in adopting a different account of the meaning [of sentences about morality] while continuing to accept those sentences (different content, same – or at least similar – attitude).” (Source)
Basically, what quasi-realism means to me is this: we can speak about morality as if we were talking about propositions that can be true or false, even though we are really talking about our own values, which can be neither true nor false. In this sense, we are stating our values, exploring the logical implications of those values, explaining why these implications lead us to adopt the moral views we do, and trying to convince others to adopt these views on the basis of those implications. So, for example, if we agree that killing one to save five is the morally correct thing to do in the trolley problem, then, barring other considerations, the principle of killing one to save five must be the morally correct principle to use in other situations. You can also find a couple of other short explanations of quasi-realism here and here.
I wrote a separate post in the meta-ethics thread on why I am not a realist, error theorist, or emotivist.