The following is as much an effort to put my thoughts on the subject in order as it is an effort to persuade people to at least consider moral realism. I thank the very smart people here at Felicifia for causing my mind to turn to this subject. I'd not given much thought to this particular bit of meta-ethics until recently. So without further ado...
In Defence of Moral Realism
Moral Realism is defined by Wikipedia as of Saturday, February 8, 2014 as:
"A non-nihilist form of cognitivism and is a meta-ethical view in the tradition of Platonism. In summary, it claims:
- Ethical sentences express propositions.
- Some such propositions are true.
- Those propositions are made true by objective features of the world, independent of subjective opinion."
So how do we go about proving or disproving these claims?
To do this, I shall first establish some definitions.
What is Morality?
Some people argue that morality is simply a relative standard by which we judge things, as in End-Relational Theory. If true, then this makes morality inherently relative because different people can establish different standards and there is no basis for proving any particular standard as being more correct. I don't subscribe to this view.
Morality, in my view, is simply, and this is a loaded statement I know, "what is right".
What exactly do I mean by this? To say something is right is to imply that it is the correct fact, world state, or course of action leading to a world state, given all relevant information. For instance, we can say that "1 + 1 = 2" is right because it correctly represents a mathematical relationship. We cannot say that "1 + 1 = 2" is good, however. Goodness is a different property from rightness. Rightness simply says that, given all the facts, this is correct.
Rightness is not the same thing as rationality. Rationality has to do with finding the best way to achieve one's values and goals. It is quite possible, then, for rational activity to be immoral.
Rightness is simply the property of being true. If morality is this, it essentially makes claims 1 and 2 correct by definition.
Morality as Truth
Morality thus is not a subjective standard we apply because we desire it. Rather, morality is a set of prescriptions based on descriptions of reality. It is a set of normative truths that we can infer through a combination of perception, logic and reason. In that sense it is very much like mathematics, and, I argue, exists in the same realm as mathematics. This essentially makes claim 3 correct by definition.
Thus, assuming that my definition of morality expresses something that actually exists, rather than just a hypothetical construct of my philosophy, the definition of moral realism is satisfied. Thus, to prove moral realism, I need only show that this definition of morality is, -ahem- true.
What is moral?
So then, what does this definition of morality imply that makes it falsifiable? It implies that morality is something that is grounded in facts. It implies strongly that whatever is moral is not a matter of opinion, but of knowledge, and that the reason why people disagree about morality is that they lack perfect knowledge.
I don't pretend to have perfect knowledge. Thus, any attempt at finding out what morality implies is inherently limited by this lack of knowledge. Nevertheless, lack of knowledge has never been a reason not to attempt to reason with what knowledge we do have. Science is all about figuring out what we can know despite uncertainty.
So what is moral? Something that is moral is fact dependent. Strictly speaking, there are only a few facts that we know without question. We know that something exists, that existence is. We know that some part of what exists has subjective states, that experience is. We know that some subjective states feel different than others, that some are noxious, while others are pleasant. We know that because of the feeling of these states, we discriminate automatically between them, assigning some of them to be positive (or good), and others to be negative (or bad). This is not a preference, but a feature of sentience.
We can, perhaps at the risk of some confusion, refer to these positive and negative valences as absolute values, because we have no choice in assigning value to them. It is an automatic, deterministic, mechanical process. These absolute values differ fundamentally from other values that we can choose, and I think much of the confusion over values comes from not recognizing this. Absolute values can motivate action and establish desires, but motivation is not by itself moral. The correctness of a desire depends on its consequences, whereas the correctness of a feeling depends only on how it feels. Feelings and desires are both facts. But feelings have valences, while desires are either satisfied or not. We do not say that desires are positive when satisfied and negative when they are not; in fact, the satisfaction of a desire often leads to its annihilation. It is therefore clear that desires serve as means to motivate the achievement of values or goals. They may be good, but not absolutely good. I use "absolute" instead of "intrinsic" because it may be possible to hold some outside goods, like a better world, as intrinsically valuable. Assigning such value, however, is a choice we can make, so I consider absolute value as potentially distinct from intrinsic value.
Given these facts, we can begin to state what is moral. An entity with perfect knowledge would be aware of these facts, and would know what good and bad feelings felt like. As it would know what every entity in this universe felt, it would be able to reason about the truth of these feelings, these absolute values. And the fundamental truth is simply that all entities automatically discriminate or prefer feeling the good over the bad. There is a kind of correctness to feeling good, and incorrectness to feeling bad, that subjects automatically are motivated to act upon.
In a sense, this can be understood by looking at a goal-directed agent. When such an agent reaches its goal state, it is in the correct state. If it fails to do so, then it is in the incorrect state. Sentient beings have an intrinsic goal state, and it is called happiness. The desires, values, and actions of the agent can be described as correct only in the sense that they contribute to reaching the goal state. Sentient beings could conceivably develop other goal states, such as desired states of the world. But those states would not be about them. A world state could be "correct" to a sentient being, but that could just be a belief, rather than necessarily being a fact about the sentient being. Knowing the actual correct world state depends on perfect knowledge, and is therefore unknowable to the average sentient being. Though, this should not necessarily preclude sentient beings from trying to know as much as possible and trying to create what they think is the "correct" world state.
It can be stated then that the best state is the correct state that an entity -should- be in. That is to say, there is a prescriptive relationship between right and good, that the truth prescribes goodness as being fundamentally correct. Thus all good should be right, though not all right should be good, because it is not the case that all things that are true should be good (to say that 1 + 1 = 2 should be good is silly), but all things that are good should be true (as in, goodness should exist).
An entity with perfect knowledge, if motivated to do what is right, would therefore act to maximize the good for all sentient beings, not because it was feeling benevolent, but because it would be the correct course of action consistent with the truth of knowing what the correct world state, and correct state of all sentient beings, was.
In attempting to be moral, we attempt to achieve this correct world state, rather than just achieving the correct state for ourselves. We choose to take a universal perspective, even without perfect knowledge, and try to approximate what an entity with perfect knowledge would do.
The Problem with Values
Something more should be said about values. A common confusion in moral theory is the assumption that it must have something to do with all of our values. This confusion, I believe, stems from the idea that values determine morality, which I think is actually mistaken.
Non-absolute values are inherently subjective, and are based on our imperfect perceptual knowledge of the outside world. People whose knowledge of the outside world changes often change their values to suit the information they have. To found "morality" on these values is to make "morality" inherently subjective and error prone. Non-absolute values are useful because the fulfillment of these values correlates strongly with positive states, but this is not always the case. Values can be described as good or bad in terms of what consequences holding those values entails. But non-absolute values cannot be described as "absolutely" good or bad or right or wrong.
I will, however, state something that will likely be controversial: the correct values are the ones that are most moral. Most people do not have values which are perfectly moral. Rather, they either think they do, or they don't care. Nevertheless, some values are closer to being moral than others. For instance, I think Utilitarianism is close to moral, but it may not be perfectly moral. I don't pretend to know, because I lack perfect knowledge.
Nevertheless, I conjecture that there is a perfect morality because objective truth exists, even if we in our limited nature can only apprehend subjective truth directly, and must infer the qualities of objective truth indirectly.
Thus, the truth is that I cannot prove that my definition of morality is true. And so I cannot actually prove moral realism. However, I can conjecture my definition of morality as plausible. Thus, moral realism -could- be true, and unless falsified, presents a legitimate intellectual position to take.
Morality as Computation
The interesting corollary to all of this is that if morality is truly like mathematics, then morality should be computable. Maximizing the good is, in effect, a computation that sees maximum goodness as the correct state of the universe. In which case we could calculate a kind of "moral error function" or "moral objective function", and morality can be seen as a kind of optimization problem. This is, of course, what the various shades of Utilitarianism have been saying all along.
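To make the optimization framing concrete, here is a deliberately toy sketch. Every name and number in it is hypothetical (there is, of course, no known way to actually measure hedonic valences); it only illustrates the shape of the idea: a "moral objective function" scores world states by total well-being, a "moral error function" measures the shortfall from the best reachable state, and the morally "correct" choice minimizes that error.

```python
def moral_objective(world_state):
    """Score a world state by summing the (hypothetical) hedonic
    valences of all sentient beings in it."""
    return sum(world_state.values())

def moral_error(world_state, best_possible):
    """Shortfall between this state's total goodness and the best
    achievable total goodness among the reachable states."""
    return best_possible - moral_objective(world_state)

# Two candidate world states, with valences on an arbitrary scale.
state_a = {"alice": 5, "bob": -2}   # total goodness: 3
state_b = {"alice": 3, "bob": 4}    # total goodness: 7

# The best achievable goodness among the reachable states.
best = max(moral_objective(s) for s in (state_a, state_b))

# Morality as optimization: pick the state minimizing moral error.
chosen = min((state_a, state_b), key=lambda s: moral_error(s, best))
```

Here `chosen` ends up being `state_b`, the state with the higher total well-being, which is exactly the classical utilitarian calculus restated as minimizing an error function.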
Anyways, that's my attempt at a defence of moral realism. I apologize if it isn't the most rigorous proof.