8 basic principles of ethics and other stuff

Whether it's pushpin, poetry or neither, you can discuss it here.

8 basic principles of ethics and other stuff

Postby Stijn Bruers on 2012-01-15T20:17:00

Hi all,

I just stumbled upon this site. I have sympathies with utilitarianism, although my ethical system is a bit more extensive. This is my opening post, with the intention to share some of my ideas. The texts I've written in English can be found at
http://stijnbruers.wordpress.com/catego ... ish-texts/

My central text involves 8 ethical principles, the first of which is a version of prioritarianism:
http://stijnbruers.wordpress.com/2011/0 ... of-ethics/

This prioritarianism (quasi-maximin) is discussed in more (mathematical) detail in
http://stijnbruers.wordpress.com/2010/1 ... f-justice/
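For readers unfamiliar with the term, a prioritarian ("quasi-maximin") welfare function gives extra weight to the worst-off. The sketch below is only an illustration of the general idea, not the actual formula from the linked article; the concave transform and the `priority` parameter are assumptions made up for this example.

```python
# Hypothetical sketch of a prioritarian ("quasi-maximin") welfare function.
# The transform u**(1/priority) is an illustrative assumption, not the
# formula from the linked article.

def prioritarian_welfare(utilities, priority=2.0):
    """Aggregate non-negative individual utilities, weighting the worst-off
    more heavily.

    A concave transform means a unit of well-being counts more when it goes
    to someone badly off. As `priority` grows, the induced ranking approaches
    pure maximin (only the worst-off individual matters).
    """
    return sum(u ** (1.0 / priority) for u in utilities)

# A transfer from the well-off to the badly-off raises aggregate welfare
# even though the plain sum of utilities (10) is unchanged:
equal = prioritarian_welfare([5.0, 5.0])    # 2 * sqrt(5) ~ 4.47
unequal = prioritarian_welfare([1.0, 9.0])  # 1 + 3 = 4.0
assert equal > unequal
```

With `priority=1.0` the function reduces to plain utilitarian summation, so the parameter interpolates between total-utility and maximin-style rankings.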

Furthermore, my main interest is animal equality.
One article shows that speciesism is a kind of moral illusion, like an optical illusion:
http://stijnbruers.wordpress.com/2011/0 ... illusions/

A more complete derivation of my theory of animal equality
http://stijnbruers.wordpress.com/2011/1 ... -equality/

And finally: 10 arguments against speciesism:
http://stijnbruers.wordpress.com/2012/0 ... peciesism/

Sorry about the spam. I hope your well-being did not decrease ;-)

Stijn Bruers
 
Posts: 3
Joined: Sun Jan 15, 2012 8:06 pm

Re: 8 basic principles of ethics and other stuff

Postby rehoot on 2012-01-15T22:34:00

Stijn Bruers wrote:Sorry about the spam. I hope your well-being did not decrease


It's definitely not spam. For some background on this forum: the regulars have different views on the details of ethics, and sometimes the differences are substantial, but most of the regulars extend moral consideration to animals. I have recently been gravitating toward metaphysical naturalism after struggling for a long time with why I believed in things like the intrinsic value of life even though I have no scientific evidence that such a thing exists. Although I am skeptical of things like the intrinsic value of life and the existence of objective moral principles, I believe that there is a rational basis for respecting animals, and the key points are variations of your points -- with the exception that I disagree on why people should adopt those beliefs.

I read your article on animal equality at http://stijnbruers.wordpress.com/2011/12/26/towards-a-coherent-theory-of-animal-equality/. I'll start with a broad perspective:

Your target audience is ultimately people who disagree with you on the moral status of animals, and they do not share your intuitions. Your "Moral intuition 2" is what *you* feel, not what a selfish person feels, and the selfish person will probably see no reason to undertake the training that would be needed to change empathic responses. A person who denies "Moral intuition 2" will then deny "Particular ethical principle 2" and your subsequent argument will fall on deaf ears.

I happen to believe in something very similar to your principles, but I perceive them as a personal ethic and not as objective moral principles. Because my beliefs are not objective, I have no basis for saying that other people should do as I do--HOWEVER...

Here is a super-short idea of my latest philosophy; then I'll discuss the problem of objective morality that underlies your article. My latest philosophy is something like this: there is no objective morality and therefore no objective reason why people *should* be nice to others. I suspect that people who consider what they want from life will be strongly influenced by the first principles of utilitarianism (happiness) or variations of it (well-being). If they are not interested in these things, there is no moral principle of the universe that will punish them or indicate that they are objectively wrong. Those who are judiciously rational will find it difficult to draw arbitrary ethical distinctions between friends, neighbors, strangers, all humans, all primates, all animals, and so forth, depending on the details. The result is that they will strive to act according to ethical principles, many of which will be substantially similar to utilitarianism or related consequentialism. I think that rational people would extend consideration to animals for many of the reasons that you stated.

There is a lot of background to that. Recently I have stopped believing that life has intrinsic value, even though my empathic responses are consistent with such a view. I do not believe that there are objective moral principles, and I do not believe that my emotional responses indicate the existence of objective moral principles. When people talk of objective moral principles, they often leave the details vague--but what is it supposed to be? Is there a physical object in the universe that says what is good or bad? Is there an attribute of a physical object that defines what is good or bad or right or wrong? I have no evidence that anything of this sort exists.

I do know that the human body responds to stimuli and that there is strong evidence of a biological basis of emotion (I have posted on this a few times). I also know that humans suffer from many cognitive biases and are prone to make errors of inference. I also know that people who spend their entire lives studying ethics or religion are convinced of objective moral principles that are directly opposite to those identified by other people who have spent their entire lives studying morality or religion. At the very least, the scientific conclusion would be that if humans have a secret mechanism for identifying objective moral principles (for which there is no evidence), it is unreliable. When people feel bad upon seeing a human or an animal injured, it means that they are experiencing an emotion, not that there is some property of the universe that really exists and is beaming information into their heads so that people can unconsciously know the true nature of the universe without consciously knowing how they detected those properties of the universe (which don't even exist). The ONLY rational conclusion is to free myself from any assertion that objective moral principles exist or that humans are somehow equipped with a reliable mechanism to detect them (i.e., moral intuitions are an illusion if the intuition is that the principle is objective and intersubjectively reliable).

This does not leave people in the dark. Suppose some people accept your idea, "I feel good when my well-being is high," agree with part of your Intuition 2, "I feel empathy with those who suffer," and conclude that "I want to live in a place where people refrain from making me suffer" and "I want to live in a place where people in GROUP X do not suffer" (where GROUP X might initially be described as "family and friends" and then expand). The rational person now has a dilemma: is there any scientific justification for making an ethical distinction between my friends and other people who live in my city? If Hume is correct (and I think he is), there is no way to go from what *is* to what *ought to be* (the is-ought problem). In many cases, there is no justification for drawing arbitrary ethical distinctions about who should be harmed. The result is that I have to either refrain from harming everybody or be willing to harm anybody. Upon closer inspection, the rational person will probably choose to avoid harming others. There is no moral law that says that people must be rational and nothing in the universe that systematically endows people with the ability to be rational. Some people do not care if their actions reflect logically contradictory desires or principles--I see it as a goal of rational people to help others develop a strong sensitivity to logical contradiction.

There are some situations in which there is a scientific basis for an ethical distinction. Horses do not have the ability to drive motor vehicles, so there would be nothing wrong with preventing them from obtaining driver's licenses. When people say that *only* sentient organisms have an interest in life, that seems somewhat arbitrary and unscientific. Plants are living organisms too -- they might not suffer *as much* upon being killed, but there is no scientific principle that justifies killing them. Humans can seek to minimize the harm they do by minimizing their adverse effects on all elements of nature, but this is a difficult path. Some people simply do not care about being judiciously rational and living according to internally consistent, implicit moral principles, and so we have many people who kill animals for the stupidest of reasons. Humans are overwhelmingly egocentric, but rationality might help to offset some of the inconsistencies that stem from that bias.

rehoot
 
Posts: 161
Joined: Wed Dec 15, 2010 7:32 pm

Re: 8 basic principles of ethics and other stuff

Postby Stijn Bruers on 2012-01-15T23:21:00

rehoot wrote:Your target audience is ultimately people who disagree with you on the moral status of animals, and they do not share your intuitions. Your "Moral intuition 2" is what *you* feel, not what a selfish person feels, and the selfish person will probably see no reason to undertake the training that would be needed to change empathic responses. A person who denies "Moral intuition 2" will then deny "Particular ethical principle 2" and your subsequent argument will fall on deaf ears.

That's true: I cannot convince psychopaths. At best they can be rational egoists. But most people I know do have intuitions such as impartiality. Most people I know do not have a consistent rational-egoist ethics, but an inconsistent speciesist ethics.

I happen to believe in something very similar to your principles, but I perceive them as a personal ethic and not as objective moral principles. Because my beliefs are not objective, I have no basis for saying that other people should do as I do--HOWEVER...

I don't believe in objectivity either, but I tend to say to people that they should do this and that. (Of course I use language that might work better psychologically speaking, but the intention is to change the behaviour of people.)

Those who are judiciously rational will find it difficult to draw arbitrary ethical distinctions between friends, neighbors, strangers, all humans, all primates, all animals and so forth depending on the details.

I like this sentence :-)

There is a lot of background to that. As of recently, I do not believe that life has intrinsic value even though my empathic responses are consistent with such a view.

In my view, intrinsic value is the opposite of instrumental value, and I give intrinsic value to others. It is like arrows that I create, directed towards others. And I want other people to do the same, to create arrows that point in the same direction.

I do not believe that there are objective moral principles and do not believe that my emotional responses indicate the existence of objective moral principles. When people talk of objective moral principles, they often leave the details vague--but what is it supposed to be? Is there a physical object in the universe that says what is good or bad? Is there an attribute of a physical object that defines what is good or bad or right or wrong? I have no evidence that anything of this sort exists.

Neither have I. Then I apply Ockham's razor...

(i.e., moral intuitions are an illusion if the intuition is that the principle is objective and intersubjectively reliable).

One remark: when I speak of intuitions, I do not mean the intuitionist perspective. I do not believe in a "Moorean" objective, non-natural reality of moral properties that we can intuitively see.

The rational person now has a dilemma: is there any scientific justification for making an ethical distinction between my friends and other people who live in my city? If Hume is correct (and I think he is), there is no way to go from what *is* to what *ought to be* (the is-ought problem). In many cases, there is no justification for drawing arbitrary ethical distinctions about who should be harmed. The result is that I have to either refrain from harming everybody or be willing to harm anybody. Upon closer inspection, the rational person will probably choose to avoid harming others. There is no moral law that says that people must be rational and nothing in the universe that systematically endows people with the ability to be rational.

Interesting. What, then, is the relation between rationality and impartiality?

Some people do not care if their actions reflect logically contradictory desires or principles--I see it as a goal of rational people to help others develop a strong sensitivity to logical contradiction.

I do notice that a lot of people value consistency. Even meat eaters do. I can give a lot of examples of arguments that meat eaters give that indicate that they value consistency. One problem, though: they are not consistent in applying the rule of consistency :-)

Stijn Bruers
 
Posts: 3
Joined: Sun Jan 15, 2012 8:06 pm

Re: 8 basic principles of ethics and other stuff

Postby RyanCarey on 2012-01-16T01:54:00

Hi Stijn, I've taken a look at your 8 basic principles.

As a utilitarian, I'm very happy to see you promote this idea, because it's a system that's simple, elegant, and inclusive of some parts of consequentialism.

I have a couple of questions for you:
1. Do you think that some of your principles can be reduced to others? For example, I wonder if universal love is a part of the golden rule or vice versa. Similarly, as a consequentialist, I don't give a second thought to retribution and punishment, so I wonder whether your 5th principle is already encompassed by the first.
2. What do you think about the suffering of wild animals? Wild animals suffer extremely, mostly due to predation. They are some of the worst-off sentient creatures. They often have their bodily integrity violated by predators. Clearly they are no less deserving of our love than factory-farmed animals. Utilitarians speculate that one day we may be able to reduce predation (or the suffering caused by it) without euthanising predators en masse -- perhaps by distributing a gene for a reduced perception of pain, by relocating predators, or by re-engineering their feeding habits. These kinds of moves might go against your second principle, the basic right of living beings, and your ideas of 'ecological justice'. I would be interested to hear what kind of position you would take on an issue where the magnitudes of suffering are so large.
You can read my personal blog here: CareyRyan.com
User avatar
RyanCarey
 
Posts: 682
Joined: Sun Oct 05, 2008 1:01 am
Location: Melbourne, Australia

Re: 8 basic principles of ethics and other stuff

Postby Stijn Bruers on 2012-01-16T14:47:00

RyanCarey wrote:Hi Stijn, I've taken a look at your 8 basic principles.

As a utilitarian, I'm very happy to see you promote this idea. Because it's a system that's simple, elegant and inclusive of some parts of consequentialism.

I have a couple of questions for you:
1. Do you think that some of your principles can be reduced to others? For example, I wonder if universal love is a part of the golden rule or vice versa. Similarly, as a consequentialist, I don't give a second thought to retribution and punishment, so I wonder whether your 5th principle is already encompassed by the first.

Indeed, that might be true. Universal love is more a kind of virtue ethics, but it fits well with some other principles (1,...)

2. What do you think about the suffering of wild animals? Wild animals suffer extremely, mostly due to predation. They are some of the worst-off sentient creatures. They often have their bodily integrity violated by predators. Clearly they are no less deserving of our love than factory-farmed animals. Utilitarians speculate that one day we may be able to reduce predation (or the suffering caused by it) without euthanising predators en masse. Perhaps by distributing a gene for a reduced perception of pain, by relocating predators, or by re-engineering their feeding habits. These kinds of moves might go against your second principle, the basic right of living beings, and your ideas of 'ecological justice'. I would be interested to hear what kind of position you would take on an issue where the magnitudes of suffering are so large.

That's the most difficult question, yes. My intuition says that we should not interfere in predation, that we do not have a duty to protect the zebra when she is attacked by a lion. I have two tentative answers. The first is based on uncertainty aversion combined with the ecological side-effects that might occur after predators go extinct. The other is based on a triple-N principle: when a behavior is normal, natural and necessary, it is allowed to violate rights and violate prioritarianism. This is related to the value of biodiversity: a thing is natural when it originated directly by evolution, and biodiversity is nothing but everything that originated by evolution. So when a behavior is (a) normal (happens a lot), (b) natural and (c) necessary, it means that (a) a lot of (b) biodiversity will (c) get lost when that behavior stops. With predation: if I have the duty to protect this zebra, I should (by the categorical imperative) will that this rule become a universal law, or in other words, I should will that all predators can no longer hunt, and hence die. But then a lot of biodiversity gets lost, and the value of a lot of biodiversity trumps the value of well-being. See principle 4 and the exception to principle 3. Two side remarks: although we do not have a duty to protect prey, we are allowed to do so if we feel empathy with the prey. And we also have a duty to look for solutions so that it is no longer necessary for predators to hunt. These are the solutions that you mentioned. I don't see why these interventions would go against my second principle.

Stijn Bruers
 
Posts: 3
Joined: Sun Jan 15, 2012 8:06 pm

Re: 8 basic principles of ethics and other stuff

Postby rehoot on 2012-01-17T04:07:00

Stijn Bruers wrote:What is the relation now between rationality and impartiality?


Impartiality is the "hard question" of ethics. Without it, people can feel justified in acting selfishly. I started a reply by modifying:
http://stijnbruers.wordpress.com/2011/1 ... -equality/

I eventually made enough changes that the original document was no longer visible. The numbering system was originally going to follow Bruers's, but it devolved into a mess.

My version reflects my latest view of philosophy (radical metaphysical naturalism that leads to something similar to negative utilitarianism or well-being-based consequentialism through vigilant adherence to reason). My general approach is to start with things that almost everybody would agree to, then proceed one step at a time in a way that appeals to the reason of even the most selfish person, if that person is strictly rational. The scope of moral consideration first extends to people, then beyond. The argument ultimately rests on the willingness and ability of people to be VIGILANTLY rational. Some people lack the capacity to do so, so they will continue to act selfishly (as other animals do). The inability of some humans to direct their actions with reason has no bearing on the philosophical truth of what I suggest here (although there could be many errors here because I don't think straight).

The key assumption might be that my preferences imply that the society of rational beings around me act in a certain way and not deviate from the principles that guide those actions. If people deviate, then my preferences (to not be harmed or whatever) are not realized. Thus, my preference for one thing implies that I seek a *social system* that is guided by strong principles. It becomes irrational to act against the establishment of such a system while my explicitly stated preferences are for such a system to exist. I then extend to sentient beings based on the inability to defend an ethical distinction between humans and sentient beings. I'll see how this might apply to further ideas later.

I classified some of Bruers's "facts" as "personal facts" or observations that I have made that apply to me but that might be different for other people. People who start with different fundamentals might reach different conclusions, but I think the basics listed here represent a set of boundaries within which rational people would act. I added Facts 0a-0d, which are the foundation of my latest philosophy.

*** BEGIN ****

(Personal) Fact 0a: My primary epistemological goal is to understand truth. In other words, I strive to understand things accurately, eliminate self-contradiction, and seek to resolve important conflicts rather than ignore them. I don't see this as a virtue, because there is no attribute of the universe that says that it is such. I am not aware of any principle of the universe that says that truth is objectively good or bad or that people must seek truth.

(Personal) Fact 0b: Rationality is a process that is instrumentally valuable toward the goal of understanding truth, and also instrumentally valuable in the effort to obtain any goal (like being happy, making lots of money, or planning a party). I also seek rationality because of my neurotic obsession with logic and reasoning.

Fact 0c : I live in a society without which I would not be able to speak, read, write, own any electronic goods, have access to medicine, or have any of the other luxuries of human life because all those things were made possible by people around me and people who lived before me (I might use this later to motivate some degree of self-restraint).

Fact 0d: I live in an ecosophere upon which I am dependent for survival.

(Personal) Fact 1a: I feel good when my well-being is high.

Personal Goal 1: Improve my well-being now

Fact 2a: Other people (close relatives and friends) sometimes suffer.

(Personal) Fact 2b: I feel empathy with those who suffer.

Personal Preference 2: I want beings with whom I feel empathy to be free from harm. (This is my negative-utilitarianism version; others might word it more like Bruers's, but even positive utilitarians should agree with this part. If a person makes a different statement here, the following might need to be adjusted to address any meaningful differences.)

Insight 3a: Because I have these personal preferences and goals, it would be internally inconsistent (irrational) for me to inadvertently work against them.

Insight 3b: Ideally, I would want to know the full consequences of my actions so that I do not unknowingly cause myself harm through the indirect or unrecognized consequences of my actions.

Insight 3c: Because the optimal conditions for maintaining my well-being would require stability over a long period of time (e.g., if people around me were peaceful for one year then indulged in rampant thrill-killing, my long-term well-being would suffer), the optimal conditions that I would prefer would be those in which there is some type of system to foster continuance of favorable conditions.

Insight 3d: I would prefer a system that is guided by stable rationality (instead of arbitrary force) so that I am not singled out as the arbitrary victim of unjustified harm.

Insight 3e: The design of such a system might be possible if rational beings develop their capacity to direct their behavior according to reason. Irrational beings might not be able to contribute to this process.

Insight 3f: A rational system means that there are no arbitrary ethical distinctions between those who are harmed or not harmed. If there is no scientific evidence that clarifies that a particular being is unaffected by a potentially harmful action, then there is no rational justification for harm in that case. Example: scientific evidence can establish that a mouse cannot drive a car safely through a city, so there is no harm in preventing a mouse from obtaining a driver's license. Merely noting differences between two groups typically does not constitute scientific evidence in support of an ethical distinction (e.g., different hair color or intelligence is not a rational criterion for enslaving one group of people versus another--this is a long discussion for the uninitiated).

Insight 3g: based on insight 3f, I cannot justify an ethical distinction between myself and others, so my preference would be to prevent harm to all people instead of allowing all harm to all people.

Insight 3h: if there is no course of action that is free of harmful effects (things that violate my list of preferences), the best option would be to choose the path that produces minimal harm, because that would be most consistent with the personal goals that I have.

Insight 3i: people who work against the system threaten the beings who are directly harmed and perhaps threaten the stability of the entire system by setting a bad example for others.

Universalized Preference 1a: Considering the information above, I want to live in a society that is governed by a rational ethical system according to which people are not harmed and in which rational beings refrain from hindering people from pursuing their well-being in ways that do not cause harm.

(Personal) Fact 4a: Despite what I said above, I sometimes have selfish desires for a system that benefits me (and sometimes people close to me) at the expense of others.

Insight 4a: Fact 4a implies that I would want a system in which I am exempt from the rules that I want everybody else to follow. That system relies on an arbitrary ethical distinction between me and others. The distinction is arbitrary because there is no scientific basis to justify special treatment for me. Because the distinction is not supported by scientific evidence (viewed from a point of view of metaphysical naturalism), it is unhinged from reason and conflicts with my previous preferences for rationality. Due to Facts 0a and 0b, I need to resolve this conflict.

Insight 4b: It is implausible that everybody around me will agree to a system in which I harm others (steal from them, kill them, injure them through negligence...) while they judiciously avoid harming me. Such a system might be created if I become a military dictator or otherwise use force. An irrational system that is constructed around arbitrary power that benefits me is not much different from an arbitrary system that benefits one of the 7 billion+ other humans or one of the trillions of other life forms on this planet. More specifically, a system that benefits me is not much different from a system that would benefit my military commanders who could kill me and redirect the system to benefit them (ad infinitum). An arbitrary system that reinforces the idea of arbitrary power would probably lead to more of the 7+ billion people trying to play that game thereby decreasing the probability of my continued exploitation of others and increasing the probability that others will harm me to steal what I have.

Insight 4c: In addition to insight 4b, **the lack of evidence to justify ethical distinctions related to fact 4a is by itself a sufficient basis for rejecting a system that is derived from fact 4a.** The basis for rejecting it is that it is irrational, and nothing more. Note that this implies that doing something that is logically contradictory to my own list of preferences is irrational regardless of the net effect on utility, and I am suggesting that people who want to be rational will avoid this internal self-contradiction.

Preference 4: I would prefer a system in which rationality is king and there are no arbitrary distinctions.

Universalized Preference 2: Preference 1a implies that I want rational beings to refrain from harming me (and others), and this would require them (us) to exercise some degree of self-regulation.

Insight 5a: From insight 4c, I have no scientific basis for an ethical distinction that would justify making me a special exception to the system in which I want to live (meaning that I too will need to exercise self-regulation).

Fact 5a: (This was higher on Bruers's list.) Not only close relatives and friends can suffer. All beings with a sufficiently complex functioning central nervous system (such as vertebrate animals) can suffer.

Fact 5b: It is possible to feel empathy with all sentient beings (although people who do not feel this way might not care about this statement).

Insight 5b: Just as I am unable to justify arbitrary ethical distinctions among humans (insight 3f and 3g), I am unable to justify arbitrary ethical distinctions between humans and nonhuman sentient beings (insert list of arguments that explain why science cannot make ethical distinctions in this domain).

Insight 5c: Based on Facts 2a and 2b, I prefer that I am not harmed and my friends are not harmed, and because I cannot rationally make ethical distinctions between my friends and others, I extended my preference that rational beings refrain from harming any person (Universalized Preference 1a). Because I cannot justify an arbitrary ethical distinction between humans and nonhumans with regard to the infliction of harm (Insight 5b), I cannot justify a system under which rational beings fail to extend the policy of no harm to all sentient beings.

Universalized Preference 3: Considering the information above, I want to live in a society that is governed by a rational ethical system according to which *sentient beings* are not harmed and in which rational beings refrain from hindering *sentient beings* from pursuing their well-being in ways that do not cause harm (to sentient beings?).

[it extends from here...]
******
MORE COMMENTARY

This is an outline of a PROCESS according to which people evaluate their OWN preferences and consider the implications of those preferences. I have some notes on the principles that I wanted to use to construct these arguments, and maybe I'll organize those and post them later.

rehoot
 
Posts: 161
Joined: Wed Dec 15, 2010 7:32 pm


Return to General discussion