Stijn Bruers wrote: What is the relation now between rationality and impartiality?
Impartiality is the "hard question" of ethics. Without it, people can feel justified in acting selfishly. I started a reply by modifying:
http://stijnbruers.wordpress.com/2011/1 ... -equality/
I eventually made enough changes that the original document is no longer recognizable. The numbering system was originally going to follow Bruers's, but it devolved into a mess.
My version reflects my latest view of philosophy (radical metaphysical naturalism that leads, through vigilant adherence to reason, to something similar to negative utilitarianism or well-being-based consequentialism). My general approach is to start with things that almost everybody would agree to, then proceed one step at a time in a way that appeals to the reason of even the most selfish person, provided that person is strictly rational. The scope of moral consideration first extends to people, then beyond. The argument ultimately rests on the willingness and ability of people to be VIGILANTLY rational. Some people lack the capacity to do so, so they will continue to act selfishly (as other animals do). The inability of some humans to direct their actions with reason has no bearing on the philosophical truth of what I suggest here (although there could be many errors here because I don't think straight).
The key assumption might be that my preferences imply that I want the society of rational beings around me to act in a certain way and not deviate from the principles that guide those actions. If people deviate, then my preferences (not to be harmed, or whatever) are not realized. Thus, my preference for one thing implies that I seek a *social system* that is guided by strong principles. It becomes irrational to act against the establishment of such a system when my explicitly stated preferences are for such a system to exist. I then extend the argument to sentient beings, based on the inability to defend an ethical distinction between humans and other sentient beings. I'll see how this might apply to further ideas later.
I classified some of Bruers's "facts" as "personal facts," or observations that I have made that apply to me but that might be different for other people. People who start with different fundamentals might reach different conclusions, but I think the basics listed here represent a set of boundaries within which rational people would act. I added Facts 0a-0d, which are the foundation of my latest philosophy.
*** BEGIN ***
(Personal) Fact 0a: My primary epistemological goal is to understand truth. In other words, I strive to understand things accurately, eliminate self-contradiction, and resolve important conflicts rather than ignore them. I don't see this as a virtue, because there is no attribute of the universe that says it is one. I am not aware of any principle of the universe that says that truth is objectively good or bad or that people must seek truth.
(Personal) Fact 0b: Rationality is a process that is instrumentally valuable toward the goal of understanding truth, and also instrumentally valuable in the pursuit of any other goal (like being happy, making lots of money, or planning a party). I also seek rationality because of my neurotic obsession with logic and reasoning.
Fact 0c: I live in a society without which I would not be able to speak, read, write, own electronic goods, have access to medicine, or enjoy any of the other luxuries of human life, because all those things were made possible by the people around me and the people who lived before me (I might use this later to motivate some degree of self-restraint).
Fact 0d: I live in an ecosphere upon which I am dependent for survival.
(Personal) Fact 1a: I feel good when my well-being is high.
Personal Goal 1: Improve my well-being now.
Fact 2a: Other people (close relatives and friends) sometimes suffer.
(Personal) Fact 2b: I feel empathy with those who suffer.
Personal Preference 2: I want beings with whom I feel empathy to be free from harm. (This is my negative utilitarian version; others might word it more like Bruers's, but even positive utilitarians should agree with this part. If a person makes a different statement here, the following might need to be adjusted to address any meaningful differences.)
Insight 3a: Because I have these personal preferences and goals, it would be internally inconsistent (irrational) for me to accidentally work against these goals.
Insight 3b: Ideally, I would want to know the full consequences of my actions so that I do not unknowingly cause myself harm through the indirect or unrecognized consequences of my actions.
Insight 3c: Because maintaining my well-being requires stability over a long period of time (e.g., if people around me were peaceful for one year and then indulged in rampant thrill-killing, my long-term well-being would suffer), the conditions I would prefer are those in which some type of system fosters the continuance of favorable conditions.
Insight 3d: I would prefer a system that is guided by stable rationality (instead of arbitrary force) so that I am not singled out as the arbitrary victim of unjustified harm.
Insight 3e: The design of such a system might be possible if rational beings develop their capacity to direct their behavior according to reason. Irrational beings might not be able to contribute to this process.
Insight 3f: A rational system means that there are no arbitrary ethical distinctions between those who are harmed and those who are not. If there is no scientific evidence establishing that a particular being is unaffected by a potentially harmful action, then there is no rational justification for harm in that case. Example: scientific evidence can establish that a mouse cannot drive a car safely through a city, so there is no harm in preventing a mouse from obtaining a motor vehicle. Merely noting differences between two groups typically does not constitute scientific evidence in support of an ethical distinction (e.g., differences in hair color or intelligence are not rational criteria for enslaving one group of people rather than another--this is a long discussion for the uninitiated).
Insight 3g: Based on Insight 3f, I cannot justify an ethical distinction between myself and others, so my preference would be to prevent harm to all people rather than to allow all harm to all people (myself included).
Insight 3h: If there is no course of action that is free of harmful effects (things that violate my list of preferences), the best option would be to choose the path that produces minimal harm, because that would be most consistent with my personal goals.
Insight 3i: People who work against the system harm the beings who are directly affected and perhaps threaten the stability of the entire system by setting a bad example for others.
Universalized Preference 1a: Considering the information above, I want to live in a society that is governed by a rational ethical system according to which people are not harmed and in which rational beings refrain from hindering people from pursuing their well-being in ways that do not cause harm.
(Personal) Fact 4a: Despite what I said above, I sometimes have selfish desires for a system that benefits me (and sometimes people close to me) at the expense of others.
Insight 4a: Fact 4a implies that I would want a system in which I am exempt from the rules that I want everybody else to follow. That system relies on an arbitrary ethical distinction between me and others. The distinction is arbitrary because there is no scientific basis to justify special treatment for me. Because the distinction is not supported by scientific evidence (viewed from the standpoint of metaphysical naturalism), it is unhinged from reason and conflicts with my previously stated preferences for rationality. Due to Facts 0a and 0b, I need to resolve this conflict.
Insight 4b: It is implausible that everybody around me will agree to a system in which I harm others (steal from them, kill them, injure them through negligence...) while they judiciously avoid harming me. Such a system might be created if I become a military dictator or otherwise use force. An irrational system constructed around arbitrary power that benefits me is not much different from an arbitrary system that benefits one of the 7 billion+ other humans or one of the trillions of other life forms on this planet. More specifically, a system that benefits me is not much different from a system that would benefit my military commanders, who could kill me and redirect the system to benefit them (ad infinitum). An arbitrary system that reinforces the idea of arbitrary power would probably lead to more of the 7+ billion people trying to play that game, thereby decreasing the probability of my continued exploitation of others and increasing the probability that others will harm me to steal what I have.
Insight 4c: In addition to Insight 4b, **the lack of evidence to justify ethical distinctions related to Fact 4a is by itself sufficient basis for rejecting a system derived from Fact 4a.** The basis for rejecting it is that it is irrational, and nothing more. Note that this implies that doing something that logically contradicts my own list of preferences is irrational regardless of the net effect on utility, and I am suggesting that people who want to be rational will avoid this internal self-contradiction.
Preference 4: I would prefer a system in which rationality is king and there are no arbitrary distinctions.
Universalized Preference 2: Universalized Preference 1a implies that I want rational beings to refrain from harming me (and others), and this would require them (us) to exercise some degree of self-regulation.
Insight 5a: From Insight 4c, I have no scientific basis for an ethical distinction that would justify making me a special exception to the system in which I want to live (meaning that I too will need to exercise self-regulation).
Fact 5a: (This was higher on Bruers's list.) Not only close relatives and friends can suffer. All beings with a sufficiently complex, functioning central nervous system (such as vertebrate animals) can suffer.
Fact 5b: It is possible to feel empathy with all sentient beings (although people who do not feel this way might not care about this statement).
Insight 5b: Just as I am unable to justify arbitrary ethical distinctions among humans (Insights 3f and 3g), I am unable to justify arbitrary ethical distinctions between humans and nonhuman sentient beings (insert list of arguments that explain why science cannot make ethical distinctions in this domain).
Insight 5c: Based on Facts 2a-2b and Personal Preference 2, I prefer that neither I nor my friends are harmed, and because I cannot rationally make ethical distinctions between my friends and others, I extended this into the preference that rational beings refrain from harming any person (Universalized Preference 1a). Because I cannot justify an arbitrary ethical distinction between humans and nonhumans with regard to the infliction of harm (Insight 5b), I cannot justify a system under which rational beings fail to extend the policy of no harm to all sentient beings.
Universalized Preference 3: Considering the information above, I want to live in a society that is governed by a rational ethical system according to which *sentient beings* are not harmed and in which rational beings refrain from hindering *sentient beings* from pursuing their well-being in ways that do not cause harm (to sentient beings?).
[it extends from here...]
******
MORE COMMENTARY
This is an outline of a PROCESS according to which people evaluate their OWN preferences and consider the implications of those preferences. I have some notes on the principles that I wanted to use to construct these arguments, and maybe I'll organize those and post them later.