I just found this board a few weeks ago, after becoming vegan a few months ago, and I was pretty excited to see people talking rationally about the ethics of using animals. (Before that, most of the vegans I encountered said things like, to paraphrase, "I just can't believe that all that pain doesn't end up in the food.")
I have been lurking up until now, but I figured I would post my thoughts on ethics so far, in case anyone found them interesting.
First of all, I don't think morality is objective, and by that I mean that morality is not part of the fabric of the universe. It doesn't have any special place metaphysically (I hope I am using that word right, because I have never studied metaphysics). I think it is basically software embedded in the minds of agents. Despite this, I put a lot of effort into trying to make my own software consistent: instead of asking myself "What feels right to me at this moment?" I try to ask "What would I feel is right, given that I could reflect on it (with an interest in being consistent) for an unlimited amount of time?" The other thing is that although I think every statement like "X is wrong" is an opinion, that does not mean I think other people with their own codes of morality should be allowed to act on them. This is because I think the statement "You should always respect other people's opinions if they are not factually wrong" is itself an opinion, and it's one I disagree with.
This is an ongoing process. A lot of the specifics are not completely worked out, but here is what I have, as the best explicit description I can make of what I consider right and wrong.
The first part of my philosophy is a modified form of scalar utilitarianism, which (as I understand it) avoids requiring us to spend every moment doing the most good we can by saying that actions are not right or wrong, just better or worse than other actions (instead of saying that the only right action is the one that does the most good). I say that actions are right or wrong in degrees, but also that there is a 0-point where an action is neither right nor wrong. Things above that point are not just better, they are right, and things below it are not just worse, they are wrong.
Actions are scaled as more right or more wrong according to the amount of good or bad the actor would (probabilistically) expect them to bring about, given the best calculation they could make (or, in the case of moral situations they can reasonably expect to matter less than the effort of thinking them through in great detail, a calculation using an amount of effort proportional to what's at stake). The 0-point on the scale of how right an action is is the rightness of noninterference, apart from acting to fulfill responsibilities already accrued.
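To put that roughly in symbols (just an informal sketch; the notation and the exact form are mine and nothing here is precisely defined, with the "Good" function spelled out further down):

$$\text{Rightness}(A) \;=\; \mathbb{E}[\text{Good}(A)] \;-\; \mathbb{E}[\text{Good}(N)]$$

where $A$ is the action being evaluated, $N$ is noninterference plus fulfilling any responsibilities already accrued, and the expectations reflect the actor's best reasonable estimate of the consequences. A positive value means the action is right, a negative value means it is wrong, and $N$ itself sits at the 0-point.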
Responsibilities can only be accrued by bringing about a situation in which something bad will happen unless you intervene after the action that caused the situation. So if you have a child, you can't let it starve and say "Hey, that is inaction. Don't blame me," because you brought it into the situation where it would be in danger of starving in the first place (by bringing it into existence).
However, if someone else's child is starving in the street (and you didn't, in some other way, put it in that situation), it is not wrong to just walk by.
You tell how good or bad a consequence is by looking at the weighted sum, over each individual affected, of that individual's priority multiplied by the detriment they receive or are prevented from receiving. A detriment is something happening that goes against a preference they hold, for selfish reasons, about what should not happen to them, or something that keeps them from fulfilling a selfish positive preference that they could already (before interference) fulfill on their own without negatively affecting others.
Weighing the importance of preferences of different individuals (of equal priority) is done by comparing what fraction each preference makes up of everything that individual selfishly prefers.
Preferences by one individual about another (or about inanimate objects) are also ignored. Things that are preferred just as a way to get other things are ignored (only "end" preferences count, not "means" ones). In the case of conflicting preferences, all but the most specific are ignored.
Priority is a number between 0 and 1 that is decreased temporarily for intending to do wrong (lasting as long as the intention remains), or decreased more permanently by acting on that intention. It is restored by changing in such a manner that one would not repeat the wrong.
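In rough symbols again (again just a sketch with notation I'm making up; in particular, how multiple frustrated or fulfilled preferences of one person combine into a single number is one of the things I haven't pinned down):

$$\text{Good}(C) \;=\; \sum_{i \in \text{affected}} p_i \left( \sum_{k \in \text{fulfilled}} w_{i,k} \;-\; \sum_{k \in \text{frustrated}} w_{i,k} \right), \qquad w_{i,k} \;=\; \frac{\text{importance of selfish end-preference } k \text{ to } i}{\text{total importance of everything } i \text{ selfishly prefers}}$$

where $p_i \in [0,1]$ is individual $i$'s priority, and only $i$'s selfish end-preferences about what happens to $i$ themself are counted.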
I'm not completely sure about many aspects of this, and I have thought of a few alternatives that I could come to favor at some point. Still, I'm pretty comfortable with the results of most of the thought experiments I can devise for these principles, including most of the controversial ones, such as:
"Person A wants to die. They prefer not to live. They have no dependents. So they try to commit suicide. Person B intervenes, preventing them from doing so. Person A is then administered drugs that change their views on the matter, and after that they wish to live," where the conclusion is that what B did was wrong because any benefits A might experience later in life do not outweigh that their preference to die was violated.
"A footbridge runs over a trolley track, on which five people are tied down. A trolley will kill them unless you push a fat man off the bridge in front of it." where the conclusion is that it is right to push him off.
"You come across a person drowning, whom you could easily save." where the conclusion is that it is right to save them, but neither right or wrong to just walk by.
Some thought experiments where I prefer this philosophy's answers to utilitarianism's are:
"Person A is despised by N other people. A has done nothing to deserve their hatred. They would all be happy if A was killed. A wishes not to die. The N people cannot be deceived about whether A has died. No one but the N people and A will know about or be affected by what happens." where utilitarianism would conclude that for some large N, it was right to kill A, and my philosophy would conclude that no matter the size of N, the preferences of the crowd are all preferences about what should happen to another person, and thus discounted.
"Person A cannot stop thinking about philosophical questions that cause him great discomfort. If A was lobotomized, he would forget about all of them (and forget that he was lobotomized) and pursue (successfully) things that would bring him pleasure. A lives in isolation and the philosophical conclusions he reaches will never affect anyone else. A wishes not to be lobotomized. A could be lobotomized without his foreseeing it (and thus perhaps suffering from the fear of it) by performing the procedure as he was willingly sedated for what he thought was a different surgery" where ordinary utilitarianism concludes that he should be forcibly lobotomized, and my philosophy (or any kind of preference-based utilitarianism) says he should not be.
"Person A will have as much pleasure in the remainder of his life as he will pain (or, for utilitarianism that weights pleasure and pain differently, whatever ratio is necessary so that they balance out). A wishes not to die in spite of this. A lives in isolation and will not affect anyone else." where utilitarianism says that it is not bad that he be killed unexpectedly, instantly, and painlessly by a sniper, and my philosophy says that it is bad.
Sorry if I have misinterpreted utilitarianism in drawing any of these conclusions.