Anyway, I posit that the correct form of utilitarianism is act, aggregate, classical, hedonistic, animal welfare (i.e. all beings that feel pain and pleasure) utilitarianism. (Someone remind me if I've forgotten to specify one of the many divides...)
I realize that correct is a very strong word, and perhaps not the correct one to use here. (See what I did there?) So, perhaps I should explain what I mean by this. I think that a philosophy should satisfy the following axioms:
1) Division-invariance: it should not matter how you divide up the universe, or whether you subdivide it into multiple mini-universes; every division should give the same optimal action. (See the sketch after this list.)
2) Deciding: a philosophy should put all possible universes into a totally ordered set (of unknown cardinality).
3) No intuition-fudge factors: this is largely just a particular instance of other rules, but it bears repeating: just because you've been brought up to be revolted by something, or because someone used the word "repugnant" when describing it, doesn't mean you should put a hack in your philosophy to try to get the outcome you want. That's not a philosophy any more, that's just you saying that what you think is right is in fact right.
4) It should be well defined: not something like "I want to maximize utility except when we're dealing with really evil people, like Hitler. I don't care about his happiness."
5) Logically consistent, consequentialist, blah blah blah.
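For concreteness, here's a minimal sketch of what axioms (1) and (2) ask of an aggregate scoring rule, treating a "universe" as just a list of individual happiness values (the numbers here are made up by me for illustration):

```python
def score(universe):
    """Aggregate utilitarian score: total happiness. Any two universes can be
    compared on this single number, which gives the total order axiom (2) wants."""
    return sum(universe)

# Axiom (1), division-invariance: scoring the whole universe, or scoring any
# partition of it into mini-universes and adding those scores up, gives the same answer.
whole = [1.0, 1.0, 1.5, 0.8]
parts = [[1.0, 1.0], [1.5], [0.8]]
assert score(whole) == sum(score(p) for p in parts)
```

Average happiness, by contrast, fails exactly this check, which is the point of the planet example below.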
A quick note: I'm defining 0 happiness to be dead/not born/unconscious/unfeeling; I'll explain why this has to be the case later.
And I think that act, aggregate, classical, hedonistic, all sentient beings utilitarianism is the way to go here.
If anyone wants to propose a different variation, I'm all ears.
Below, I'll explain why I think other variations on utilitarianism fail these tests:
___________________________________
Average vs. Aggregate:
Average utilitarianism fails axiom (1) quite badly. Take the following hypothetical. Say there are two planets, planet A and planet B. Planet A has 100 people, each at happiness 1. Planet B has 1 person at happiness 1.5. You have a nuclear bomb, and are given the choice of detonating it on planet A, killing everyone except for one resident with happiness 0.8. Do you do it? If you consider planet A to be a universe in and of itself, then the answer is no: average happiness will drop from 1.0 to 0.8. But if you consider planets A and B together to be one universe, then the answer is yes: average happiness will increase from ~1.005 to 1.15. (I'm adding in the one guy left alive on planet A at the end so we don't have to deal with division by 0--another lovely property of average utilitarianism.) In another post I can talk about why I think the repugnant conclusion is crap, but this post is going to be long enough as is.
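To make the arithmetic explicit, here's a quick check of those numbers (just a sketch; the happiness values are the ones from the hypothetical above):

```python
# The planet A / planet B hypothetical, under average utilitarianism.
planet_A = [1.0] * 100   # 100 people at happiness 1
planet_B = [1.5]         # 1 person at happiness 1.5
survivor = [0.8]         # the one resident of A left alive after the bomb

def avg(universe):
    return sum(universe) / len(universe)

# Planet A treated as its own universe: the bomb looks bad.
print(avg(planet_A), "->", avg(survivor))                        # 1.0 -> 0.8
# Planets A and B treated as one universe: the bomb looks good.
print(avg(planet_A + planet_B), "->", avg(survivor + planet_B))  # ~1.005 -> 1.15
```

Same physical action, opposite verdicts, purely because of where you drew the boundary of the "universe".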
Act vs. Rule:
This one is pretty obvious. Depending on how you define rule utilitarianism, it either reduces to act utilitarianism (if you consider all choices to be possible "rules"), or rests on your definition of a "rule", making it ill-defined.
Classical vs. Negative:
Negative utilitarianism can have many different definitions. First, though, a general point about them: many are not clear on whether they apply to individual negative emotions or experiences people have, or to the negative total utility of a person at some time. The first of these is going to fail a variant of (1): the correct action is going to depend on how I divide experiences up. For instance, if I simultaneously punch you and you win $1,000,000, and the happiness you get from having just won the money is greater than the pain of the punch, then if I combine those two into one experience it'll be positive and thus not trigger NU, but if I split them up then the punch will be negative and will trigger NU. (If the punch were somehow related, maybe tangentially, to winning the money, then it might not be clear how to split it up.)

So, how about versions of NU that only care about people's total happiness functions? I'll try to tackle a few. One is that preventing any harm is more important than any gain; a variant is that there are certain really, really bad things which outweigh any potential good. In order to make this a philosophy, you have to define it better. Perhaps your aggregation method is (number of people whose happiness is below X, total utility), and you compare two situations by first comparing the first entry of the tuple, and using the second as a tie-breaker? Or is it (min(lowest happiness of anyone, X), total utility), with the same method of comparison? (I sketch both below.) There are too many ways of defining it for me to talk about all of them now, but if anyone wants to propose a specific one as a philosophy, go ahead. Some of them are going to fail some of my axioms, but others are going to be totally consistent and well defined, just arbitrary and leading to pretty obviously wrong results.
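Here's a minimal sketch of those two tuple proposals, assuming a "universe" is just a list of individual happiness values and X is the "really bad" threshold; the function names and example numbers are mine, not anyone's official formulation:

```python
def nu_count_key(universe, X):
    """Proposal 1: compare (number of people below X, total utility) lexicographically.
    Fewer people below X always wins; total utility only breaks ties.
    The count is negated so that a bigger key always means a better universe."""
    return (-sum(1 for h in universe if h < X), sum(universe))

def nu_floor_key(universe, X):
    """Proposal 2: compare (min(lowest happiness of anyone, X), total utility).
    Raising the worst-off person matters until they clear X; then total utility decides."""
    return (min(min(universe), X), sum(universe))

A = [1.0, 1.0, 1.0]    # everyone mildly happy
B = [-0.5, 5.0, 5.0]   # one person suffering, two people very happy
print(nu_count_key(A, X=0.0) > nu_count_key(B, X=0.0))  # True: A wins despite a much lower total
print(nu_floor_key(A, X=0.0) > nu_floor_key(B, X=0.0))  # True, for the same reason
```

Both of these are perfectly well defined and give a total order; my objection to this family is the arbitrariness of X, not a broken axiom.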
One particular one I will talk about, though, is what Alan Dawrst submitted in the other thread. If I'm interpreting it correctly, it's that the aggregating function is U = sum (over all beings) of {X*h if h < 0, and h if h >= 0}, where h is the happiness of the individual and X is some large positive number, so suffering gets weighted X times more heavily than happiness. My response to this is: it seems like what you're getting at is that you can imagine really horrible scenarios for people that are much, much worse than you can possibly imagine someone's happiness being good; there is nothing that could happen for me that would make up for a few minutes of being burnt at the stake. I agree with this point--it's much easier to make someone very unhappy than very happy--but it seems to me like this is already built into utilitarianism by the fact that really bad things will generally cause much larger negative spikes in a person's utility than really good things cause positive spikes. The factor of X built into your proposal is just another way of saying that you originally underestimated how shitty life can get, constructed a utility function that didn't actually go as low as people can feel unhappy, and then had to introduce some large coefficient to adjust for it.
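To spell out that last point, here's a toy comparison (the value of X and the sample happiness numbers are made up by me): weighting negative happiness by X inside the aggregator gives exactly the same totals as just measuring unhappiness on a scale that goes X times deeper to begin with.

```python
X = 100.0  # large weight on suffering

def weighted_total(happiness, x=X):
    """The aggregation as interpreted above: negative happiness counts x times over."""
    return sum(h * x if h < 0 else h for h in happiness)

def rescale(h, x=X):
    """The same move expressed as a recalibrated utility scale: bad experiences
    were 'really' x times worse than the original scale admitted."""
    return h * x if h < 0 else h

population = [2.0, 1.5, -0.3]
print(weighted_total(population))           # -26.5
print(sum(rescale(h) for h in population))  # -26.5: identical by construction
```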
Hedonistic and Animal Welfare:
I'm going to group these two together because I think that they address largely the same point. Non-hedonistic (high and low pleasure) utilitarianism and humans-only utilitarianism are both ways of saying, "I like my type of happiness more than yours." (Is it a coincidence that "high pleasure", a concept invented by academics, is generally understood to mean pleasure from academic pursuits, or that a human-pleasure-only system was developed by humans?) Both of these, then, are poster boys for axiom (3): people putting hacks into the philosophy to justify their lifestyles. They also violate axiom (4), as they aren't even that close to well defined. Is a species evolutionarily halfway between monkeys and humans human? How about aliens as smart as us? Similarly, what, exactly, is "high pleasure"? What does playing a board game count as? How about listening to music? How about listening to music you disapprove of?
Why 0 is defined as dead:
First, not feeling anything really should contribute 0 utility: neither good nor bad. From a different angle, any other choice of zero means you have to decide how many unborn, unconceived, dead, brain-dead, or imagined people count towards total utility, because they now make a nonzero contribution. That's a little bit weird: dead people really shouldn't be influencing total utility. If you limit it only to live people, then first of all you're making an arbitrary distinction between people who are dead and people who are in a coma--not feeling anything, totally brain-dead, but whose hearts are still being pumped by a machine--and second of all you're going to have to find some other point to define as 0.
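A tiny illustration of why the zero point matters, assuming the aggregate (total happiness) scoring from earlier; the baseline value and the populations are made up:

```python
def total_utility(feeling, nonfeeling_count, baseline=0.0):
    """Total utility when every dead/unborn/unconscious being contributes `baseline`."""
    return sum(feeling) + nonfeeling_count * baseline

alive = [1.0, 0.5, 2.0]
print(total_utility(alive, nonfeeling_count=10))                    # 3.5: with zero at "not feeling", the count is irrelevant
print(total_utility(alive, nonfeeling_count=10, baseline=-1.0))     # -6.5
print(total_utility(alive, nonfeeling_count=10**9, baseline=-1.0))  # now the answer hinges on how many non-feelers you counted
```

With any nonzero baseline, the total is dominated by an essentially arbitrary head-count of beings who feel nothing, which is exactly the problem described above.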
Anyway, those are my thoughts. Does anyone want to propose a different type of utilitarianism?