jason wrote: What I can't bring myself to endorse is the view that it's morally required to fill the universe with happy lives. To me, that seems morally neutral.
I hold the same view.
jason wrote: While I suppose I would say that a world with 100 happy beings and 5 suffering beings is better than a world with 10 happy beings and 5 suffering beings, I wouldn't say it's a very good thing for 90 more happy beings to be created in the latter world, while I would say helping the 5 not to suffer so much is very good.
Then what do you mean by "better" in the first sentence? Is it just a lexical preference that gets overridden as soon as you're also able to affect the total amount of suffering?
jason wrote: Utilitronium has zero appeal.
jason wrote: That said, bringing about the extinction of sentients doesn't seem like a good thing to me, largely because so many of us want to continue existing and reproducing, and because bringing about extinction would, besides running contrary to those desires, almost invariably cause a huge amount of suffering.
If you think that the thwarting of a preference to go on existing is negative in itself, then your view would likely be some sort of preference utilitarianism. I think
negative preference utilitarianism is a consistent position that gives very intuitive conclusions (by the standards of population ethics, anyway). Its most counterintuitive implication, apart from the general debate over whether preferences matter as opposed to experiential states, is that the creation of an almost perfect life is negative to the extent that it still contains some thwarted preferences. This doesn't seem more counterintuitive than the repugnant and very repugnant conclusions, and on top of that, the negative population ethics fits very well with preference utilitarianism from a more top-down/theoretical perspective. Consider the individual case: Is it morally urgent to add a new preference to an individual that already has preferences, all else being equal (i.e. none of the pre-existing preferences becoming more violated or more fulfilled)? It seems not, not even if the new preference is completely fulfilled. The intuition behind the negative preference view can be translated into "solving problems" as opposed to "creating solved problems".

Furthermore, consider the odd implications of classical preference utilitarianism: you would want to maximize the surplus of satisfied preferences over unsatisfied ones, but since there is no content-requirement for preferences, you could just tile the universe with beings who have very easily satisfiable preferences (very unlike humans), which would be an even more pointless endeavor than the creation of utilitronium. (Preference utilitarianism also often comes in prior-existence varieties inspired by Peter Singer, but I'm pretty sure such views are inconsistent because of transitivity issues.)
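To make that contrast concrete, here is a rough formalization (a sketch of my own, just for illustration): let $s_i$ stand for the satisfied preferences of being $i$ and $t_i$ for its thwarted ones. Classical preference utilitarianism roughly ranks outcomes by

$$V_{\text{classical}} = \sum_i (s_i - t_i),$$

whereas the negative view ranks them by

$$V_{\text{negative}} = -\sum_i t_i.$$

Under the first formula, creating a being with lots of trivially satisfiable (and satisfied) preferences raises the total even though no existing problem gets solved; under the second, creating a new being can at best score zero and scores negatively as soon as a single preference is thwarted, which is exactly the "solving problems" rather than "creating solved problems" intuition.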
Alternatively, if you think that only experiential states matter, then your worry about people's preferences to go on existing would, it seems, just be a concession that the view you're proposing is very counterintuitive. I agree with some of the points made above that the position you advocate might well turn out to be inconsistent once you flesh it out in detail. I think such inconsistency is hard or impossible to avoid if you use the common framework of "positive and negative welfare". This is sometimes called a welfarist axiology, axiology being your "theory of what matters" or, in consequentialist terms, your definition/axiom for "utility". If you say happiness is positive (i.e. ethically preferable over non-existence) for existing beings but not for beings that don't exist, you're introducing the category "existing being" as an ethically relevant entity. On a
reductionist account of personal identity, it becomes questionable whether you can maintain this category and its ethical relevance. The reductionist account would imply that every split second, a "new" person comes into existence, as opposed to a numerically identical person going on existing over time (what could that possibly mean?). It seems that, if the reductionist account of personal identity is correct, all of ethics turns into population ethics. Not killing a being (by omission) becomes, in consequentialist terms, ethically equivalent to creating a new being (by action). And if the happiness of existing beings can make up for their suffering, why not also the happiness of "new" beings?
The reductionist account at first exerts a strong pull towards classical utilitarianism. However, I think it has been overlooked that you can bite the bullet the other way as well, and possibly even more elegantly so: when people imagine whether they would accept some suffering (e.g. walking over hot sand) in order to be happy later (e.g. swimming in the ocean), they think that the future person will still be them, and the choice they are making is egoistic rather than altruistic. Now, I see consequentialist ethics as playing the game of figuring out what some intuitively plausible notion of "systematized altruism" would imply. And interestingly, when people are asked the same question
in an altruistic framing, the answer turns out to be much less classically inclined! If you asked people whether they would want to simulate some painful hot-sand states over here and then simulate some happy ocean states over there, a lot of people would reply something like "Why would I want to create these happy states? And no, you
shouldn't create suffering states!" It seems that the egoistic case is dominated by an impression of personal identity and an evolutionary tendency to want to go on living and experience cool stuff. When we just look at the experiences themselves, moment by moment and in isolation, it becomes much less clear whether the same "exchange rate" should also apply in cases framed as altruistic.
I think that strict
negative hedonistic utilitarianism becomes consistent and much more intuitive if you adopt a different axiology than the one classical utilitarians use. Negative utilitarianism requires a Buddhist view of suffering and contentment. This view claims that happiness is not morally urgent in the same way that the prevention of suffering is. Happiness, according to this view, is ethically equivalent to states that are free of any cravings or longings. There seems to be nothing wrong whatsoever with hedonistically neutral flow states with a low level of self-awareness, in which time is experienced as flying. Likewise, a Buddhist in a meditative state free of any cravings seems to be a perfectly fine thing as well, and it is hard to see why there should be moral urgency to turn such a state into orgasm. This Buddhist view focuses on the moments themselves, not on how much we desire certain moments when we imagine them from the outside. If, in the moment you're in, you have no desire to get out of your state of consciousness or to change something about it (such a desire is what constitutes suffering according to this view!), then everything is perfectly fine also in ethical terms, because the ethical "ought" would correspond to the aggregated, internally felt "wants" of all consciousness-moments.
Adriano Mannino and I have written drafts of papers on both negative hedonistic and negative preference utilitarianism. Anyone interested in reading the drafts is welcome to PM me on Facebook (Lukas Gloor).
I should add that it is technically also possible to hold a negative view in population ethics and be a prioritarian at the same time, but I suspect that much of the appeal of prioritarianism is also present in negative utilitarianism. And prioritarianism has the disadvantage (if one cares about this) that there are infinitely many prioritarian weighting functions that all seem equally plausible on the face of it.
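To illustrate that last point with a toy sketch (my own illustration, not a claim about how any particular prioritarian formalizes the view): prioritarianism typically values an outcome as

$$V = \sum_i w(u_i),$$

where $u_i$ is individual $i$'s welfare and $w$ is some strictly increasing, strictly concave weighting function. But $w(u) = \sqrt{u}$, $w(u) = \log(u + c)$, $w(u) = 1 - e^{-ku}$ and countless other choices all give extra weight to the worse off, and nothing within the theory seems to single out one of them (or one value of $c$ or $k$) as the uniquely correct degree of priority.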