I just want to say that this is a long and, all things considered, fairly trivial post, and to remind you that if you have more important things to do than read and respond, you should definitely do those first.
I still have mixed feelings. Intuitively I feel like the emotion of suffering itself is bad, and preferences don't have much to do with it. But cognitively, when I think about other value systems that seem totally wrong to me and then remember the Golden Rule (how would I want them to treat my values?), preference utilitarianism (PU) seems more compelling.
As a Christian Agnostic, I'm sympathetic to the Golden Rule, but to be intellectually honest, I have to wonder whether the Golden Rule is necessarily anything more than a very useful heuristic.
This case is harder because the mind isn't rigged. If the guilt is a robust and powerful component of the neural electorate, then presumably it would be right in this case, much as I cringe to think about it. It would be best to avoid creating situations like this, though. We should try to modify people so that they don't feel so much guilt. Also, it's not clear this preference would be stable upon reflection. It may not be an actual idealized preference.
Assume that it really is an idealized preference: perhaps, for instance, the person committed some grave crime that involved torturing another, and he strongly believes in justice and in the notion that the punishment should fit the crime. Let's also assume that the person will not gain any pleasure from having his guilt sated, because he feels he can never truly atone (perhaps because the person he tortured died of his wounds). My own view is that even if the person feels that being tortured is justified, this does not make it correct to torture him. To me, there is something inherently wrong with inflicting suffering, which can only be justified if the suffering leads to more happiness later (as with exercise, for instance).
My strongest argument is the Golden Rule point in the Postscript of my "Hedonistic vs. Preference" piece. What I ultimately want is for my preferences to be satisfied, so that's what I should want for others. I also mention the libertarian intuition that PU better respects personal autonomy (though still not perfectly, because actual preferences are not idealized: they might be myopic, they might be perverse, etc.). Finally, many people seem to care about things besides hedonic experience, so (non-realist) moral uncertainty plays some role too.
Is personal autonomy in and of itself good? I am inclined to view it as something that reliably achieves the good, but isn't worthwhile by itself. Otherwise we would have to argue that freedom is itself a good and that any interference with it is bad.
You're also a (complex) robot that has been programmed (by evolution and development) to do certain things. As you suggest, the crucial distinction is consciousness. I agree there are differences in extent of consciousness and that those differences are morally relevant, but those are differences of degree rather than kind. This discussion of "suffering subroutines" helps illustrate why consciousness is more pervasive than it might seem.
True. And interesting.
The behaviors elicited in response to an apparent expected loss in one's utility function could be seen to constitute suffering whether they involve human-style emotions ("Oh shit! That hurts.") or a change of plans by a calculating agent. The former elicit more sympathy in us than the latter because they can trigger our mirror-neuron systems and such. At a more abstract level, it's less clear they're fundamentally different.
I have mixed feelings here. Obviously my emotions go for the human-style emotions. But what would I want another agent who doesn't have human-style emotions to do? Would I want him to sympathize with his own kind and therefore ignore human-style emotions that are meaningless to him? Or would I want him to respect my preference just because it's a preference, regardless of whether he has robotic sympathies for it?
Well, if both actually constitute a form of suffering, it's arguable that both are bad, regardless of preferences, and that we should get to the bottom of this question of whether or not expected loss in one's utility function is a negative experience to the agent. Respecting preferences seems like a very good heuristic to follow in the meantime though.
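To make the structural point concrete, here is a minimal toy sketch of what "a change of plans by a calculating agent" in response to an expected loss in its utility function might look like. This is purely illustrative; the agent, the utility numbers, and the `expected_utility`/`replan` names are my own invented assumptions, not anything from the discussion linked above.

```python
# Toy illustration: an agent that registers an expected loss in its utility
# function and responds only by changing plans -- no human-style emotions.
# All names and numbers are invented for the example.

def expected_utility(plan, world):
    """Sum the utility the agent expects from a plan under its world model."""
    return sum(world.get(step, 0.0) for step in plan)

def replan(plans, world):
    """Pick whichever available plan currently has the highest expected utility."""
    return max(plans, key=lambda p: expected_utility(p, world))

world_model = {"fix roof": 5.0, "harvest": 8.0, "shelter": 2.0}
plans = [("fix roof", "harvest"), ("shelter",)]

current = replan(plans, world_model)
baseline = expected_utility(current, world_model)

# A storm is forecast: the agent now expects its current plan to do much worse.
world_model["harvest"] = -4.0
loss = baseline - expected_utility(current, world_model)

if loss > 0:
    # The detected loss and the resulting change of plans are the agent's
    # entire "response" -- the question is whether that counts as suffering.
    current = replan(plans, world_model)
    print(f"expected loss {loss}; switching to plan {current}")
```

Structurally, the detected loss and the replanning are all there is; whether that registered loss is a negative experience for such an agent is exactly the question we'd need to get to the bottom of.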
The problem I have with preferences is mostly that they are very arbitrary and prone to conflict. It's very easy to hold diametrically opposing preferences, such as in a zero-sum game, and it's not clear how we should go about resolving such conflicts.
Also, take the overused example of 1000 sadists who want to torture a child. While this problem is challenging for both hedonistic and preference utilitarians, at the very least the hedonists can argue that this isn't actually utility-maximizing, and that what we should do is teach the 1000 sadists to be happy in non-sadistic ways, wirehead them, or have them play a child-torture video game so that no child is actually tortured. But to a preference utilitarian, the sadists' preferences are for that specific thing and can't simply be changed, and for some reason are morally valuable in and of themselves.
I admit that I would want my preferences to be satisfied, but I am inclined to consider this a bias of being a goal-directed entity. As a thought experiment, I have often wondered what it would be like to have no desires, preferences, or values at all. Since these things are arguably programmed into me by evolution and emotions, they aren't really the autonomous choice of the self, but external forces controlling me. Yet without them, I find that there is no real reason for acting or doing anything. I want to exist because I have emotions that make me want to exist. Pure reason alone can give no real purpose or answer the question of the meaning of life. There have to be some things that we value. And what I find is that what we value, regardless of our efforts to ignore it, is what we feel. We feel regardless of what we think.
While it's arguable that everything is deterministic, and that all values are therefore forced upon us without choice, I still like to differentiate between the values that seem absolute or required, and the values that seem relative or optional. Absolute or required values are those that are forced on us by our state of being, by our feelings. Relative or optional values are those that we have some capacity to choose. I can choose to prefer one state of the world over another, but I cannot choose to not feel suffering when it happens. Thus, some preferences are arbitrary, while others are vivid and actual.
I suppose that, all other things being equal, the satisfaction of preferences is better than the opposite: success is better, or more often correct, than failure. But this correctness needs to be grounded somehow, and I think that the correctness of a preference or goal comes from how well it accomplishes what is right. If success meant destroying the universe, or intentionally creating wrongness, then it wouldn't be correct. Conversely, happiness is simply correct. It is the state that sentient entities should be in, because happiness is absolutely valued rather than optional. Happiness > Suffering. Happiness could conceivably lead to bad consequences if, for instance, it became associated with sadism. But the happiness itself, even of a sadist, seems to me to be good or correct; what is wrong, rather, is that the way in which it is achieved involves badness.
I don't, on the other hand, think that Success > Failure without reservation. The rightness of success and failure is goal-dependent. Happiness and suffering are goal-independent. Happiness is often associated with goals because it is an emotional goal state that we often desire, and because accomplishing goals usually leads to happiness, but as an experience, happiness does not actually depend on goals being satisfied. We can be happy simply because we feel so. For instance, a surprise gift from a stranger might make us very happy, even though no goals or preferences were satisfied. Similarly, while people often suffer when they fail at a goal, they can also suffer just because someone decided, out of the blue, to attack them.
It can perhaps be argued that we have implicit preferences to have good surprises or to avoid surprise pain, but then we have to assign to people a myriad of preferences that they don't consciously hold, at which point we are conjecturing about people's true or ideal preferences rather than considering their manifest preferences. To me, the problem with this is that "true preferences" don't actually exist; they are purely an estimate of what a person would think given relevant information and sufficient rationality. This seems exactly as paternalistic as hedonistic utilitarianism, because we are essentially saying that we can know better than the person themselves. Thus, the whole argument that preference utilitarianism respects autonomy depends on accepting manifest preferences.
Just some thoughts.