Hedonic Treader: I basically agree with everything in your post. I mostly asked questions about happiness and suffering to see if someone had objections to those concepts similar to my objections against preference.
peterhurford: I think my view of preferences is easier to understand if you take into account the fact that I have a hard time seeing human minds as unitary wholes, in space or time. We're a collection of different systems designed to keep a heap of meat alive long enough to make some babies. (Not that there's anything wrong with being a heap of meat, of course.) The preferences you talk about vary depending on both time (constantly changing moods, changing opinions over time, etcetera) and space (which part of our collection of systems we look at). How do we decide which instance/part of ourselves should be privileged with creating our "preferences"? By assuming a more knowledgeable/smarter/more "perfect" version of ourselves, all we do is dump the responsibility onto an imaginary being that is very different from us.
Of course, all of the above assumes that "preferences" of any sort are even something that can be said to exist in the first place. I might be willing to accept that (as opposed to the existence of the more "serious" preferences PU-people talk about), but in that case they are so trivial that I see no reason to base an entire ethical theory on them. Now, the quality of being experienced as pleasant, on the other hand, is not something that changes depending on the time of day.
(There's also the fact that the simplest way (for a super-AI of some sort, for example) to fulfill everyone's preferences is to change everyone's preferences into something that's easily fulfilled. Of course, we could decide that such preferences are not as valid as the preferences we had before outside forces changed them, but why should we do that? We change each other's preferences constantly, but if a super-AI used its superior intelligence to change our preferences more efficiently, would those preferences suddenly be less real?)
DanielLC wrote: I would say that the preference system of an agent is the one under which it is most intelligent.
Then we need to define both "an agent" and "most intelligent". Believing in an agent that exists over time requires belief in personal identity, which I don't have, due to Occam's Razor: we don't need personal identity to explain anything (in fact, rejecting personal identity solves many philosophical problems). So if we look at a collection of systems and then imagine a similar collection of systems that has existed or might exist, we are imagining a separate collection of systems. We are then imagining what this other collection of systems would say when asked certain questions, or alternatively what certain systems within it would think when asked certain questions. Based on this, we are then supposed to create an ethical system that somehow relates to the first collection of systems.
Me, I'd just say that happiness is good because it feels good and leave it at that. Then again, I fully admit that I'm not very smart and that I might have failed to comprehend certain parts of the PU arguments.
(Note: If I appear to be rude, that's not my intention. I'm simply in a bit of a hurry right now, and I'm trying to express my arguments as concisely as possible as they pop into my head. I can come across as a bit gruff when I do that.)