One of my friends and I have been having an excellent Facebook discussion that I wanted to share more broadly. I've reproduced the most interesting part.
Friend's statement:
And I still believe Eliezer's position does not make sense, Brian. I also believe you changed the subject (or I didn't make it clear enough). You made the point that "we can care about anything we like" and that every object of our care will seem "equally absurd" from an abstract and objective point of view. I gave arguments against the latter claim, but I may grant you the former. (I have some reservations there as well, but let's set them aside for the moment.) But how does that address my original point? I wasn't talking about ethics or what we do or should care about at all. My exclusive focus was the metaphysics of consciousness, i.e. the question of what it is in virtue of which it is true that a certain object is conscious. My answer is: qualia.

But suppose the answer was: third-person observable internal processes ABC. If we then discovered other beings with the exact same behavioral outputs (i.e. pain-behavior) but with different internal processes XYZ, then it would follow that these other beings cannot be conscious. But that's absurd and arbitrary, for we could just as well have defined the truth-maker of consciousness attributions in terms of the internal processes XYZ (which would imply that the beings containing ABC cannot be conscious). A proposed solution might be a disjunctive definition of consciousness: internal processes ABC or XYZ. But now we can already see what's going on here: We're accepting *behavior* as our ultimate metaphysical criterion of consciousness after all! For we can ask: Why just ABC or XYZ and not DEF and GHI as well? (Suppose DEF is what goes on in stars and GHI what goes on in my cell phone.) Well: behavior!
Now let's jump to ethics. Let's set aside the foundational issues and the question of realism and just note that 1) we're both believers in qualia and 2) we're hedonistic (and negative) utilitarians. *Given that*, we *cannot* say that we can "care about what we like" and we cannot make "closeness to our own minds" our basic and ultimate criterion. The only relevant question is this: Where are there qualia, are they negative/painful, and what can we do to make them positive or to end them? That's what we need to find out, and that's our terminal/intrinsic value. "Closeness to my own mind" may be helpful epistemically, inductively (and thus only instrumentally/extrinsically), but it *cannot* be what ultimately matters if we assume qualia and hedonistic utilitarianism. So if I somehow acquired the *certain knowledge* that all beings that are behaviorally and internally-observably similar to me are actually zombies and that there are qualia (constituting subjects of experience) residing in stars and cell phones, then "closeness to me" would become *totally* irrelevant, even instrumentally/extrinsically.
My reply:
"I wasn't talking about ethics or what we do or should care about at all. My exclusive focus was the metaphysics of consciousness, i.e. the question of what it is in virtue of which it is true that a certain object is conscious."
Yeah, well, in my mind these are the same question, because there is no metaphysics of consciousness. Consciousness is whatever we define it to be, and the main motivation for delimiting its boundaries is that we care about things that are conscious and don't care about things that aren't.
"Consciousness" is like "tableness" -- it's a concept, a cluster in thingspace, not a separate metaphysical entity. Is a board with only three legs a table? What if it has a hole in the middle? What if someone primarily uses it to sleep on? What if it's cut in half? What if it's made out of cell phones glued together? What if it's just a drawing on a piece of paper? These questions don't have "real" answers. They depend on how we want to delimit the boundaries of what a "table" is. The same is true for consciousness algorithms.
"We're accepting *behavior* as our ultimate metaphysical criterion of consciousness after all!"
Not necessarily. You could say that because a table can be made of wood or metal, "tableness" should be defined by the disjunction "wood or metal." By that reasoning, the reason for picking just those two materials, and thus the ultimate criterion for what counts as a table, would be whether people use the object to put their dinner plates on. But what if people are having a picnic outside and they put their plates on a rock? Is the rock really a table? Some would say no, just based on the "internals" of the object.
We can get all the same confusion that people have over consciousness when we talk about any old concept. There's nothing specially mysterious about consciousness, except the fact that humans don't know very much about it yet.
"*Given that*, we *cannot* say that we can 'care about what we like' and we cannot make 'closeness to our own minds' our basic and ultimate criterion."
We have to make these choices somehow. Hedonistic utilitarianism is underspecified without saying which things count as subjective experience. Suppose we were table-minimizers (aiming to minimize the expected number of tables in the universe). Would we decide to destroy rocks because people might put their picnic plates on them? Or would we point to a paradigm case, say "we know this is a table," and focus on eliminating things based on how similar they are to it? Both approaches might be reasonable; there's no objective answer to what's a table and what isn't.
"So if I somehow acquired the *certain knowledge* that all beings that are behaviorally and internally-observably similar to me are actually zombies and that there are qualia (constituting subjects of experience) residing in stars and cell phones, then 'closeness to me' would become *totally* irrelevant, even instrumentally/extrinsically."
There could certainly be discoveries and thought experiments that would shift our intuitions such that we no longer care about brain-like things and instead care about stars and cell phones. But these would be changes in our feelings, not a discovery of a metaphysical property of the world. When we have a Gestalt shift, it's our brain's attitude that changes, not the photons coming off of the page.