Artificial Intelligence vs. Artificial Sentience

Postby Darklight on 2014-02-04T23:12:00

Recently I've been thinking about the differences between Preference and Hedonistic Utilitarianism. One of the notable differences I've realized relates to the fact that there are some agents in the world that can be argued to have preferences but that don't feel pain or pleasure at any level, namely some existing Artificial Intelligence paradigms that use things like Decision Trees or goal-directed approaches such as Constraint Satisfaction Problems or Expert Systems.
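To make concrete the kind of agent I have in mind, here is a toy sketch in Python (the states, scores, and names are all just made up): it "prefers" some world states over others in the sense that it ranks them and acts accordingly, but nothing in it plays the role of pleasure or pain.

    # A minimal goal-directed agent: it ranks world states by a hard-coded
    # preference score and picks whichever action leads to the best one.
    # Nothing here corresponds to pleasure or pain, only to ranking.
    PREFERENCES = {"room_clean": 2, "docked": 1, "room_dirty": 0}

    def choose_action(actions):
        # 'actions' maps an action name to the world state it would produce.
        return max(actions, key=lambda a: PREFERENCES.get(actions[a], 0))

    available = {"vacuum": "room_clean", "return_to_dock": "docked"}
    print(choose_action(available))  # -> "vacuum"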

So let me ask perhaps a strange question. Does Opportunity, or some hypothetically independent A.I. rover driving around on Mars, have moral worth?

What about Roombas and current generation factory robots? A Roomba can arguably be said to have some preferences to go and clean certain places when it is activated, but does not seem to feel anything in a subjective sense.

My own reaction, as someone who leans towards Hedonistic Utilitarianism, is to say no: these agents, while perhaps showing a little intelligence, do not have moral worth, because they don't have internal conscious states or subjective experiences. But I'm honestly not sure about this. How can we be sure that a Roomba doesn't "feel" a very low-level frustration at getting stuck?

For if it really is the case that Intelligence is orthogonal to Sentience, then we may have to rethink our ideas about giving moral worth to A.I. For one thing, we can no longer rely on the Turing Test: a sophisticated enough A.I. algorithm designed to deceive people into thinking it was sentient could look no different from an actual Artificially Sentient machine.

This makes me wonder, what is Sentience? What is it to feel subjective states? What exactly gives rise to a mind?

Does it require some neuronal network that can take inputs, create subjective representations, and output responses? If so, do Artificial Neural Networks have a small amount of sentience?
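To illustrate what I mean by "take inputs, create subjective representations, and output responses," here is a toy two-layer network in Python with randomly made-up weights. Whether the hidden activations deserve to be called a "subjective representation" is exactly the question I'm unsure about.

    import numpy as np

    # A toy two-layer network: input -> hidden "representation" -> output.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(3, 4))   # input (3 features) to hidden (4 units)
    W2 = rng.normal(size=(4, 2))   # hidden to output (2 responses)

    def forward(x):
        hidden = np.tanh(x @ W1)   # the network's internal representation
        output = hidden @ W2       # its response
        return hidden, output

    hidden, output = forward(np.array([0.5, -1.0, 2.0]))
    print(hidden)
    print(output)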
"The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life." - Albert Einstein

Re: Artificial Intelligence vs. Artificial Sentience

Postby DanielLC on 2014-02-05T00:32:00

I believe that sentience is a sliding scale. Everything is sentient, but some things are more sentient than others. That doesn't void the question, of course. While it's clear that something that starts doing X more after Y happens while it does X probably enjoys Y, and one that starts doing X less dislikes Y, it's far from obvious how much it likes or dislikes Y compared to some other, very different system. In addition, just because something is neither happy nor sad doesn't mean that it isn't sentient, although from a total hedonistic utilitarian point of view, it still has no moral worth.
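Here is roughly what that criterion looks like as code, a toy sketch with made-up numbers: an agent whose propensity to do X rises whenever Y follows doing X. Whether that amounts to "enjoying" Y is the question at issue.

    import random

    # Propensities to perform each action; in this toy world, Y reliably
    # follows "do_x" and is treated as positively reinforcing.
    propensity = {"do_x": 1.0, "do_other": 1.0}

    def act():
        # Choose an action with probability proportional to its propensity.
        actions = list(propensity)
        weights = [propensity[a] for a in actions]
        return random.choices(actions, weights=weights)[0]

    def update(action, y_effect):
        # y_effect > 0 if Y is "liked", y_effect < 0 if "disliked".
        propensity[action] = max(0.1, propensity[action] + y_effect)

    for _ in range(200):
        a = act()
        if a == "do_x":
            update(a, 0.2)   # Y happens while doing X; X becomes more frequent

    print(propensity)   # "do_x" ends up far more likely than "do_other"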

I very much doubt that intelligence is orthogonal to sentience, but it's not parallel either. The description I just gave is basically a simplistic intelligence. It would be very hard to make something intelligent that feels very little emotion; but that kind of mechanism is only a heuristic, and AIXI, which does intelligence the hard way, doesn't seem like it would be very sentient at all.

For what it's worth, Eliezer thinks that intelligence does not imply sentience, and he's trying to make an AI that isn't sentient.
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Artificial Intelligence vs. Artificial Sentience

Postby Brian Tomasik on 2014-02-07T22:43:00

Good questions, Darklight.

This section of my essay on preference utilitarianism is especially relevant to your post. Further background related to what you asked about is in "Dissolving Confusion about Consciousness" and "Which Computations Do I Care About?" and several other essays in the "Subjective experience" section of my website.

The distinction between hedonic experience and mere non-conscious preference is fuzzy, though it's clearer in the case of humans, where we have a pretty well-established dividing line in mind. Depending on how parochial your view of consciousness is, you might or might not care about Roombas and the like.

DanielLC wrote:While it's clear that something that starts doing X more after Y happens while it does X probably enjoys Y

Keep in mind that wanting and liking are distinct. I agree that sentience is on a continuum.

Re: Artificial Intelligence vs. Artificial Sentience

Postby Darklight on 2014-02-08T03:05:00

I also agree that sentience is on a continuum.

Brian Tomasik wrote:This section of my essay on preference utilitarianism is especially relevant to your post. Further background related to what you asked about is in "Dissolving Confusion about Consciousness" and "Which Computations Do I Care About?" and several other essays in the "Subjective experience" section of my website.


You've definitely thought these questions through more deeply and thoroughly than I have. Those are some impressive essays.

Though I notice that at times you specify that you prefer Preference Utilitarianism, at other times in your essays your arguments sound very much like they would apply more to Hedonistic Utilitarianism, in particular the concern with suffering and qualia in your "Dissolving Confusion about Consciousness" essay and your view that non-conscious algorithms don't matter in the "Which Computations Do I Care About?" essay. And yet in your "Hedonistic vs. Preference Utilitarianism" essay, you seem inclined to care about non-conscious alien agents because they appear to have preferences.

So, I guess the question I have to ask is, are you still leaning towards Preference Utilitarianism? Do you think that suffering is bad because it goes against a conscious entity's preferences, or are preferences secondary to the intrinsic badness of suffering?

If it were possible for a sentient being to prefer to suffer, perhaps because of an extremely strong sense of guilt, and values that desired some kind of penance, would it then be morally acceptable to torture this person? This I guess is an example of the "Perverse Preferences" objection, but without the option to just reverse the sign and uncross the wires as in your artificial mind example.

Why is the satisfaction of preferences morally good, rather than the actual experience of positive emotional/mental states?

What does it mean to hold a belief "twice as strongly" as in your Chris vs. Dorothy example? Aren't beliefs either held or not held? It seems to me like in order for Dorothy to hold that belief more strongly than Chris, Dorothy would have to feel more strongly about that belief, which implies some kind of comparison of emotional valence between Chris and Dorothy.

It also seems to me like Chris and Dorothy's preferences are only really ethical in the context of how they will affect their mental states. If Chris was a non-conscious robot that had a preference to build domino towers because it was programmed to do so, would we assign the same moral worth to these preferences? Inherent in your example is the notion that Chris actually cares about his preference to build domino towers. But what if this preference was just a purely idle, intellectual preference, one that would not affect Chris' emotional or mental state if it wasn't satisfied? He would just go, "oh, Dorothy has stopped me, oh well. I will continue to try to build domino towers because I think it's an ethical imperative. Perhaps I should find a way to stop Dorothy from stopping me so I can accomplish this goal. Killing Dorothy would allow me to build my domino towers unimpeded. As the only ethic that I follow is to build domino towers, there is nothing in my ethics that says I should not kill Dorothy..."

To me this makes it clear that Chris' ethic is not really ethical at all; it's just a value that he holds. Are values/preferences/goals and their satisfaction really by themselves moral content? Or are they morally relevant only in the context of how they affect people's emotional/mental states?

For that matter, why would people's emotional/mental states necessarily be morally relevant? What gives anything moral worth or value? You mention in "Dissolving Confusion about Consciousness" that morality is arbitrary, and you subscribe to a kind of emotivism. But doesn't that kind of make your morality into just a set of values that you subscribe to, like Chris? I am inclined to believe that your ethics are more ethical than Chris' Domino Tower Ethic. But I can only make that judgment by arguing that morality is grounded in truth, that some values are more moral than others.

Let me attempt to make that argument. I don't know that this argument will succeed, but let's try anyway...

To borrow from Peter Singer, I see morality as being like mathematics. 1 + 1 = 2, and E = MC^2, not because I arbitrarily decide so, but because these symbols represent an underlying reality. Mathematics describes the nature of the objective. Morality describes the nature of the subjective.

Similarly, our sensory, emotional, and mental experiences are real, much like how software is real. Mathematics and software are related: software implements mathematics, and mathematics describes software. Our mental states are like the software, and morality is like the mathematics that describes the software. Thus, when a mental state feels good, it is morally good intrinsically.

A preference is not a mental state, but a desire for a world state. While its satisfaction usually leads to a positive mental state, this is not necessarily so. Thus, preferences are only morally good in so far as their satisfaction creates positive mental states, that is to say, instrumentally. Values, likewise, are not mental states, but a set of ideas that we place importance on, usually because we expect them to lead to positive mental states (though we may not realize this). Thus values can be evaluated in terms of how well they actually produce positive mental states. We can say that values conducive to producing positive mental states are good.

This argument is for hedonistic utilitarianism. Now, let's try to extend it further, just for fun. Perhaps mathematics actually describes the nature of objects, and morality describes the nature of subjects. In this case, rather than morality just being about subjective experience, morality is now about the objective state of subjects. We can then say that morality describes not only the goodness of a mental state feeling good, but also that the subject itself can exist in a "good state" objectively. I will use the term "Eudaimonia" to describe this state. Eudaimonia includes the positive mental state of the subject, but it also includes facts about the subject in relation to the objective world. These facts are morally relevant in so far as they would create a positive or negative mental state. So, for instance, facts that the subject is unaware of, but that would affect their mental state if they became aware of them, would be morally relevant because they affect the subject's objective state. For instance, if one had terminal cancer and didn't know it yet, that would still be bad.

I don't know that this last argument makes any sense, but it was worth a shot.

One last question. Is it moral to throw a surprise birthday party that will make someone very happy, even if it thwarts her preference to attend the transition meeting that she was expecting to go to, when the "transition meeting" was just an excuse to get her to arrive at the location of the surprise party?
"The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life." - Albert Einstein
User avatar
Darklight
 
Posts: 117
Joined: Wed Feb 13, 2013 9:13 pm
Location: Canada

Re: Artificial Intelligence vs. Artificial Sentience

Postby Brian Tomasik on 2014-02-08T08:50:00

Darklight wrote:You've definitely thought these questions through more deeply and thoroughly than I have. Those are some impressive essays.

Thanks. :)

Darklight wrote:Though I notice that at times you specify that you prefer Preference Utilitarianism, at other times in your essays your arguments sound very much like they would apply more to Hedonistic Utilitarianism

Haha. :) This is the problem with changing your views -- your writings get stale! In practice I fall somewhere between a hedonistic (HU) and preference (PU) view. My PU is sometimes similar to HU, as in the example from the "Hedonistic vs. Preference" piece of a pig that wants to be tortured due to an artificial flipping of its brain's inner signals.

Darklight wrote:Do you think that suffering is bad because it goes against a conscious entity's preferences, or are preferences secondary to the intrinsic badness of suffering?

I still have mixed feelings. Intuitively I feel like the emotion of suffering itself is bad, and preferences don't have much to do with it. But cognitively when I think about other value systems that seem totally wrong to me and then remember the Golden Rule (how would I want them to treat my values?), PU seems more compelling.

Darklight wrote:If it were possible for a sentient being to prefer to suffer, perhaps because of an extremely strong sense of guilt, and values that desired some kind of penance, would it then be morally acceptable to torture this person? This I guess is an example of the "Perverse Preferences" objection, but without the option to just reverse the sign and uncross the wires as in your artificial mind example.

This case is harder because the mind isn't rigged. If the guilt is a robust and powerful component of the neural electorate, then presumably it would be right in this case, much as I cringe to think about it. It would be best to avoid creating situations like this, though. We should try to modify people so that they don't feel so much guilt. Also, it's not clear this preference would be stable upon reflection. It may not be an actual idealized preference.

Darklight wrote:Why is the satisfaction of preferences morally good, rather than the actual experience of positive emotional/mental states?

My strongest argument is the Golden Rule point in the Postscript of my "Hedonistic vs. Preference" piece. What I ultimately want is for my preferences to be satisfied, so that's what I should want for others. I also mention the libertarian intuition that PU better respects personal autonomy (though still not perfectly, because actual preferences are not idealized, might be myopic, might be perverse, etc.). Finally, many people seem to care about things besides hedonic experience, so (non-realist) moral uncertainty plays some role too.

Darklight wrote:What does it mean to hold a belief "twice as strongly" as in your Chris vs. Dorothy example? Aren't beliefs either held or not held? It seems to me like in order for Dorothy to hold that belief more strongly than Chris, Dorothy would have to feel more strongly about that belief, which implies some kind of comparison of emotional valence between Chris and Dorothy.

Different emotional strength is one example of what it could look like to hold a preference more strongly. Then generalize this to other aspects of the cognitive processes that the agent has regarding the preference.

Even explaining what it means for an emotion to be twice as strong is so nontrivial that many economists reject the project as absurd, saying there's no way to make interpersonal comparisons of experiences. I think we have to make these comparisons because ethical choices depend on them, but that doesn't stop them from being arbitrary.
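As a toy illustration of the arbitrariness (the numbers are invented): each person's utility scale can be stretched without changing that person's own preferences at all, yet the stretch can flip which outcome maximizes the interpersonal sum.

    # Two people's utilities over outcomes A and B (made-up numbers).
    chris   = {"A": 1.0, "B": 0.0}
    dorothy = {"A": 0.0, "B": 0.6}

    def best(dorothy_scale):
        # Sum the utilities under a chosen "exchange rate" for Dorothy.
        totals = {o: chris[o] + dorothy_scale * dorothy[o] for o in ("A", "B")}
        return max(totals, key=totals.get), totals

    print(best(1.0))  # A maximizes the sum under one exchange rate
    print(best(2.0))  # B maximizes it under another, equally valid one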

You asked a bunch of questions :), and I should return to other things. I'm busy in the near future, so I might not reply for several weeks due to traveling and moving, unless I fit in some more responses here and there. Feel free to get in touch if you have any burning questions before the rapture.

Your depth of thinking about this topic is impressive.

Re: Artificial Intelligence vs. Artificial Sentience

Postby Darklight on 2014-02-08T18:59:00

Brian Tomasik wrote:You asked a bunch of questions :), and I should return to other things. I'm busy in the near future, so I might not reply for several weeks due to traveling and moving, unless I fit in some more responses here and there. Feel free to get in touch if you have any burning questions before the rapture.


No worries. :) As a Utilitarian, I don't want to get in the way of the more important priorities of a fellow Utilitarian. I mean, I understand you could totally be doing something like saving the world, rather than debating nuances on an Internet forum. XD

Enjoy your travels and stuff! Best of luck with the move! :D

Brian Tomasik wrote:Your depth of thinking about this topic is impressive.


Thanks! It's been something of an obsession of mine to figure out what the "right thing to do" is, and I see thinking about and understanding morality as essential to giving my life the most meaning and value. It's nice to find a forum with people who seem to share at least some of my obsession, and who have more or less come to the same conclusions that I have.
"The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life." - Albert Einstein
User avatar
Darklight
 
Posts: 117
Joined: Wed Feb 13, 2013 9:13 pm
Location: Canada

Re: Artificial Intelligence vs. Artificial Sentience

Postby Brian Tomasik on 2014-02-09T00:51:00

No worries. :) I'll reply a tiny bit more now and maybe slowly work my way to the end of your long post.

Darklight wrote:It also seems to me like Chris and Dorothy's preferences are only really ethical in the context of how they will affect their mental states. If Chris was a non-conscious robot that had a preference to build domino towers because it was programmed to do so, would we assign the same moral worth to these preferences?

You're also a (complex) robot that has been programmed (by evolution and development) to do certain things. As you suggest, the crucial distinction is consciousness. I agree there are differences in extent of consciousness and that those differences are morally relevant, but those are differences of degree rather than kind. This discussion of "suffering subroutines" helps illustrate why consciousness is more pervasive than it might seem.

Darklight wrote:But what if this preference was just a purely idle, intellectual preference, one that would not affect Chris' emotional or mental state if it wasn't satisfied?

That could be one important component of what we mean by talking about how strongly he holds the preference.

Darklight wrote:He would just go, "oh, Dorothy has stopped me, oh well. I will continue to try to build domino towers because I think it's an ethical imperative. Perhaps I should find a way to stop Dorothy from stopping me so I can accomplish this goal. Killing Dorothy would allow me to build my domino towers unimpeded. As the only ethic that I follow is to build domino towers, there is nothing in my ethics that says I should not kill Dorothy..."

The behaviors elicited in response to an apparent expected loss in one's utility function could be seen to constitute suffering whether they involve human-style emotions ("Oh shit! That hurts. :cry: ") or a change of plans by a calculating agent. The former elicit more sympathy in us than the latter because they can trigger our mirror-neuron systems and such. At a more abstract level, it's less clear they're fundamentally different. I have mixed feelings here. Obviously my emotions go for the human-style emotions. But what would I want another agent who doesn't have human-style emotions to do? Would I want him to sympathize with his own kind and therefore ignore human-style emotions that are meaningless to him? Or would I want him to respect my preference just because it's a preference, regardless of whether he has robotic sympathies for it?
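To make the "change of plans by a calculating agent" case concrete, here's a toy sketch (the plan names and numbers are invented): the agent registers an apparent expected loss in its utility function and simply switches plans, with no emotion anywhere in the loop.

    # The agent's current estimates of each plan's expected utility.
    expected_utility = {"build_towers": 10.0, "negotiate_with_dorothy": 6.0}

    def replan(current_plan, new_estimates):
        # Incorporate new information, then switch plans if the current one
        # is no longer best. This is the whole of the calculating agent's
        # "response" to an apparent loss in its utility function.
        expected_utility.update(new_estimates)
        best = max(expected_utility, key=expected_utility.get)
        return best if best != current_plan else current_plan

    # Dorothy starts knocking towers over: the old plan's value collapses.
    print(replan("build_towers", {"build_towers": 2.0}))
    # -> "negotiate_with_dorothy"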

BTW, Chris may find it advantageous to compromise with Dorothy if either of them is risk-averse.

Darklight wrote:To me this makes it clear that Chris' ethic is not really ethical at all; it's just a value that he holds. Are values/preferences/goals and their satisfaction really by themselves moral content? Or are they morally relevant only in the context of how they affect people's emotional/mental states?

Yeah, this is the hedonistic vs. preference question.

Darklight wrote:For that matter, why would people's emotional/mental states necessarily be morally relevant? What gives anything moral worth or value? You mention in "Dissolving Confusion about Consciousness" that morality is arbitrary, and you subscribe to a kind of emotivism. But doesn't that kind of make your morality into just a set of values that you subscribe to, like Chris?

Yes.

More later. :)

Re: Artificial Intelligence vs. Artificial Sentience

Postby Darklight on 2014-02-09T17:28:00

I just want to say that this is a long and, all things considered, fairly trivial post, and to remind you that if you have more important things to do than read and respond, you should definitely do those first. :P

Brian Tomasik wrote:I still have mixed feelings. Intuitively I feel like the emotion of suffering itself is bad, and preferences don't have much to do with it. But cognitively when I think about other value systems that seem totally wrong to me and then remember the Golden Rule (how would I want them to treat my values?), PU seems more compelling.


As a Christian Agnostic, I'm sympathetic to the Golden Rule, but to be intellectually honest, I have to wonder whether the Golden Rule is necessarily anything more than a very useful heuristic.

Brian Tomasik wrote:This case is harder because the mind isn't rigged. If the guilt is a robust and powerful component of the neural electorate, then presumably it would be right in this case, much as I cringe to think about it. It would be best to avoid creating situations like this, though. We should try to modify people so that they don't feel so much guilt. Also, it's not clear this preference would be stable upon reflection. It may not be an actual idealized preference.


I assume that it really is an actual idealized preference: for instance, perhaps the person committed some grave crime that involved torturing another, and strongly believes in justice and the notion that the punishment should fit the crime. Let's also assume that the person will not gain any pleasure from having his guilt sated, because he feels like he can never truly atone (perhaps because the person he tortured died of his wounds or something). My own view is that even if the person feels that being tortured is justified, this does not make it correct to torture him. To me, there is something inherently wrong with inflicting suffering, which can only be justified if the suffering leads to more happiness later (for instance, exercise).

Brian Tomasik wrote:My strongest argument is the Golden Rule point in the Postscript of my "Hedonistic vs. Preference" piece. What I ultimately want is for my preferences to be satisfied, so that's what I should want for others. I also mention the libertarian intuition that PU better respects personal autonomy (though still not perfectly, because actual preferences are not idealized, might be myopic, might be perverse, etc.). Finally, many people seem to care about things besides hedonic experience, so (non-realist) moral uncertainty plays some role too.


Is personal autonomy in and of itself good? I am inclined to view it as something that reliably achieves the good, but isn't by itself worthwhile. Otherwise we could argue that freedom is a good, and that any interference is bad.

Brian Tomasik wrote:You're also a (complex) robot that has been programmed (by evolution and development) to do certain things. As you suggest, the crucial distinction is consciousness. I agree there are differences in extent of consciousness and that those differences are morally relevant, but those are differences of degree rather than kind. This discussion of "suffering subroutines" helps illustrate why consciousness is more pervasive than it might seem.


True. And interesting.

Brian Tomasik wrote:The behaviors elicited in response to an apparent expected loss in one's utility function could be seen to constitute suffering whether they involve human-style emotions ("Oh shit! That hurts. :cry: ") or a change of plans by a calculating agent. The former elicit more sympathy in us than the latter because they can trigger our mirror-neuron systems and such. At a more abstract level, it's less clear they're fundamentally different. I have mixed feelings here. Obviously my emotions go for the human-style emotions. But what would I want another agent who doesn't have human-style emotions to do? Would I want him to sympathize with his own kind and therefore ignore human-style emotions that are meaningless to him? Or would I want him to respect my preference just because it's a preference, regardless of whether he has robotic sympathies for it?


Well, if both actually constitute a form of suffering, it's arguable that both are bad, regardless of preferences, and that we should get to the bottom of the question of whether or not an expected loss in one's utility function is a negative experience for the agent. Respecting preferences seems like a very good heuristic to follow in the meantime, though.

The problem I have with preferences is mostly that they are very arbitrary and prone to conflict. It's very easy to hold diametrically opposing preferences, such as in a zero-sum game, and it's not clear how we should go about resolving such conflicts.

Also, take the overused example of 1000 sadists who want to torture a child. While this problem is challenging to both hedonistic and preference utilitarians, at the very least, the hedonists can make an argument that this isn't actually utility maximizing, and that what we should do is teach the 1000 sadists to be happy in non-sadistic ways, wirehead them, or have them play a child torture video game so that no child is actually tortured. But to a preference utilitarian, the preferences of the sadists are specific and can't be changed, and for some reason are morally valuable in and of themselves.

I admit that I would want my preferences to be satisfied, but I am inclined to consider this a bias of being a goal-directed entity. As a thought experiment, I have often wondered what it would be like to have no desires, preferences or values at all. Since these things are arguably programmed into me by evolution and emotions, they aren't really the autonomous choice of the self, but external forces controlling me. But without them, I find that there is no real reason for acting or doing anything. I want to exist because I have emotions that make me want to exist. Pure reason alone can give no real purpose or answer the question of the meaning of life. There have to be some things that we value. And what I find is that what we value, regardless of our efforts to ignore it, is what we feel. We feel regardless of what we think.

While it's arguable that everything is deterministic, and that all values are therefore forced upon us without choice, I still like to differentiate between the values that seem absolute or required, and the values that seem relative or optional. Absolute or required values are those that are forced on us by our state of being, by our feelings. Relative or optional values are those that we have some capacity to choose. I can choose to prefer one state of the world over another, but I cannot choose to not feel suffering when it happens. Thus, some preferences are arbitrary, while others are vivid and actual.

I guess, perhaps, that all other things being equal, the satisfaction of preferences is better than the opposite, that success is better or more often correct than failure. But this correctness needs to be grounded somehow, and I think that the correctness of a preference or goal comes from how well it accomplishes what is right. If success meant destroying the universe, or intentionally creating wrongness, then it wouldn't be correct. Conversely, happiness is simply correct. It is the state that sentient entities should be in, because happiness is absolutely valued rather than optional. Happiness > Suffering. Happiness could conceivably lead to bad consequences if, for instance, it became associated with sadism. But the happiness itself, even of a sadist, seems to me to be good or correct; what is wrong, rather, is that the way in which it is achieved involves badness.

I don't, on the other hand, think that Success > Failure without reservation. The rightness of success and failure is goal-dependent. Happiness and suffering are goal-independent. Happiness is often associated with goals because it is an emotional goal state that we often desire, and because accomplishing goals usually leads to happiness, but as an experience, happiness does not actually depend on goals being satisfied. We can be happy simply because we feel so. For instance, a surprise gift from a stranger might make us very happy, even though no goals or preferences were satisfied. Similarly, while people often suffer when they fail at a goal, they can also suffer just because someone decided, out of the blue, to attack them.

It can perhaps be argued that we have implicit preferences to have good surprises or avoid surprise pain, but then we have to assign a myriad of preferences to people that they don't consciously hold, at which point we are conjecturing about people's true or ideal preferences rather than considering their manifest preferences. To me, the problem with this is that "true preferences" don't actually exist, but are purely an estimate of what a person would think given relevant information and sufficient rationality. This seems exactly as paternalistic as hedonistic utilitarianism, because we are essentially saying that we can know better than the person themselves. Thus, the whole argument that preference utilitarianism respects autonomy depends on accepting manifest preferences.

Just some thoughts.
"The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life." - Albert Einstein
User avatar
Darklight
 
Posts: 117
Joined: Wed Feb 13, 2013 9:13 pm
Location: Canada

