Utilitronium shockwave

If you had to answer the genie now, would you ask for a utilitronium shockwave?

Yes: 22 (71%)
No: 3 (10%)
Not sure: 6 (19%)
Total votes: 31

Utilitronium shockwave

Postby Brian Tomasik on 2011-12-11T06:18:00

From "The Singularity and Machine Ethics":
Suppose an unstoppably powerful genie appears to you and announces that it will return in fifty years. Upon its return, you will be required to supply it with a set of consistent moral principles which it will then enforce with great precision throughout the universe. For example, if you supply the genie with hedonistic utilitarianism, it will maximize pleasure by harvesting all available resources and using them to tile the universe with identical copies of the smallest possible mind, each copy of which will experience an endless loop of the most pleasurable experience possible.

(We've discussed this several times on Felicifia.)

How many people support vs. oppose a utilitronium shockwave? On a rational level? On a visceral level? You might say you would delay it until we learn more, in case there's something better that we haven't yet discovered. But what would you do if you had to give the genie your answer now?

I strongly support a utilitronium shockwave on a visceral level. This has been true ever since I heard about the idea 6 years ago.
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: Utilitronium shockwave

Postby Gedusa on 2011-12-11T13:15:00

That's another paper for my "to-read" list....

I'm currently the sole "NO" here (Edit: Still the sole NO. The majority is against me. Hmmm). Some people will know my reasoning, but for those of you who don't:

I'm not a realist about metaethics. If you put a gun to my head and forced me to narrow that down, I'd say I was an emotivist, with some caveats. I have lots of random intuitions about what I should be doing, and I try to bring those to some sort of reflective equilibrium (I think that's one of the areas in which I depart from normal emotivists). My intuitions do not collapse down into: "prevent suffering, cause happiness". They probably don't even collapse into: "maximize the satisfaction of preferences". They don't even collapse down into perfect altruism - no matter how I try to force them.
Hence, I remain largely (>50%) selfish. I don't care as much about animals as I do about humans. I care about people close to me more than I care about strangers. Of course, I still care, and I can do math - so I still want to stop wild-animal suffering, etc. And of course I walk a fine line between what's in accordance with my ethics and what's just needed for me to be psychologically healthy...

But anyway, that's a long-winded way of saying: "I'm not a utilitarian in the pure sense, therefore I don't endorse utilitronium shockwaves, as there are configurations of the universe I would regard as having higher value than that."

So no, I don't endorse it on a visceral or rational level. If presented with this genie, I would work really hard (or pay other people to work really hard) on a way of getting my intuitions into a coherent state - e.g. by getting an Oracle AI to take my brain state and cohere it, or something. If the genie said I had ten minutes to answer, I'd probably reel off something about the current preferences of all beings which have preferences, weighted by the strength of those preferences and completely banning torture and some other stuff - though that would probably go horribly wrong.

Alan: I seem to recall you're an emotivist as well? I struggle to understand how humans, acting only on their own intuitions, can end up endorsing utilitronium shockwaves :) Can you give me a run-down of the factors you think led your intuitions in this direction? I kinda get how realists about metaethics might like it, but - bleh!
World domination is such an ugly phrase. I prefer to call it world optimization
Gedusa
 
Posts: 111
Joined: Thu Sep 23, 2010 8:50 pm
Location: UK

Re: Utilitronium shockwave

Postby Hedonic Treader on 2011-12-11T14:11:00

I voted yes, given that the answer is supposed to be immediate and that a utilitronium shockwave is relatively well-defined and a significantly better outcome than any we can realistically expect from the actual future(s) that follow after you read this.

However, if given the option to ask for details and elaborate on them, the focus on the simplest minds possible would be replaced with a focus on sentient complexity, ideally allowing for sapience and the experience of self-determination, while maintaining the "spreading exponentially", "free from (involuntary) suffering" and "high hedonistic quality" aspects.

If pressed for priorities in a trade-off, increasing success probabilities in preventing large-scale suffering comes first, then allowing for pleasure, then complexity of experiential modes, then sapience and self-determination.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Utilitronium shockwave

Postby Brian Tomasik on 2011-12-11T22:27:00

Thanks for the votes. :)

Hedonic Treader wrote:If pressed for priorities in a trade-off, increasing success probabilities in preventing large-scale suffering comes first, then allowing for pleasure, then complexity of experiential modes, then sapience and self-determination.

Cool. I'm with you on the first two but indifferent on the second two. I don't care about complexity or sapience, except to the extent that they're instrumentally useful for reducing suffering and creating happiness.

Gedusa wrote:I'm not a realist about metaethics. If you put a gun to my head and forced me to narrow that down, I'd say I was an emotivist, with some caveats. I have lots of random intuitions about what I should be doing

Yep, same here. :) However, it usually happens that those feelings align in the direction of "preventing lots of suffering with some much-lower priority for creating happiness."

Gedusa wrote:They don't even collapse down into perfect altruism - no matter how I try to force them.

You're not alone. I'm far from a perfect altruist due to ordinary human weakness. However, if I could create an AI-Alan to replace myself, I probably would make it a perfect altruist.

Gedusa wrote:Alan: I seem to recall you're an emotivist as well? I struggle to understand how humans, acting only on their own intuitions, can end up endorsing utilitronium shockwaves :) Can you give me a run-down of the factors you think led your intuitions in this direction? I kinda get how realists about metaethics might like it, but - bleh!

:)

I think part of the divergence comes from how we imagine utilitronium. Think about some of your favorite experiences -- say, seeing your best friend after being away for two years. What makes that moment feel so good? The subjective experience of goodness is created by certain brain circuits firing in a particular fashion. What's important is not the actual presence of your friend but the way you feel about it. For example, I could in principle rewire your brain so that you would find your friend repulsive and painful.

Utilitronium could be seen as a limit of the process of making simulated happiness more and more efficient. For instance, you'd probably like the thought of friends in a virtual-reality environment meeting and enjoying each other's company. But the simulation of their environment isn't really so important, so why not just simulate their bodies and brains? Well, their foot bones and lung movements aren't so important either, so maybe just simulate their minds. But their audio processing and smell perception aren't so important, so why not just simulate the portions of their brains that create the feeling of enjoyment? And then multiply those circuits across the universe. Somewhere along that chain, maybe you decline to go further?
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: Utilitronium shockwave

Postby Gedusa on 2011-12-12T00:54:00

@ Alan:
Hmm. Creating an AI to replace me with whatever values I wanted is an interesting thought-experiment for probing what I actually value. I'll have to think about it some more. I think it would still be selfish, though. I think we differ about which values we reject: you would reject the selfish ones based on your collapsed-down altruistic values; I wouldn't.

I think I see more clearly where you're coming from now but I don't fully agree (not that that matters to two emotivists!)

I value the physical presence of the friend in the sense that I value them being there virtually and in the physical world equally, but I don't value things as much (if at all) if you just stimulated the brain parts involved in feeling happy at the friend's presence. I think I can collapse this down to: I don't like the idea of wireheading/being in the experience machine. So my intuition is that if my actions aren't affecting anything in reality, then they are less worthwhile (though still worthwhile; I like dreams). Hmm. Saying that it's worthwhile at all leaves me open to simulating my mind/environment if there is a certain differential between the utility of the real world and the simulated one.

But yeah, there is a point along that chain where I'd decide to stop - probably between simulating their whole minds and simulating just the bits which create pleasure. That screams "fake" and "not-human", to which I instantly assign lower utility. But, surprisingly, progression along the chain also gives the whole thing steadily less value. Also: Universes that simple aren't in accordance with my values. There has to be a certain amount of complexity of the type humans/I like for the universe to count as Fun and Worthwhile.

And I do have a vulnerability here. I don't assign utilitronium zero value. And there is less resource cost to utilitronium than to more complex things which I value more. So it's plausible that I could desire a utilitronium universe as there would be more total value due to much lower resource costs. It would depend on the relative resource costs and what value I eventually assigned various outcomes.
World domination is such an ugly phrase. I prefer to call it world optimization
Gedusa
 
Posts: 111
Joined: Thu Sep 23, 2010 8:50 pm
Location: UK

Re: Utilitronium shockwave

Postby DanielLC on 2011-12-12T01:37:00

Exactly how much do you have to specify? I suspect that minds have to be different to count as separate minds, but I don't know how different. I also don't fully understand what happiness is. If all I have to say is "hedonistic utilitarianism", I'd go for it, but otherwise I'm not so sure.

On a visceral level, I can accept it if I just focus on how happy it will be. If I just imagine chunks of utilitronium it doesn't seem so nice, but that's not really what's going on.
Consequentialism: The belief that doing the right thing makes the world a better place.

DanielLC
 
Posts: 703
Joined: Fri Oct 10, 2008 4:29 pm

Re: Utilitronium shockwave

Postby Brian Tomasik on 2011-12-12T04:46:00

Gedusa wrote:I think I see more clearly where you're coming from now but I don't fully agree (not that that matters to two emotivists!)

:)

Gedusa wrote:I value the physical presence of the friend in the sense that I value them being there virtually and in the physical world equally, but I don't value things as much (if at all) if you just stimulated the brain parts involved in feeling happy at the friend's presence.

Interesting. So you still might favor a bare-bones simulation of their surrounding environment in order to make the experience "real" but to reduce the computing load as much as possible?

Gedusa wrote:Also: Universes that simple aren't in accordance with my values. There has to be a certain amount of complexity of the type humans/I like for the universe to count as Fun and Worthwhile.

I see -- cool. Seems to be a relatively common intuition.

DanielLC wrote:Exactly how much do you have to specify? I suspect that minds have to be different to count as separate minds

I don't share the sentiment, but again, it's one I've heard a few times before.
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: Utilitronium shockwave

Postby Gedusa on 2011-12-12T15:23:00

Brian Tomasik wrote:Interesting. So you still might favor a bare-bones simulation of their surrounding environment in order to make the experience "real" but to reduce the computing load as much as possible?

Yes.
Brian Tomasik wrote:Seems to be a relatively common intuition.

Yeah, I think it's fairly human-typical... Possibly that's one of the reasons I'm more willing to endorse the human CEV (with tonnes of caveats) than you are.

And I share Daniel's sentiment on different minds. I'm not even sure that (to my values) the whole of the universe being tiled with utilitronium wouldn't be the same as just one speck of dust being utilitronium.
World domination is such an ugly phrase. I prefer to call it world optimization
Gedusa
 
Posts: 111
Joined: Thu Sep 23, 2010 8:50 pm
Location: UK

Re: Utilitronium shockwave

Postby Brian Tomasik on 2011-12-13T09:13:00

Gedusa wrote:
Seems to be a relatively common intuition.

Yeah, I think it's fairly human-typical... Possibly that's one of the reasons I'm more willing to endorse the human CEV (with tonnes of caveats) than you are.

Exactly. Oddballs like me have more to worry about.
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: Utilitronium shockwave

Postby Arepo on 2011-12-21T19:51:00

Lean towards 'yes' (although viscerally a no), but I'm sticking with 'don't know' for now because I don't think the question is - or easily could be - well defined. I don't know about the motivations or abilities of the genie (a creature notorious in popular mythology for twisting wishes to something the wisher neither expected nor wanted), or even quite what I would be asking him for. It would be something like 'maximise happiness', but I'd rather like to have a good definition of the word 'happiness' before I made such a commitment.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Utilitronium shockwave

Postby DanielLC on 2011-12-21T21:10:00

Consequentialism: The belief that doing the right thing makes the world a better place.

DanielLC
 
Posts: 703
Joined: Fri Oct 10, 2008 4:29 pm

Re: Utilitronium shockwave

Postby Arepo on 2011-12-21T23:26:00

If the genie is defined so blandly as to make the universe perfect, then I guess I say aye, but that seems like an uninteresting question, basically just equivalent to confirming that I'm broadly utilitarian.

Even then, I don't believe in norms, so if I'm right, the input 'what I should wish for' would probably generate a system error.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Utilitronium shockwave

Postby rehoot on 2012-01-09T22:58:00

I voted *no* for many reasons. I personally think that overpopulation is the greatest problem facing society, and I see overpopulation as a negative influence on me personally and on the well-being of other species. I have no reason to believe that constructing a specialized "mind" that likes living in a state of gross overpopulation and lack of diversity is a worthy cause--it necessarily calls for the extinction of ALL current species in the entire universe. For the "created minds" to continue to be "happy," they would need to be created to be ignorant of the past beauty and diversity of the planet or they must be programmed like robots to lack any semblance of free will or critical thought--perhaps to the extent that they live in a fantasy world.

Who would fix the toilets or sustain the infrastructure to support the artificial fantasies of the pleasure zombies? Perhaps a class of slaves? The one wish wouldn't afford self-replicating robot infrastructure to care for everybody...

rehoot
 
Posts: 161
Joined: Wed Dec 15, 2010 7:32 pm

Re: Utilitronium shockwave

Postby DanielLC on 2012-01-10T00:53:00

rehoot wrote:perhaps to the extent that they live in a fantasy world.


What do you mean by "fantasy world"?
Consequentialism: The belief that doing the right thing makes the world a better place.

DanielLC
 
Posts: 703
Joined: Fri Oct 10, 2008 4:29 pm

Re: Utilitronium shockwave

Postby Brian Tomasik on 2012-01-10T04:29:00

I don't think utilitronium minds would have much need for toilets. :) But it's true they might require some robots to maintain the infrastructure and do repairs. These robots would be non-sentient, so they would no more be "slaves" than is a Roomba.

The minds could have knowledge of the past if the designers wanted it that way. It's just that the minds would prefer to have the universe be filled with what seems to us dull utilitronium; they could feel good about the loss of diversity. Of course, in practice, the additional computation to support detailed knowledge about the world might be unnecessary overhead -- probably better to have the minds just feel good rather than feel good about anything in particular.
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: Utilitronium shockwave

Postby Pablo Stafforini on 2012-02-29T17:52:00

I answered 'Yes'. My only caveat would be this. Such a shockwave would be immediate and (presumably) irreversible. Yet if we are not completely certain that hedonistic utilitarianism (in either its classical or negative varieties) is the correct moral theory, we might want to allow ourselves some time for thinking carefully about this question, rather than rushing to a decision which cannot be undone. However, without having devoted much time to the issue, I'm inclined to believe that in light of present risks of extinction, it is preferable to opt for the shockwave immediately than to run the risk of annihilation by delaying the decision. (Interestingly, one reason for working on existential risk reduction is that we need more time to think whether our long-term survival is or isn't morally desirable.)
"‘Méchanique Sociale’ may one day take her place along with ‘Mécanique Celeste’, throned each upon the double-sided height of one maximum principle, the supreme pinnacle of moral as of physical science." -- Francis Ysidro Edgeworth
Pablo Stafforini
 
Posts: 177
Joined: Thu Dec 31, 2009 2:07 am
Location: Oxford

Re: Utilitronium shockwave

Postby Brian Tomasik on 2012-03-01T08:19:00

Pablo Stafforini wrote:Yet if we are not completely certain that hedonistic utilitarianism (in either its classical or negative varieties) is the correct moral theory, we might want to allow ourselves some time for thinking carefully about this question

Yeah, but being an emotivist, I don't think there is a "correct" moral theory that humanity will necessarily move toward as it becomes smarter. It's even plausible to me that humanity will eventually move away from the things we now hold dear. LadyMorgana and I discussed this topic on another thread.
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: Utilitronium shockwave

Postby DanielLC on 2012-03-01T17:31:00

Brian Tomasik wrote:It's even plausible to me that humanity will eventually move away from the things we now hold dear.


Isn't that any direction at all? Or do you mean that our future values will be more different from our current values than our current world is?
Consequentialism: The belief that doing the right thing makes the world a better place.

DanielLC
 
Posts: 703
Joined: Fri Oct 10, 2008 4:29 pm

Re: Utilitronium shockwave

Postby Pablo Stafforini on 2012-03-01T19:52:00

Alan Dawrst wrote:Yeah, but being an emotivist, I don't think there is a "correct" moral theory that humanity will necessarily move toward as it becomes smarter. It's even plausible to me that humanity will eventually move away from the things we now hold dear. LadyMorgana and I discussed this topic on another thread.

I thought you were going to say that. :-) As a matter of fact, my metaethical views have changed in the past, moving away from moral realism and closer to moral nihilism. This change had the effect of eroding my altruistic concern for a while, but the effect appears to have been short-lived, and currently I'm as disposed to reduce suffering and promote happiness as I have ever been (even if I no longer believe that I am under a moral requirement to act in these ways).

Please note however that, as an emotivist, you do think there is a "correct" moral theory, at least under some interpretations of that term. You believe, for instance, that it is wrong to torture animals for fun (in ways that do not produce greater benefits), even if others happen to believe otherwise. The correct moral theory, on your view, is that which expresses your emotional dispositions (suitably weighted, etc.). Moreover, I believe you have written in the past that you were uncertain about various moral questions, such as the weight that you should attach to the relief of suffering versus the promotion of happiness, and the degree to which you care about brain processes that bear but a very distant resemblance to the processes that occur in prototypical instances of suffering. To the degree that further reflection might allow you to clarify your views on these and other matters, you too might want to allow yourself some time before you decide to trigger a utilitronium shockwave.
"‘Méchanique Sociale’ may one day take her place along with ‘Mécanique Celeste’, throned each upon the double-sided height of one maximum principle, the supreme pinnacle of moral as of physical science." -- Francis Ysidro Edgeworth
Pablo Stafforini
 
Posts: 177
Joined: Thu Dec 31, 2009 2:07 am
Location: Oxford

Re: Utilitronium shockwave

Postby Brian Tomasik on 2012-03-03T08:58:00

DanielLC wrote:Or do you mean that our future values will be more different from our current values than our current world is?

I wasn't very clear. By "we hold dear," I meant "we utilitarians."

Pablo Stafforini wrote:but the effect appears to have been short-lived, and currently I'm as disposed to reduce suffering and promote happiness as I have ever been

Awesome!

Pablo Stafforini wrote:Please note however that, as an emotivist, you do think there is a "correct" moral theory, at least under some interpretations of that term. You believe, for instance, that it is wrong to torture animals for fun (in ways that do not produce greater benefits), even if others happen to believe otherwise.

Sure. As you say, it depends how you define "correct." I do still want things to be the way I want things to be!

Pablo Stafforini wrote:To the degree that further reflection might allow you to clarify your views on these and other matters, you too might want to allow yourself some time before you decide to trigger a utilitronium shockwave.

Perhaps, although I'm fairly certain about utilitronium, and these points of moral uncertainty don't have much effect on that question. (BTW, great job remembering my past statements about these things. :))
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: Utilitronium shockwave

Postby Jonatas on 2012-03-03T11:55:00

I voted yes, though with certain differences.

I'm not sure the best strategy would be a shockwave, and the name "utilitronium" suggests a substance, which is likely misleading, though the concept is sound. I see it as more likely that we will instead jump from planet to planet with spaceships and expand in local "islands" of limited size, taking energy and materials from the planets and similar bodies and possibly from nearby stars.

The "utilitronium" will likely consist of rather complex societies, possibly including redundant instances of unlimited intelligence, minds experiencing exquisite combinations of intense good feelings (value production), and insentient working, support and defense machinery.

The minds may be fairly complex rather than the smallest possible.

Jonatas
 
Posts: 4
Joined: Wed Jul 21, 2010 9:35 pm

Re: Utilitronium shockwave

Postby DanielLC on 2012-03-03T20:36:00

Jonatas wrote:taking energy and materials from the planets and similar bodies and possibly from nearby stars.


I'd expect you'd largely take materials from planets and energy from stars.

Jonatas wrote:The minds may be fairly complex rather than the smallest possible.


No matter how complex they are, they'll still just look like ordinary computronium to us.
Consequentialism: The belief that doing the right thing makes the world a better place.

DanielLC
 
Posts: 703
Joined: Fri Oct 10, 2008 4:29 pm

Re: Utilitronium shockwave

Postby Hedonic Treader on 2012-06-14T17:59:00

Jonatas wrote:I'm not sure the best strategy would be a shockwave, and the name "utilitronium" suggests a substance, which is likely misleading, though the concept is sound. I see it as more likely that we will instead jump from planet to planet with spaceships and expand in local "islands" of limited size, taking energy and materials from the planets and similar bodies and possibly from nearby stars.

Well, it would be an interstellar colonization wave that spreads as fast as it practically can in practically all directions in which there are reachable resources. "Shockwave" may be misleading, because it sounds destructive and uncomplex, but in a cosmological context, it would be a very fast wave-like transition of how star systems and other cosmic objects are internally organized.

The "utilitronium" will likely consist of rather complex societies, possibly including redundant instances of unlimited intelligence, minds experiencing exquisite combinations of intense good feelings (value production), and insentient working, support and defense machinery.

If you don't have outside competition and the replication algorithm is non-mutating, you don't need defense machinery. Neither would you have a strong need for complex societies, unless you need to convince the originators, who might insist on such complexity (see below).

If there is outside competition and/or the replication algorithm isn't perfectly non-mutating, you'd better prepare for a new era of competitive Darwinism and find ways to integrate hedonistic utilitarian values into it. In this thought experiment, it sounds like the genie has this figured out, so it could just efficiently create happy minds that have no other purpose than being happy.

Jonatas wrote:The minds may be fairly complex rather than the smallest possible.

Let's be more precise here. I agree that the descriptor "smallest" is misleading and does not logically flow from assuming hedonistic utilitarianism. There are at least three reasons for that:

1) The smallest mind possible might be so alien to us that what we interpret as its pleasure might not be similar enough to our pleasure to actually count in a satisfying way. Unless we have a thorough formalism as to what qualifies as pleasure, and thoroughly trust that formalism, we should assume some level of epistemic uncertainty regarding minds that are significantly unlike our own.

2) "small" of course can't literally mean physically small, but "most efficient in creating pleasure per resource input". It has some relation to physical smallness, but it's not the same descriptor.

3) When people hear "small minds", they feel the associated low-social-status emotions. This is the same reason why people say things like "I'd rather be a miserable Socrates than a happy pig", even though it's not clear they value happiness much less than intellectual insight when they make actual choices about how to spend their time. It's also possible that "small minds" sound vulnerable to the outside world, or impoverished in terms of experience, even though the thought experiment implies they would have the best experiences possible.

If you actually want to convince people to build a system, you'd probably go for human-like, complex, noble, free, individualistic but also social, self-determined, etc. The slightest slip in association and people will accuse you of intentionally building a dystopia. If utilitarians in the future ever actually find themselves in a situation where they can launch a non-mutating system that generates certain patterns on a large scale, chances are that they will have to convince a majority of non-utilitarians that these patterns are the best use of resources. Those other decision-makers will value different things and will often be selfish, so utilitarians would do well to formalize ways to integrate these values into practical hedonism. I'd expect this to be a lot harder than it sounds.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Utilitronium shockwave

Postby peterhurford on 2012-07-14T06:08:00

I don't place much value on extremely heightened pleasure (as in "wireheading") as the thing the utilitarian intends to maximize, so my kind of utilitronium would be simulations of utopias where people live fulfilling lives (without being wireheaded). (Though I also share the intuition that I wouldn't want to be placed in an experience machine, so I'm not sure how that compares.)

However, and even more relevantly, I personally care nothing for bringing entities into existence for the sake of those entities, so I wouldn't want to bring utilitronium into existence, regardless of how happy it is. The fact that it doesn't exist means that it isn't "harmed" by non-existence. (This doesn't mean I don't care about the future, though: since people are going to exist inevitably, I want them to come into existence with happy lives.)

Some of this intuition comes from seeing entities not as "utility receptacles" that are brought into existence as a means of maximizing total utility, but rather valuing each entity in itself, and then approaching calculations for how to help existing (and inevitably to-be-existing) entities impartially - thus popping out my brand of utilitarianism.

Thus, I join the "no"s.
Felicifia Head Admin | Ruling Felicifia with an iron fist since 2012.

Personal Site: www.peterhurford.com
Utilitarian Blog: Everyday Utilitarian

Direct Influencer Scoreboard: 2 Meatless Monday-ers, 1 Vegetarian, and 2 Giving What We Can 10% pledges.
peterhurford
 
Posts: 410
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University

Re: Utilitronium shockwave

Postby Brian Tomasik on 2012-07-14T10:48:00

Very interesting, Peter. We disagree on the experience machine, and we disagree on whether organisms are just receptacles for utility.

As for wireheading itself, the experience needn't be some crude form of pleasure. It could be rich, stimulating lives like the one you're living now, or better. But I take it this wouldn't much change your opinion.
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: Utilitronium shockwave

Postby Hedonic Treader on 2012-07-14T12:24:00

Also please consider the logical implications of your view. It sounds noble to say sentient beings or persons aren't just "receptacles" but that you want to help the individuals. After all, who likes objectification of people? It sounds like callous immorality.

But the prior-existence condition is both wasteful and beset by identity problems.

It is wasteful because it would waste desirable experiences. If someone prevented me from gaining a large amount of free pleasure, all else equal, I would consider this a serious offense. Similarly, if someone could reliably create a hedonium shockwave, all else equal, and arbitrarily decides not to, I would consider this the worst ethical mistake ever made.

Now you could argue that these two examples are not equivalent because I am pre-existing while the minds in the hedonium shockwave are not. But I consider my future selves to not be identical to me either. They are (at least slightly) different, time-local entities. If you were to kill me, then the prior-existence condition for my future selves would not be fulfilled - my future selves don't exist inevitably in a world in which you can kill me. That doesn't mean you can't cause harm by killing me, even if it is painless - it would deprive my future selves, who are not identical to me, of pleasure.

David Benatar also argues that one cannot harm a person by not creating them, but one can harm a person greatly by creating them in a state of suffering. Since all human lives contain at least some suffering, he concludes, any new person is harmed by being brought into existence, even if that person considers his or her own life to be very much worth living. The resulting strong version of antinatalism, while logically consistent and taken seriously by some, is usually rejected as an absurdity by most commentators in the general public, many of whom do not feel they were harmed by having been brought into existence and are quite glad to be alive.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Utilitronium shockwave

Postby peterhurford on 2012-07-20T09:13:00

I changed my vote on this question, reversing in part my opinion from six days ago. Two weeks ago, I was a preference person-affecting utilitarian. Then, about a week ago, I switched to being a hedonistic-ish person-affecting utilitarian. Now I have switched away from the person-affecting view to total utilitarianism.

Given that total utilitarians would endorse utilitronium shockwaves, I have shifted to strongly supporting them (as actually the utilitarian-best thing we could possibly do). I want to clarify here what I changed and why.

~

Preference Satisfaction Just Confuses Me

While not relevant to my change in vote, I did want to highlight my turn away from preference-satisfaction utilitarianism. It can be explained in four confusions that all make much more sense under a hedonistic(ish), happiness(ish)-maximizing analysis.

1.) How do I make sense of preferences that I have but don't want (don't meta-prefer) other people to help me satisfy? I want to quench my thirst all by myself, thank you. For some preferences, it would make me happier if (and I would meta-prefer that) you helped me, but for others it would make me sadder (and I would meta-prefer that you refrain).

2.) How do I make sense of the idea of creating a preference for the sake of fulfilling it? Would it be a moral benefit to make you thirsty just to offer you a drink? Doing so would satisfy your preference, but wouldn't make you any happier.

3.) How do I make sense of preference satisfying population ethics? What good would it be, if at all, for someone to be born, who wouldn't be born otherwise? What good would it be, if at all, to launch a utilitronium shockwave? Is this just creating beings to fulfill their preferences?

4.) Is the best life one in which all our preferences are satisfied? Would an ideal lifeform be one with just a single preference that is satisfied? What about a life form with 1000 preferences, all of which are satisfied?

~

Hedonistic-ish :: Haven't Changed on Wireheading

So sign me up for maximizing "happiness", but don't take this happiness to be the kind you can get from wireheading. This isn't a failure to imagine that wireheading would be intensely pleasurable, but a recognition that a lot of the things I derive happiness from are not just my personal mental states. Yes, happiness takes place entirely within my mental states, but I think it's the wrong level of abstraction to suggest that I only care about my mental states.

However, I take utilitronium to be basically a pocket simulated utopia that does contain all the things that make for a good life according to Fun Theory -- authentic high challenge, an opportunity for a meaningful impact on genuinely important problems, novelty, autonomy, etc. Some intensely pleasurable and non-addictive designer drugs could improve the mood, but they shouldn't be the focus of my utopia.

Sure, you could knock out my boredom/novelty-seeking and just repeat my most favorite experience over and over ad nauseam. I'd definitely enjoy it. It would probably bring me a lot of happiness. But I desperately wouldn't want you to do that to me; I prefer the life I have, with novelty and boredom. Wireheading may be fun, but it's not Fun. ...for me, anyway.

This is why I currently put the "ish" on hedonistic-ish -- I'm not sure to what degree current hedonistic utilitarians would want to avoid wireheading or experience machines, seeing this as not the kind of "happiness" that matters.

~

Simulations Good, Experience Machines Bad (For Me)

Along the same lines, please don't sign me up for an experience machine, even if I'd never find out about it. I have a very strong my-native-world bias for where I live and act. I wouldn't care if my current world were revealed to have been, all this time, a simulation, as long as it's the native world I grew up in. This is the world that has the family and people I care about, and the suffering strangers whom I want to lift up, or see lifted up, via utilitarianism and other methods.

You could put me in an experience machine, sure. I think such simulations would be "equally real", the people would still matter, these people could genuinely suffer (to the point where creating an experience machine with net suffering would be a utilitarian evil), and there is genuine meaning and opportunities for utopia here. But I want to be with the people of my native universe (even if I wouldn't notice a difference).

When it comes to utilitarianism, I want to be trans-world (I'd eliminate as much suffering as I can to the best of my ability, no matter which experience machine the suffering is located in), and when it comes to my self-interested non-utilitarian projects, I want them to unfold in my native world. (Though if my native world wanted to voluntarily enter a utopian simulation along with me, this would be ideal... Thus, the Matrix wouldn't be as bad, though I wouldn't want it run by evil robots or done involuntarily.)

Other people may not share this meta-preference bias of mine, but I don't think such a bias is irrational or odd. For these other people, sure, experience machine them (or wirehead them) if that's what they'd wish and what would make them happy. But don't do it to me, because it wouldn't make me happy. Remember, this is why I'm hedonistic-ish.

~

Anti-Natalism? :: Of What Good Is A Possible Life?

So this clarifies my position on hedonism, I hope. But what about the big move from person-affecting, utilitronium-hating utilitarianism to utilitronium-loving total utilitarianism? The problem I ran into here was indeed anti-natalism - the idea that we should voluntarily end the population now, because in doing so we could make everyone happier (at the expense of all future generations, who don't matter in a person-affecting way).

Let me explain. Person-affecting utilitarianism is the view that the only utilities that matter are those that affect people, as in those who currently exist or will exist. If it turns out the person never will exist, they aren't harmed in any way by being denied their existence (according to person-affecting views), and thus there's no reason to create them. This means that if we end humanity, the future generations would never exist, and thus they wouldn't matter, and there would be no reason to continue the human race (or any race at all), as long as the current generation is happiest.

Total utilitarianism takes a much different view to these potential people -- even if they never would have existed, it's best to create them still as long as they would live a happy life, because this would add happiness to the total, and thus move toward maximizing happiness.

Likewise, I think total utilitarianism breaks the asymmetry by suggesting that we should create these potential people for their (now existing) sakes - while they aren't harmed by not existing, they are certainly helped by existing, and I wouldn't want to deny these possible people the benefit of existence (though only for their sake *after* they exist).

Thus I now give a big yes to utilitronium shockwaves - bringing this into existence would bring more happiness than any other potential alternative (that I know of). Additionally, it would give untold gillions of entities the opportunity to live in an ideal utopia, a huge boon for all of their sakes (after they exist, of course). So I'd want to do the shockwave, for them.

~

Still Say No to Receptacles :: Happiness For the Sake of People, Not People for the Sake of Happiness

But even though I am now a total utilitarian, I still think that some of my fellow utilitarians have the happiness backwards. I suggest that people don't exist for the sake of happiness, such that we can rejoice at adding those happy numbers to our calculations and seeing a bigger number. Instead, we should remember that happiness is good for these people (indeed, by definition), and thus we should rejoice at the people living better (happier) lives.

This harks back to why I became a utilitarian before I even knew the philosophy existed - a commonsense desire to help people for their own sakes, followed by the knowledge that I was operating under triage and had to maximize my efforts, meaning some people would regretfully have to be neglected for greater benefits to others. Indeed, some people might even regretfully have to be harmed for outweighing (but not cancelling) benefits to others. We have commensurability without fungibility.

Thus people aren't good only for their happiness; happiness isn't this abstract good thing separate from people (or nonhuman animals or what not) living happy lives. And I think this makes lots of sense still on a total utilitarian view, where we remember that the utilitronium shockwave is good not for our total utility calculations but firmly good for the utilitronium itself.
Felicifia Head Admin | Ruling Felicifia with an iron fist since 2012.

Personal Site: www.peterhurford.com
Utilitarian Blog: Everyday Utilitarian

Direct Influencer Scoreboard: 2 Meatless Monday-ers, 1 Vegetarian, and 2 Giving What We Can 10% pledges.
peterhurford
 
Posts: 410
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University

Re: Utilitronium shockwave

Postby peterhurford on 2012-07-20T09:18:00

Hedonic Treader wrote:But I consider my future selves to not be identical to me either. They are (at least slightly) different, time-local entities. If you were to kill me, then the prior-existence condition for my future selves would not be fulfilled - my future selves don't exist inevitably in a world in which you can kill me. That doesn't mean you can't cause harm by killing me, even if it is painless - it would deprive my future selves, who are not identical to me, of pleasure.


I think that's very interesting. I do consider my future selves identical to me, though I do agree they are slightly different, time-local entities. I just think those time-local entities are also me, due to the continuity involved.

For me, the harm in death (even if painless) is not "it would deprive my future selves, who are not identical to me, of pleasure" but rather that "it would deprive my future selves, who are indeed identical to me, of pleasure". It's not really a big difference.
Felicifia Head Admin | Ruling Felicifia with an iron fist since 2012.

Personal Site: www.peterhurford.com
Utilitarian Blog: Everyday Utilitarian

Direct Influencer Scoreboard: 2 Meatless Monday-ers, 1 Vegetarian, and 2 Giving What We Can 10% pledges.
peterhurford
 
Posts: 410
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University

Re: Utilitronium shockwave

Postby Brian Tomasik on 2012-07-22T03:55:00

peterhurford wrote:Given that total utilitarians would endorse utilitronium shockwaves, I have shifted to strongly supporting them (as actually the utilitarian-best thing we could possibly do).

Hooray. :)

peterhurford wrote:Preference Satisfaction Just Confuses Me

Me too. In addition to what you listed, there's the issue of what counts as a preference. Am I satisfying an apple's preference to fall if I drop it on Isaac's head? If not, we may end up defining preferences in terms of things that satisfy hedonic desires, but in that case, we've just gone back to hedonism anyway.

peterhurford wrote:Sure, you could knock out my boredom/novelty-seeking and just repeat my most favorite experience over and over ad nauseam. I'd definitely enjoy it. It would probably bring me a lot of happiness. But I desperately wouldn't want you to do that to me; I prefer the life I have, with novelty and boredom.

Interesting. I disagree, but hopefully it doesn't matter that much if Fun isn't too much more expensive to simulate than fun. (Hard to say without getting into more detail.)

peterhurford wrote:But I want to be with the people of my native universe (even if I wouldn't notice a difference).

Very interesting. Again, I don't share the sentiment myself.

I do care about actually reducing suffering, rather than being deluded into thinking that I have reduced suffering. But from a selfish perspective, it doesn't matter to me at all in what world I find myself having positive emotions.

peterhurford wrote:But even though I am now a total utilitarian, I still think that some of my fellow utilitarians have the happiness backwards.

I'm probably one of them. I do think organisms are made so that they can hold happiness.

But I also wonder how much of our disagreement is just a matter of sentiments attached to different linguistic ways of framing the issue. Certainly I want to make organisms happier for their sakes.

peterhurford wrote:I do consider my future selves identical to me, though I do agree they are slightly different, time-local entities. I just think those time-local entities are also me, due to the continuity involved.

But surely the scale is continuous rather than binary. There's no hard distinction between things that are "you" and "not you." We may fix an arbitrary cutoff point for ease of exposition, but it's not a fundamental fact of the world.
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: Utilitronium shockwave

Postby Hedonic Treader on 2012-07-23T09:43:00

Thanks for your reflections, Peter. It's interesting (and unfortunately rare) to see people change their minds and be explicit about the reasons.

peterhurford wrote:But I want to be with the people of my native universe (even if I wouldn't notice a difference).

Isn't this logically inconsistent with the rejection of preference utilitarianism? After all, if you wouldn't notice a difference (and no others were worse off), then by definition it can't be worse unless you count meeting such preferences as intrinsically valuable.

I actually don't mind satisfying such preferences for strategic reasons, or if they're free of costs. But a real-world bias can be very costly. Consider the difference in resource costs between flying real planes for sport and playing computer games in which players fly planes for sport. Maybe the former should be possible in a world of real humans who won't voluntarily make concessions, and who only agree to be innovative in a market-based society that allows them such waste. But it's easy to see how wasteful this distinction can be, and even now, it's clearly not seen as a human right to fly real planes if you can't afford it.

From a utilitarian perspective, then, I would say choosing the real plane over the simulated one is a mistake. Of course, if you are deceived about the state of your perception and the real world, this may add new threats, such as counterfeit utility (you think the world is okay, but you're in an experience machine and in reality others are suffering) or questionable sustainability or power dynamics. But if we could all migrate into an upload world that simulates - and gradually enhances - our current environment without deceiving us about the real world, while using the physical resources to allow more minds to experience the same advantage, I would see this as clearly and significantly preferable from a utilitarian view.

I think we can converge on a consensus based on two points: 1) You need to convince real people of any plan that is supposed to be sustainable, and if real people insist on maintaining this bias, we need to make the concession strategically. 2) It's better to have some wasteful utopia than a big future filled with suffering, or a "utopian" system that crashes shortly after it starts.

peterhurford wrote:For me, the harm in death (even if painless) is not "it would deprive my future selves, who are not identical to me, of pleasure" but rather that "it would deprive my future selves, who are indeed identical to me, of pleasure".

Fair enough. It depends entirely on how we define "identity", of course. As long as it doesn't make much of a difference in practical decisions, we don't need to converge on this in order to get things done.

peterhurford wrote:But I desperately wouldn't want you to do that to me; I prefer the life I have, with novelty and boredom. Wireheading may be fun, but it's not Fun. ...for me, anyway.

My prediction is that people who are free to design their own experiences will gravitate toward wireheading instead of Fun, even those who now say otherwise. Think how much money and time people spend on having - relatively repetitive - sexual experiences. This is fun, but not Fun. It's just mechanical, animalistic, idiosyncratic behavior. Yes, there are variations, but let's be honest, the core of the thing is always essentially the same. I think both Fun and mere pleasure can be - and are already being - superstimulated through technology, but I tentatively predict that with increasing capability to sustainably superstimulate both, pleasure will win out. I expect most free people in a utopia would spend their time on relatively repetitive, and highly pleasurable, activities. Either way, I agree with Brian: if Fun is somewhat pleasurable, free from suffering, and not too resource-costly, it's not that big a deal either way. This is another point where we should converge strategically anyway if we're going to convince non-utilitarians: no "dragging people to the pleasure chambers", as David Pearce once put it.

peterhurford wrote:Thus people aren't good only for their happiness; happiness isn't this abstract good thing separate from people (or nonhuman animals or what not) living happy lives.

I think once you drop the prior-existence condition, this semantic distinction seems to no longer have any practical impact, so I don't mind the difference. Calling people "receptacles" does not win them over anyway. :)
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Utilitronium shockwave

Postby peterhurford on 2012-07-29T22:04:00

Brian Tomasik wrote:Interesting. I disagree, but hopefully it doesn't matter that much if Fun isn't too much more expensive to simulate than fun. (Hard to say without getting into more detail.)


My mind is probably too expensive to simulate. I think these kinds of issues can be understood and resolved by breaking down the concept of "happiness" and exploring wireheading / experience-machine thought experiments.

~

Brian Tomasik wrote:I do think organisms are made so that they can hold happiness.

But I also wonder how much of our disagreement is just a matter of sentiments attached to different linguistic ways of framing the issue. Certainly I want to make organisms happier for their sakes.


I definitely think it's a linguistic framing thing that's probably trivial, though I am with Hedonic Treader that people don't like being thought of as receptacles. I do suspect that people-as-happiness-receptacles and happiness-for-the-sake-of-people are analytically identical.

~

Hedonic Treader wrote:Isn't this [preference to be with people of my native universe, even if I wouldn't notice the difference] logically inconsistent with the rejection of preference utilitarianism? After all, if you wouldn't notice a difference (and no others were worse off), then by definition it can't be worse unless you count meeting such preferences as intrinsically valuable.


Yes, it likely is logically inconsistent, but it's not a personal intuition I yet want to jettison, even if it may ultimately be metaphysically confused or confused for some other reason. I suspect that further understanding the nature of "happiness" will either (a1) make sense of this preference, (a2) provide a direct rationale for satisfying it, and (a3) unify happiness and preference approaches; or (b1) conclusively demonstrate that preference and happiness approaches are distinct, (b2) conclusively demonstrate the superiority of happiness approaches, and (b3) provide a reason to completely ignore this native-world preference.

~

Hedonic Treader wrote:it's easy to see how wasteful this distinction can be, and even now, it's clearly not seen as a human right to fly real planes if you can't afford it.


To be fair, I did say I wouldn't mind being in a simulated world provided my native-world associates were put into the simulation along with me.

~

Hedonic Treader wrote:It depends entirely on how we define "identity", of course. As long as it doesn't make much of a difference in practical decisions, we don't need to converge on this in order to get things done.


Indeed, I agree here. Though I do like working out philosophy of identity just for the fun of it.

~

Hedonic Treader wrote:My prediction is that people who are free to design their own experiences will gravitate toward wireheading instead of Fun, even those who now say otherwise. Think how much money and time people spend on having - relatively repetitive - sexual experiences.


Perhaps. My prediction is that people want both hedonia (pure pleasure; "liking") and eudaimonia (connection to genuineness, meaning, purpose; "approval"), but different people want one or the other to different degrees. Those who are deep into hedonia would want wireheading, whereas those who are deep into eudaimonia would not want it, or would think it abhorrent.

And as far as I can tell, directly stimulating eudaimonia is possible but self-defeating on some level, equivalent to self-deception. It would be like taking a utilitarian and deceiving them into thinking they are reducing tons of suffering -- it's just not the point.

Thus fun would be hedonia, and Fun would be hedonia + eudaimonia. I bet you can simulate minds that don't require or care for eudaimonia, but I suspect my kind of utilitarianism would rebel against that.

Further reflection on my intuitions, further philosophical developments in utilitarianism, and further scientific developments in understanding happiness/well-being/etc. are necessary, I think, before this problem can be resolved. I'd be cautious about wireheading before we know better, though we definitely can't rule it out completely.
Felicifia Head Admin | Ruling Felicifia with an iron fist since 2012.

Personal Site: www.peterhurford.com
Utilitarian Blog: Everyday Utilitarian

Direct Influencer Scoreboard: 2 Meatless Monday-ers, 1 Vegetarian, and 2 Giving What We Can 10% pledges.
peterhurford
 
Posts: 410
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University

Re: Utilitronium shockwave

Postby Hedonic Treader on 2012-07-30T22:16:00

peterhurford wrote:To be fair, I did say I wouldn't mind being in a simulated world provided my native-world associates were put into the simulation along with me.

I see. So the real-world bias is more about consistency of social connections than about a bias for physically embodied interaction. I sometimes feel the latter. It can also come down to epistemic purity, i.e. not being deceived about the nature of the physical context one is in. I can certainly empathize with that, if only for strategic reasons (a simulated mind is a sitting duck for any enemy with superior power and knowledge over the physical context of the implementation).
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Utilitronium shockwave

Postby peterhurford on 2012-07-30T22:46:00

Hedonic Treader wrote:It also sometimes comes down to epistemic purity, i.e. not being deceived about the nature of the physical context one is in.


I also wouldn't want to be deceived about my physical context. If I'm to be simulated, I (1) want to be simulated alongside a share of my current social contacts, and (2) want to be informed of and consent to that happening.
Felicifia Head Admin | Ruling Felicifia with an iron fist since 2012.

Personal Site: www.peterhurford.com
Utilitarian Blog: Everyday Utilitarian

Direct Influencer Scoreboard: 2 Meatless Monday-ers, 1 Vegetarian, and 2 Giving What We Can 10% pledges.
User avatar
peterhurford
 
Posts: 410
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University

Re: Utilitronium shockwave

Postby Brian Tomasik on 2012-08-05T05:41:00

Hedonic Treader wrote:I think we can converge on a consensus based on two points: 1) You need to convince real people of any plan that is supposed to be sustainable, and if real people insist on maintaining this bias, we need to make the concession strategically.

Unless someone creates an AI in the basement that takes over the galaxy without asking permission.

Hedonic Treader wrote:2) It's better to have some wasteful utopia than a big future filled with suffering

Yes.

Hedonic Treader wrote:Calling people "receptacles" does not win them over anyway. :)

True. But I, for one, am proud to be a receptacle. ;)

peterhurford wrote:My prediction is that people want both hedonia (pure pleasure; "liking") and eudaimonia (connection to genuineness, meaning, purpose; "approval")

Yes, but fundamentally, genuineness and meaning are not different from raw pleasure. They're still all feelings that hedonistic utilitarians care about, and they can still all be wireheaded.

In other words: The feeling of genuineness can be faked. :)

I see you already noted this...
peterhurford wrote:And as far as I can tell, directly stimulating eudaimonia is possible but self-defeating on some level, equivalent to self-deception. It would be like taking a utilitarian and deceiving them into thinking they are reducing tons of suffering -- it's just not the point.
User avatar
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: Utilitronium shockwave

Postby peterhurford on 2012-08-05T06:39:00

Brian Tomasik wrote:In other words: The feeling of genuineness can be faked. :)


That genuinely scares me. Luckily I'll be long dead before we have a utopia of minds that derive infinite utility merely from having elbows.
Felicifia Head Admin | Ruling Felicifia with an iron fist since 2012.

Personal Site: www.peterhurford.com
Utilitarian Blog: Everyday Utilitarian

Direct Influencer Scoreboard: 2 Meatless Monday-ers, 1 Vegetarian, and 2 Giving What We Can 10% pledges.
User avatar
peterhurford
 
Posts: 410
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University

Re: Utilitronium shockwave

Postby Brian Tomasik on 2012-08-05T07:11:00

peterhurford wrote:Luckily I'll be long dead before we have a utopia of minds that derive infinite utility merely from having elbows.

Hey, are you being down on Kermit?
User avatar
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: Utilitronium shockwave

Postby Hedonic Treader on 2012-08-05T11:26:00

Brian Tomasik wrote:Unless someone creates an AI in the basement that takes over the galaxy without asking permission.

Yes, but the probability of that happening seems rather low. I'd be more concerned about this if we already had almost-human-level AI and a lot of different factions working on an arms race of drastic secret improvements. But even then it's not clear that a hard takeoff (followed by world domination) is that likely. You'd expect the rest of the world to band together against any first mover.

Brian Tomasik wrote:In other words: The feeling of genuineness can be faked.

This scares me too, but presumably for a different reason than it scares Peter. I wouldn't intrinsically care about whether my own experience of genuineness is fake. But the practical consequences for epistemic sanity, and consequently for the ability to affect my life and the world in desirable ways, could be drastic. I watched a Thomas Metzinger lecture yesterday in which he mentioned two psychiatric patients staring out the window, one of them (genuinely) believing he was causing the sun's movement with his will, the other believing he was controlling cars and pedestrians. Regardless of their state of subjective well-being, there is a reason such people can't live on their own. They would also have a hard time organizing any kind of defense against a hostile force.

In a way, it's quite clear that a lot of our naive realism is already fake genuineness. Your body image is just your brain's working model of your body, even though it feels real to you. This is already true and inevitable. At least we can say evolution has created a capacity to model oneself and the world that is good enough for fitness purposes. But since there's a tradeoff between detailed genuineness (precise models) and cognitive overhead, we would expect that capacity to be relatively streamlined for those functions (though not absolutely).

This is one of the reasons we might not want to care about genuineness too much except for instrumental reasons: In a way, most of the things we care about have always had strong elements of fake genuineness (naive realism, changing constructions of identity, retrospective distortions etc.)
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Utilitronium shockwave

Postby peterhurford on 2012-08-05T17:21:00

Hedonic Treader wrote:This scares me too, but presumably for a different reason than it scares Peter. [...] This is one of the reasons we might not want to care about genuineness too much except for instrumental reasons: In a way, most of the things we care about have always had strong elements of fake genuineness (naive realism, changing constructions of identity, retrospective distortions etc.)


I disagree with you both because I still have a strong intuition that getting things done matters: I want to do actual things and enjoy them, not just bliss out. That's also why I disagree with you on Molly the Mathematician. I like blissing out on occasion, but I wouldn't want it to become my entire life, because I want accomplishments that aren't just me being deceived into the feelings of accomplishment. I don't think that's naive.

It's really weird thinking about it. I'm interested in figuring out the basis for this intuition.
Felicifia Head Admin | Ruling Felicifia with an iron fist since 2012.

Personal Site: www.peterhurford.com
Utilitarian Blog: Everyday Utilitarian

Direct Influencer Scoreboard: 2 Meatless Monday-ers, 1 Vegetarian, and 2 Giving What We Can 10% pledges.
User avatar
peterhurford
 
Posts: 410
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University

Re: Utilitronium shockwave

Postby Hedonic Treader on 2012-08-05T19:13:00

Peter, I think the basis for your intuition is an evolved instinct about the instrumental downsides I mentioned above; a wariness of being deceived, especially in a world of hostile others, or maybe a wariness of being wrong about important life-sustaining aspects.

There can be a conflict between epistemic states and hedonistic states; e.g., it is possible that one feels better not knowing something, while at the same time feeling bad about the state of not knowing. I sometimes play games with a commitment not to reload old save games if my character dies. This adds suspense, because I can lose many hours of in-game progress if I step into a trap or lose one battle. I would not want my computer to cheat in my favor, and I would want to know if it did. I also follow the commitment even when I lose, though it feels much better to win. The point is, given my current psychology, knowing that the program cheats to let me win would take away the suspense and sense of in-game progress, which are the core hedonistic elements of the game. Since the game doesn't have any instrumental value for the real world - it's just entertainment - I would not mind being attached to a "suspense and sense of progress" machine instead. But since I don't have such a machine, I care about following the rules, and about my epistemic state regarding following the rules, because my enjoyment depends on it.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Utilitronium shockwave

Postby peterhurford on 2012-08-05T19:38:00

Hedonic Treader wrote:Since the game doesn't have any instrumental value for the real world - it's just entertainment - I would not mind being attached to a "suspense and sense of progress" machine instead.


I think that's where we differ -- I want to place value not just in the suspense and sense of progress, but in actually playing and winning the damn game. I don't want to just feel like I win; I want to actually win. Going back to Molly the Mathematician, I want to have actually discovered the proof, even if I die not knowing I had done so. I wouldn't want to merely think that I did.

And I don't think it's purely an epistemic thing (as if knowing that I was deceived were the problem), for I wouldn't want to be deceived even if I would never find out about it.

It could be that we differ in our meta-preferences -- just because I don't want to wirehead doesn't mean that it wouldn't be good for you to do so. One possibly relevant fact: I actually have very high life satisfaction, probably due to (what I guess is) an abnormally high happiness set point. I wouldn't mind having my happiness set point artificially raised, as long as I still got to live my life as I do (or a utopian equivalent).

Overall, I just think that happiness / well-being / flourishing / fulfillment are all poorly understood. They're understood well enough to ground an intuitive-level utilitarianism for everyday life, but not well enough to adequately resolve thought experiments about utopias.
Felicifia Head Admin | Ruling Felicifia with an iron fist since 2012.

Personal Site: www.peterhurford.com
Utilitarian Blog: Everyday Utilitarian

Direct Influencer Scoreboard: 2 Meatless Monday-ers, 1 Vegetarian, and 2 Giving What We Can 10% pledges.
User avatar
peterhurford
 
Posts: 410
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University

Re: Utilitronium shockwave

Postby peterhurford on 2012-08-05T19:59:00

Something like "Not for the Sake of Happiness (Alone)" does a good job of catching my intuitions here.
Felicifia Head Admin | Ruling Felicifia with an iron fist since 2012.

Personal Site: www.peterhurford.com
Utilitarian Blog: Everyday Utilitarian

Direct Influencer Scoreboard: 2 Meatless Monday-ers, 1 Vegetarian, and 2 Giving What We Can 10% pledges.
User avatar
peterhurford
 
Posts: 410
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University

Re: Utilitronium shockwave

Postby Hedonic Treader on 2012-08-05T21:06:00

peterhurford wrote:Something like "Not for the Sake of Happiness (Alone)" does a good job of catching my intuitions here.

Yes, I know the post, and I disagree with Eliezer's value judgments (and consequently yours).

I suspect there's no way to settle this other than to acknowledge that we disagree on terminal values. I'll also point out that I suspect the pure hedonistic utility concept is probably simpler to formalize, and therefore to implement in any formal goal-driven algorithm. And I think that if I had your preferences, I'd probably be more neurotic about the limitations of naive realism, illusions of agency, etc. in our current evolved minds, beyond their practical implications.
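
To make the formalization point concrete, here is a minimal Python sketch (all names and structures hypothetical, purely illustrative; not anyone's actual proposal). A hedonic objective needs only one number per mind, while a preference objective needs each mind's valuation of entire world-states, plus some policy for missing or contradictory entries, which is where most of the complexity hides:

```python
# Minimal sketch (hypothetical names, purely illustrative) of why a pure
# hedonic objective is simpler to formalize than a preference-based one.
from dataclasses import dataclass, field


@dataclass
class Mind:
    valence: float  # current hedonic state; negative = suffering
    # The preference view needs far more structure: a valuation over
    # entire world-states, which may be incomplete or inconsistent.
    preferences: dict[str, float] = field(default_factory=dict)


def hedonic_utility(minds: list[Mind]) -> float:
    """Pure hedonistic utilitarianism: one number per mind, then sum."""
    return sum(m.valence for m in minds)


def preference_utility(minds: list[Mind], world_state: str) -> float:
    """Preference utilitarianism: every mind must score the whole
    world-state; missing or contradictory rankings must be resolved
    somehow (here we simply default to 0)."""
    return sum(m.preferences.get(world_state, 0.0) for m in minds)


minds = [Mind(valence=0.9), Mind(valence=-0.2, preferences={"simulated": -1.0})]
print(hedonic_utility(minds))                  # ~0.7
print(preference_utility(minds, "simulated"))  # -1.0
```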

But hey, we can still far more easily agree on a common utopia compromise than we could, say, with fundamentalists who want to see hellfire. :)
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Utilitronium shockwave

Postby peterhurford on 2012-08-05T21:45:00

Hedonic Treader wrote:I suspect there's no way to settle it other than to disagree on the terminal values. [...] But hey, we can still far more easily agree on a common utopia compromise than we could, say, with fundamentalists who want to see hellfire.


Fair enough. For the time being, though, it seems that the goals of our two slightly different utilitarianisms do converge.

~

Hedonic Treader wrote:I'll also point out that I suspect the pure hedonistic utility concept is probably simpler to formalize, and therefore to implement in any formal goal-driven algorithm.


Right. I've already conceded that my mind is very expensive to simulate, relative to those possible minds who care only for hedonia. While I would beg the utopia creators to the best of my ability not to usher in a utopia of only hedonia (or Brian's "fake genuineness"), I'm not even sure I would be overly troubled should I fail. I imagine I would just choose to die or be regretfully wireheaded (I suppose infinite pointless pleasure is still preferable to death), having had a good run. There are many worse scenarios that could have resulted.

I do hope some semblance of this intuition, or at least a satisfying explanation for why our intuitions differ, will be vindicated by future research in neuroscience. And I suggest that you advocate for future "what is happiness, really?" research too, lest we accidentally get the utopia entirely wrong.

...Though I suppose you could just give me the illusion of having received a satisfying explanation, some pseudoscientific gobbledygook that my brain is altered to interpret as satisfying, and there wouldn't be any difference worth caring about, right? ;)

~

Hedonic Treader wrote:And I think that if I had your preferences, I'd probably be more neurotic about the limitations of naive realism, illusions of agency etc., in our current evolved minds, beyond their practical implications.


I don't follow here. Could you elaborate?
Felicifia Head Admin | Ruling Felicifia with an iron fist since 2012.

Personal Site: www.peterhurford.com
Utilitarian Blog: Everyday Utilitarian

Direct Influencer Scoreboard: 2 Meatless Monday-ers, 1 Vegetarian, and 2 Giving What We Can 10% pledges.
User avatar
peterhurford
 
Posts: 410
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University

Re: Utilitronium shockwave

Postby Hedonic Treader on 2012-08-06T20:00:00

peterhurford wrote:I don't follow here. Could you elaborate?

No. Instead, I withdraw the statement. :)
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Utilitronium shockwave

Postby Brian Tomasik on 2012-12-24T07:58:00

It may have already been mentioned, but I wanted to emphasize that utilitronium would be potentially risky until we understand at a deep level how the components of conscious suffering and conscious pleasure actually work.

In particular, imagine that there's one component of suffering that corresponds to an internal "register dislike" feeling by the brain that goes toward long-term memories, long-term motivations, etc., and there's another more visible component that produces short-term reactions of writhing, withdrawal, screaming, etc. Similarly for pleasure there might be long-term internal-rearrangement components and short-term visible components.

We might build a naive utilitronium agent that doesn't have any internal components and only displays the external features. In fact, we could basically do this today with sophisticated dolls or whatnot. These agents wouldn't actually have moral value.

What would be worse, though, would be if we tried to hook up visible responses of pleasure to an internal evaluation-of-experience module but got the wrong one: a module that corresponded to suffering instead of pleasure. It's plausible that the algorithms of suffering and pleasure aren't that different, so I think this is more than a whimsical suggestion. Let's make sure we really know what we're doing before we unleash utilitronium on a mass scale.
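
To illustrate the worry, here is a toy Python sketch (hypothetical names; a deliberately crude model, not a claim about how real affective architecture works). The outward display is driven from the raw stimulus rather than from the internal evaluator, so a sign error in the internal module yields an agent that looks blissful while registering the opposite valence:

```python
# Toy model (hypothetical names; not a claim about real affective
# architecture) of wiring visible pleasure behavior to the wrong
# internal evaluation module.

class InternalEvaluator:
    """Stands in for the internal 'register like/dislike' component
    that feeds long-term memory and motivation. Sign convention:
    positive = pleasure, negative = suffering."""
    def __init__(self, sign: int):
        self.sign = sign  # +1 = correct wiring, -1 = the feared sign error

    def evaluate(self, stimulus: float) -> float:
        return self.sign * stimulus


def visible_response(value: float) -> str:
    """Short-term outward display (smiling vs. writhing)."""
    return "smiling" if value >= 0 else "writhing"


stimulus = 1.0  # intended to be pleasurable

correct = InternalEvaluator(sign=+1)
miswired = InternalEvaluator(sign=-1)  # suffering module installed by mistake

# A naive builder drives the display from the raw stimulus rather than
# from the internal module, so both agents look blissful -- but the
# miswired one internally registers the opposite valence.
for agent in (correct, miswired):
    print(visible_response(stimulus), "| internal:", agent.evaluate(stimulus))
# smiling | internal: 1.0
# smiling | internal: -1.0
```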
User avatar
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA


Re: Utilitronium shockwave

Postby Hedonic Treader on 2013-04-18T08:54:00

Elijah wrote:Nope. Much safer, cleaner and more risk-free to ask for total annihilation of everything that was, is and ever shall be.

Well, our existence is evidence that no one will ever do that. You are here now, which means no one will ever prevent you from being here now.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am


