The simulation argument and human extinction


Postby Pablo Stafforini on 2013-05-13T01:55:00

A thought I just had:

According to the simulation argument, either (1) humanity will soon become extinct; (2) posthumanity will never run ancestor simulations; or (3) we are almost certainly living in a simulation. Suppose (1) is true. Then the classical utilitarian case for focusing on existential risk reduction loses much of its force, since we are by assumption doomed to perish quickly anyway. Now suppose (3) is true. Here it seems plausible that the simulators will restart the simulation very quickly after the sims manage to kill themselves. So the case for focusing on existential risk is also weakened considerably. It is only on the second of the three scenarios that extinction is (roughly) as bad as classical utilitarians take it to be. So we can conclude: if you think there is a chance that posthumanity will run ancestor simulations (~2), the prospect of human extinction is much less serious than you thought it was.

Thoughts?
"‘Méchanique Sociale’ may one day take her place along with ‘Mécanique Celeste’, throned each upon the double-sided height of one maximum principle, the supreme pinnacle of moral as of physical science." -- Francis Ysidro Edgeworth

Re: The simulation argument and human extinction

Postby Hedonic Treader on 2013-05-13T04:49:00

Interesting. I wonder if this hasn't been discussed by Bostrom and others before.

Generally speaking, it seems that if you have evidence that your reality may be more short-lived than you thought, this is a good reason to favor the near future over the far future. Assuming many ancestor simulations will be more like "The Sims" than "Sim Earth", there's not much value in suffering near-term for long-term gain (as your reality would be shut down after some limited time no matter what you do).

However, we could conceive of ways to make (2) more likely, e.g. by discouraging ancestor simulations, encouraging alternative resource use, etc.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon

Re: The simulation argument and human extinction

Postby Humphrey Schneider on 2013-05-16T20:53:00

Pablo, your idea would also be interesting in the context of my thread: "Why not even Negative Utilitarians should support efilism".
"The idea of a necessary evil is necessarily the root of all evil"


Re: The simulation argument and human extinction

Postby aronvallinder on 2013-05-24T13:38:00

This is very interesting, Pablo. I think, though, that unless one is extremely confident that (2) is false, the case for focusing on existential risk is still quite strong. Suppose, for instance, that one assigns credence 1/3 to each of the disjuncts. After taking your argument into account, the expected value of reducing existential risk would then be 1/3 of what one previously thought it was. Given the magnitude of the problem relative to others, I don't think a factor of three makes much of a difference. Furthermore, I think this applies to any reasonable credence distribution.
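For illustration, here is a minimal sketch of that expected-value comparison in Python (the value figure is a made-up placeholder, not an estimate from the thread):

    # Toy expected-value calculation; all figures are illustrative placeholders.
    # Assume existential-risk reduction only pays off if disjunct (2) holds,
    # i.e. we are neither doomed soon (1) nor living in a restartable sim (3).
    value_if_2_holds = 1e30    # hypothetical value of averting extinction under (2)
    value_otherwise = 0.0      # assume roughly zero long-run value under (1) or (3)
    credence_in_2 = 1.0 / 3.0  # equal credence in each of the three disjuncts

    expected_value = credence_in_2 * value_if_2_holds + (1 - credence_in_2) * value_otherwise
    print(expected_value)      # one third of the naive estimate -- still astronomically large

The point is that dividing an astronomically large value by three leaves it astronomically large, so the prioritization barely changes.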

Another possibility is that posthumans run ancestor simulations precisely because they want to generate strategies for mitigating existential risk. Suppose the simulators have developed WBE (whole brain emulation), but not any singleton-type superintelligence. They could then run lots of ancestor simulations at great speed to find out what civilizations that manage to avoid existential disasters have in common, in order to implement similar strategies themselves. In that case, researching existential risk could also affect the bottom-level universe. So even if we're in a simulation, reducing existential risk could be very important.

One could object that if posthumanity is able not only to run WBEs but also to simulate the physical universe accurately, it would already be in a position to deal with existential risk, and consequently wouldn't need to run simulations for this purpose.

Of course, it could be that once we find successful mitigation strategies, the simulators shut down all simulations, which (depending on how they decide to use their computational resources instead) could be very bad from the perspective of a classical utilitarian.

In general, though, I think it's good to be sceptical of arguments that assume too much about the motives of posthumanity...


Re: The simulation argument and human extinction

Postby peterhurford on 2013-05-24T14:34:00

aronvallinder wrote: Suppose the simulators have developed WBE, but not any singleton-type superintelligence. They could then run lots of ancestor simulations at great speed to find out what civilizations that manage to avoid existential disasters have in common, in order to implement similar strategies themselves.


I would be amused if it turned out that every simulated civilization merely turned to WBEs to try to simulate an answer for its own society. I suppose the simulation could make WBE somehow impossible, though.
Felicifia Head Admin | Ruling Felicifia with an iron fist since 2012.

Personal Site: www.peterhurford.com
Utilitarian Blog: Everyday Utilitarian

Direct Influencer Scoreboard: 2 Meatless Monday-ers, 1 Vegetarian, and 2 Giving What We Can 10% pledges.

Re: The simulation argument and human extinction

Postby Brian Tomasik on 2013-06-17T01:43:00

This is fascinating, Pablo!

I think horn #3 is most likely. This is not because it seems intrinsically most plausible (in fact, #1 seems intrinsically most plausible to me, or maybe #2 depending on which horn a paperclip maximizer falls into) but instead because, across all universes, if even a small number of universes do ancestor sims, those will dominate in the anthropics. The self-sampling assumption combined with modal realism begins to look like the self-indication assumption.
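To make the dominance point concrete, here is a toy version of the fraction reasoning in Python (the numbers are illustrative assumptions, not estimates):

    # Toy version of the anthropic-dominance point; numbers are illustrative assumptions.
    # If even a small fraction of civilizations run many ancestor sims, almost all
    # observers with experiences like ours are simulated.
    fraction_running_sims = 0.01       # suppose only 1% of basement civilizations run sims
    sims_per_civilization = 1_000_000  # and each of those runs a million ancestor sims

    sims_per_basement_history = fraction_running_sims * sims_per_civilization
    fraction_simulated = sims_per_basement_history / (sims_per_basement_history + 1)

    print(fraction_simulated)  # ~0.9999: simulated observer-moments dominate the count

Even if sim-running civilizations are rare, the simulated histories vastly outnumber the single unsimulated one.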

What if I'm wrong about modal realism? Isn't it overconfident to assume that modal realism is true? Yes, it is. However, if modal realism is not true, then the universe is much smaller, and my actions have vastly smaller impact (because I have fewer copies and fewer near-copies that I'm influencing). So I should act as though modal realism is true, because if it is, everything matters a lot more.

What implications being in a sim has for extinction risk is a fascinating question. Because there are so many kinds of sims, the space of possibilities remains vast.

Hedonic Treader's suggestion of making (2) more likely by discouraging ancestor sims is interesting. I think it makes no sense under causal decision theory, but it does under evidential and timeless decision theory.

Re: The simulation argument and human extinction

Postby Arepo on 2013-11-20T11:45:00

It seems as though you can still accept (3) and make a (weaker) case for existential risk reduction. The simulators' world might not be constrained by the same thermodynamic restrictions as ours, in which case, if we thrive, we might still be sophisticated enough as sims to populate the universe, simulate hedonium, etc. If we die and the simulators create another group to replace us, that group might not do the same (and you might take our extinction as evidence that the simulators tend to create sims who are likely to wipe themselves out).
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.

Re: The simulation argument and human extinction

Postby DanielLC on 2013-11-20T23:26:00

Brian Tomasik wrote: across all universes, if even a small number of universes do ancestor sims, those will dominate in the anthropics.


You assume ancestor sims are the only way to get a large population. If you have the technology to emulate human minds easily, why limit its use to having armies of slaves for human testing? What about kids? Sure, you might only want a few, but each of them will want a few, and so on.
Consequentialism: The belief that doing the right thing makes the world a better place.


