A thought I just had:
According to the simulation argument, either (1) humanity will soon become extinct; (2) posthumanity will never run ancestor simulations; or (3) we are almost certainly living in a simulation. Suppose (1) is true. Then the classical utilitarian case for focusing on existential risk reduction loses much of its force, since by assumption we are doomed to perish soon anyway. Now suppose (3) is true. Then it seems plausible that the simulators would simply restart the simulation soon after the sims manage to wipe themselves out, so the case for focusing on existential risk is again considerably weakened. It is only in the second scenario that extinction is (roughly) as bad as classical utilitarians take it to be. So we can conclude: if you think there is a chance that posthumanity will run ancestor simulations (i.e., that (2) is false), the prospect of human extinction is much less serious than you thought it was.
Thoughts?