What does an increase in existential risk mean? It means that Earth-originating life becomes less likely to initiate a galaxy-spanning, or even intergalactic, colonization process. I recently read an essay estimating that such a process could yield on the order of 10^41 additional life-years.
Would that be good? Maybe yes. Or maybe no. Are we talking about post-Abolitionist minds that are free from suffering by design? Or are we talking about 10^30-10^40 additional torture victims? Will continued existence be voluntary for all these minds? Will a significant percentage of them desperately wish they had never been forced into existence? Can we predict this?
Nick Bostrom and others are right to point out that the sheer scope of this future makes it far more relevant, on Utilitarian grounds, than any more local question. What I find troubling is the ease with which these authors jump from the assumption that life is probably, in general, worth living to the conclusion that it must therefore be good to create all these additional sentient entities. I would guess that to readers here it is rather obvious that a number like 10^40 calls for an extremely thorough analysis of the reasons for, and the conditions under which, this assumption is actually true.
I think that from a Negative-Utilitarian perspective, increases in existential risk are generally good, even though they can imply more suffering on Earth, depending on exactly how the risks would materialize. After all, the expected number of sentient observer-moments is reduced if even a small probability of space colonization or pocket-universe creation is eliminated. From a classical or average-maximizing Utilitarian perspective, it strongly depends on the quality of the observer-moments that would exist in such a future. Is there any way to address this question without resorting to pure speculation?
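To make the expected-value point concrete, here is a toy calculation using the 10^41 figure quoted above, together with two numbers I am inventing purely for illustration: an assumed 10^12 total life-years if sentient life stays Earth-bound, and an assumed one-in-a-million probability that colonization ever happens.

E[life-years] ≈ 10^-6 × 10^41 + (1 − 10^-6) × 10^12 ≈ 10^35

Even at such a tiny probability, the colonization term dominates the Earth-bound term by more than twenty orders of magnitude. This is why, in expectation, anything that shifts that probability swamps every more local consideration, for better or for worse, depending on the value one assigns to those observer-moments.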