Are increases in existential risks good or bad?


Are increases in existential risks good or bad?

Postby Hedonic Treader on 2011-04-19T20:44:00

What does an increase in existential risk mean? It means that Earth-originating life is less likely to create a galaxy-spanning, or even intergalactic, colonization process. I recently read an essay that estimated there could be about 10^41 additional life-years resulting from such a process.

Would that be good? Maybe yes. Or maybe no. Are we talking about post-abolitionist minds that are free from suffering by design? Or are we talking about 10^30-10^40 additional torture victims? Will continued existence be voluntary for all these minds? Will a significant percentage of them desperately wish they had never been forced into existence? Can we predict this?

Nick Bostrom and others are right in pointing out that the scope of this future makes it far more relevant on utilitarian grounds than any more local question. What I find troubling is the ease with which these authors jump to the assumption that life is probably generally worth living, and that it must therefore be good to create all these additional sentient entities. I expect that to readers here it is rather obvious that a number like 10^40 calls for an extremely thorough analysis of the reasons and conditions under which this assumption actually holds.

I think that from a negative-utilitarian perspective, increases in existential risk are generally good, even though they can imply more suffering on Earth, depending on exactly how the risks would materialize. After all, the expected number of sentient observer-moments is reduced if even a small probability of space colonization or pocket-universe creation is prevented. From a classical or average-maximizing utilitarian perspective, it strongly depends on the quality of the observer-moments that would exist in such a future. Is there any way to address this question without resorting to pure speculation?
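To make the structure of the disagreement explicit, here is a toy back-of-envelope sketch in Python. Every number in it is an invented assumption, not an estimate:

    # Toy sketch of how the sign of the colonization question flips with
    # the ethical weighting. All numbers are invented for illustration.
    p_colonization = 1e-3    # assumed probability of colonization
    life_years     = 1e41    # additional life-years (from the essay above)
    frac_suffering = 0.01    # assumed fraction of life-years spent suffering

    suffering_years = p_colonization * life_years * frac_suffering
    happy_years     = p_colonization * life_years * (1 - frac_suffering)

    ev_classical = happy_years - suffering_years   # symmetric weighting
    ev_negative  = -suffering_years                # only suffering counts

    print("classical EV: %.2e" % ev_classical)   # positive -> reduce x-risk
    print("negative  EV: %.2e" % ev_negative)    # negative -> x-risk 'good'

Under the classical weighting, everything hinges on frac_suffering; under the negative weighting, the result is negative for any nonzero value of it. That asymmetry is what I am pointing at.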
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon

Re: Are increases in existential risks good or bad?

Postby Jesper Östman on 2011-04-19T21:34:00

One important consideration for how a negative utilitarian should view existential risk is what a reasonable estimate of alien life in our future light cone is. There might be astronomical amounts of alien suffering out there (probably non-spacefaring, due to great-filter considerations). In this case, if we can expect the colonizers not to suffer astronomically, space colonization would be right even for negative utilitarians (although not as good as the vacuum-state transition that some theories implied the LHC might cause).

Of course, as you rightly point out, due to the overwhelming importance of the question, a much more rigorous investigation will be needed to get any good advice on how to view existential risk.

I discuss parts of the happiness assumption here: http://felicifia.org/viewtopic.php?f=23&t=348

At least prima facie, it seems that fairly happy space colonizers are far more probable than severely tortured ones. Furthermore, I think another important type of consideration when estimating things like this comes from expected future evolution and societal change, such as in: http://www.nickbostrom.com/fut/evolution.html

Pocket-universe creation is another issue. At least for a negative utilitarian, it seems to belong to a class of low-probability scenarios with astronomical or even infinite negative expected value (another might be the risk of a sadistic superintelligence). Here, considerations about alien civilizations in our future light cone might also be important, since there is also the possibility that our colonization might prevent them from doing such things. However, I think that will affect this question less.


Re: Are increases in existential risks good or bad?

Postby Hedonic Treader on 2011-04-19T21:57:00

Jesper Östman wrote:One important consideration for how a negative utilitarian should view existential risk is what a reasonable estimate of alien life in our future light cone is. There might be astronomical amounts of alien suffering out there (probably non-spacefaring, due to great-filter considerations). In this case, if we can expect the colonizers not to suffer astronomically, space colonization would be right even for negative utilitarians (although not as good as the vacuum-state transition that some theories implied the LHC might cause).


You're right, this is a relevant consideration. I wonder if there is an established formula for an expected utility calculus that takes such probabilities as variables, similar to the Drake equation. I recently came across a paper that examined the probability of planetary anthropic selection for climate change, with the result that planets with Earth-like biospheres are probably rare:

Planetary anthropic selection, the idea that Earth has unusual properties since, otherwise, we would not be here to observe it, is a controversial idea. This paper proposes a methodology by which to test anthropic proposals by comparison of Earth to synthetic populations of Earth-like planets. The paper illustrates this approach by investigating possible anthropic selection for high (or low) rates of Milankovitch-driven climate change. Three separate tests are investigated: (1) Earth-Moon properties and their effect on obliquity; (2) Individual planet locations and their effect on eccentricity variation; (3) The overall structure of the Solar System and its effect on eccentricity variation. In all three cases, the actual Earth/Solar System has unusually low Milankovitch frequencies compared to similar alternative systems. All three results are statistically significant at the 5% or better level, and the probability of all three occurring by chance is less than 10^-5. It therefore appears that there has been anthropic selection for slow Milankovitch cycles. This implies possible selection for a stable climate, which, if true, undermines the Gaia hypothesis and also suggests that planets with Earth-like levels of biodiversity are likely to be very rare.
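(A quick back-of-the-envelope check of my own on the combined figure: three independent results each at exactly the 5% level would jointly occur by chance with probability 0.05^3 ≈ 1.3 × 10^-4, so a combined probability below 10^-5 implies the individual results were somewhat stronger than the bare 5% threshold.)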


This would indicate a lower expected value of suffering alien wildlife in our future light cone. As for vacuum-state transition, the science fiction novel "Manifold: Time" entertains the idea that it could in fact be a means of massive pocket-universe creation. Speculation, of course, but a dreadful one for the negative-utilitarian hope of a clean solution to universal suffering.
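I am not aware of an established formula for the Drake-style expected-utility calculus I wondered about above, but here is a minimal sketch of what one might look like, in Python. Every factor value is a made-up placeholder, not an estimate, and the factor names (n_stars, f_life, u_wild, etc.) are my own inventions for illustration:

    # Hypothetical Drake-style expected-(dis)utility estimate for the
    # future light cone. Every factor below is an assumed placeholder.
    n_stars    = 1e22   # stars in the reachable volume
    f_life     = 1e-9   # fraction hosting sentient biospheres
    era_years  = 1e9    # mean duration of a biosphere's sentient era
    population = 1e12   # mean sentient population per biosphere
    u_wild     = -1.0   # assumed mean welfare per wild life-year

    alien_welfare = n_stars * f_life * era_years * population * u_wild

    p_colonize = 1e-2   # probability Earth-originating life colonizes
    f_averted  = 0.5    # fraction of that suffering colonization averts

    rescue_value = -alien_welfare * p_colonize * f_averted
    print("expected alien welfare: %.2e" % alien_welfare)
    print("expected rescue value:  %.2e" % rescue_value)

The only point of writing it down is to show that the sign and magnitude of the bottom line are controlled by a handful of deeply uncertain factors; a result like the anthropic-selection paper above effectively lowers f_life.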
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon

Re: Are increases in existential risks good or bad?

Postby Jesper Östman on 2011-04-19T22:38:00

I doubt there is such an equation currently, but it would be very useful to have one.

Interesting article. Yes, it seems to provide reason to lower the expected utility of such actions by about five orders of magnitude, though it might still be high.

In general, I think the person who has spent the most time on questions like these is Alan Dawrst. If he doesn't show up here, it might be a good idea to contact him: http://www.utilitarian-essays.com/


Re: Are increases in existential risks good or bad?

Postby RyanCarey on 2011-04-20T08:23:00

It seems that increases in existential risk are good from a negative-utilitarian perspective. However, I'm an ordinary utilitarian, and I believe that there's symmetry between positive and negative utility. The evaluation of existential risks then comes down to:
1) a decision about whether people are, on balance, happy, and whether they, on balance, bring about happiness
2) consideration of whether people will be able to become significantly better at bringing about happiness in the future

Although I am undecided about (1), I think that humans will more likely than not become increasingly ethical in the future. Existential risk, then, should be decreased. The obvious way to decrease it is to alert people to its existence, so the question becomes how we should inform people about existential risk. Long story short, I don't think there's any easy way to do this; people hate talking about existential risk. Before we do it, I think we need to make people more receptive to the idea, and the best way to do that is to promote utilitarianism. Even this may be too ambitious. Maybe we should promote the idea that ethics is something intelligent people should be willing to talk about in groups, in public, and with pride. I'm not sure that's something I'll see in my lifetime, but it seems reasonable to strive for it.
You can read my personal blog here: CareyRyan.com

Re: Are increases in existential risks good or bad?

Postby DanielLC on 2011-04-20T16:13:00

People are unlikely to build a society where they're unhappy when they have the power to do otherwise. It's possible that they'll give animals lives not worth living, but there's a limited amount of energy in the universe, and people aren't going to waste it on wild animals. It's possible that people won't grow to such numbers that they need all the energy, but then there are so few of them in comparison that it doesn't matter.
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Are increases in existential risks good or bad?

Postby Jesper Östman on 2011-04-20T16:49:00

Ryan:

I'd like to see more utilitarians, but I think it's very hard to convince people who don't have the right personality and emotional profile from the start. The resistance to many utilitarian ideas is deep, emotional, and very hard to affect. I think direct promotion of existential-risk concerns is a lot more promising. At least in my experience, many people who aren't pure utilitarians, or are even far from it, think these questions are extremely important once one has explained them.


Re: Are increases in existential risks good or bad?

Postby DanielLC on 2011-04-20T18:48:00

I wonder if it would be helpful to convince them to be rationalists first. I think most of the people on Less Wrong are consequentialists.


Re: Are increases in existential risks good or bad?

Postby Jesper Östman on 2011-04-21T19:26:00

Could be useful. From a utilitarian perspective, it might be roughly as useful to have more consequentialists of a certain type as to have more pure utilitarians (and the former may also be easier to achieve).


Re: Are increases in existential risks good or bad?

Postby Brian Tomasik on 2011-04-23T09:54:00

Thanks for this great discussion, Hedonic Treader! I normally have a lot to say on this topic, but you guys seem to have hit the major points already: Thinking about "cosmic rescue missions" to suffering extraterrestrials in our future light cone, the Drake equation, the potential expansion of suffering via our own space colonization, and the harm of creating new universes. (Are you sure we haven't talked before? ;) )

It's plausible that a fair fraction of the computing resources of civilizations are devoted to suffering. This could be true if suffering is computationally productive (e.g., sentient reinforcement-learning algorithms?) or if civilizations do scientific research using conscious simulations. So future human computational resources could be a cause for concern; however, other civilizations might run suffering computations as well (perhaps in vast quantities?), so it's possible that humans could prevent lots of suffering by trading with other civilizations, providing resources which are cheap for humans to create in return for fewer suffering computations run by the ETs.

You're right that given the number of sentient minds at stake here, this issue deserves detailed study. That said, my current stance is to punt on the question of existential risk and instead to support activities that, if humans do survive, will encourage our descendants to reduce rather than multiply suffering in their light cone. This is why I donate to Vegan Outreach and The Humane League, to spread awareness of how bad suffering is and how much animal suffering matters, with the hope that this will eventually blossom into greater concern for the preponderant amounts of suffering in the wild.

One last thought: If we ignore all cosmic considerations and just consider wild animals that will live on earth for the next two billion years before our swelling sun destroys them, then human survival could very well be net beneficial. In the present, habitat loss prevents existence for wild animals that would have lived in those environments. And as humans grow in technological sophistication, they may destroy nature entirely in order to harness its resources for computation or other uses.

Re: Are increases in existential risks good or bad?

Postby Hedonic Treader on 2011-04-30T00:21:00

Hi Alan, you raise some points of concern I share about the future.

It's plausible that a fair fraction of the computing resources of civilizations are devoted to suffering. This could be true if suffering is computationally productive (e.g., sentient reinforcement-learning algorithms?) or if civilizations do scientific research using conscious simulations. So future human computational resources could be a cause for concern; however, other civilizations might run suffering computations as well (perhaps in vast quantities?), so it's possible that humans could prevent lots of suffering by trading with other civilizations, providing resources which are cheap for humans to create in return for fewer suffering computations run by the ETs.

The second point strikes me as speculative. Given the vast time and space scales that would probably separate us from other civilizations, it's hard to imagine an overlap of technological development states that would allow for meaningful trade between us and them. Alien wildlife, or significantly more or less advanced civilizations, seem to be more plausible reasons for concern.

As for computationally productive suffering, I'm reminded of Robin Hanson's em revolution scenario and his projection of a resulting economic race to the bottom where ems live at near-subsistence with high workloads. To my mind, the crucial question in any such scenario (sentient mass computation for productivity) is how much of this process can be driven by carrots, and how much really needs sticks. If David Pearce is right about his premise that the adaptive functions of suffering can be replaced by "gradients of bliss", i.e. differential hedonic states within a purely positive state-space, then there is no good reason why a civilization should opt for the sticks - with the exception of pre-committing to torture as a game-theoretic weapon in conflicts. Unfortunately, this weapon has been used abundantly by humans so far, and it could reach a very large scale in a post-human future.

In the present, habitat loss prevents existence for wild animals that would have lived in those environments. And as humans grow in technological sophistication, they may destroy nature entirely in order to harness its resources for computation or other uses.

Right. It is surprising how underrepresented the meme of ethical ecosystem replacement/redesign is. When I mention it, people usually react as if I were motivated by malice toward wild animals, or they are genuinely surprised that anyone would question the ethical legitimacy of the natural status quo. Talk of "Mother Nature" and "Her" (!) alleged wisdom and benevolence is unfortunately widespread; rational analyses like yours are frustratingly rare. On the other hand, habitat loss also correlates with reduced sustainability of civilization, which is only good if a (post-)human future is very bad.

As for vegan outreach, merely trying to convince people of veganism is not sufficient, imho. Awareness-raising is highly relevant, of course, but I think that in order to realistically gain statistical ground, we need to push hard for research and development of sustainable, cheap, healthy, high-quality alternatives. Soy is not the (only) answer, and malnutrition doesn't convince majorities.

In-vitro meat could be a solution, but maybe not the only one. If we could develop an integrated process that starts with a biological substrate like tube-farmed algae, processes it to derive proteins and other nutritional building blocks, and then creates food products like eggs, meat etc. out of 3D printers, with properties virtually indistinguishable from the original animal products, we could phase out the whole paradigm of industrial animal use within decades. Once a proof of concept exists, and if it is sustainable and economically competitive, that is when majorities will accept the ethical relevance of veganism. At that point, democracies could accept a complete ban on using sentient beings as physical/industrial resources, for the first time in all of history. Before such alternatives exist, the practice will inevitably be rationalized by majorities of voters and consumers. The public discussion of animal rights and veg*anism has been too absorbed with consumer guilt vs. signals of moral superiority; it puts people off and distracts from a rational analysis of the solution space.

Another worthwhile goal may be a near-term scientific estimate of how realistic hedonic enhancement is, and what its side effects would probably be. Understanding the brain, and how it represents affect, seems to be at the core of this question. One could take specific scientific hypotheses and work from there in the relatively short term. For instance, how strongly is compassion contingent upon the ability to suffer oneself? Does reduced pain sensitivity automatically lead to a reduced level of compassion for the pain of others? If so, are there exceptions, and on what principles do they rely? I think this is relevant in order to prevent hedonic enhancement from creating a generation of uncompassionate people who can't understand why suffering is bad.

Additionally, the "gradients of well-being" feasibility hypothesis could be examined by taking specific aversive functions associated with suffering, and modelling solutions that would express them as positive gradients. For instance, the agony of suffocation, a very specific experience of negative affect in a very specific functional context, could be modelled in detail neurologically and information-theoretically. From this starting model, one could search for possible alternative implementations that would implement the same function by creating a strong lust for breathing (rather than a desperate need for it) under the specific conditions of suffocation. In other words, brain modules associated with desire and pleasure could be used to motivate breathing when blood oxygen is low, instead of the brain modules that currently give it negative affect. Whether this can be done in principle, and whether there is an information-theoretic difference in the general affective valence between the two implementations are extremely crucial empirical questions for hedonic enhancement, and I think they can investigated by neuroscience, again within a few decades.

It may make sense to push awareness of these specific questions into the public, scientific and political meme spheres.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon

Re: Are increases in existential risks good or bad?

Postby Jesper Östman on 2011-04-30T18:43:00

Many good ideas. A couple of points and comments:

At least prima facie, future em worker communities don't seem likely to contain much suffering. Note that Hanson himself believes that they will be happy, despite living at the subsistence level. But more general arguments are possible, based on the assumption that there will be strong selection for very high productivity among ems. Dedicated workaholics seem to have productivity levels far superior to suffering forced slave laborers, so we should expect the former rather than the latter, and they don't seem likely to suffer much (though perhaps some people would think they wouldn't be very "happy" or have a high level of "welfare", due to eudaimonistic concerns). For some thoughts on the lives of such ems, see e.g. http://www.overcomingbias.com/2011/04/a ... human.html
http://www.overcomingbias.com/2011/04/w ... death.html

Furthermore, there might be selection effects favoring ems which experience nothing but peak-productivity states (and perhaps some learning). See: http://singinst.org/upload/WBE-superorganisms.pdf

My main worry for suffering would be if it turned out that crude behavioristic learning methods were superior for high-quality tasks. That could then be a potential source of much suffering.


Re: Are increases in existential risks good or bad?

Postby Brian Tomasik on 2011-04-30T21:18:00

Jesper Östman wrote:My main worry for suffering would be if it turned out that crude behavioristic learning methods were superior for high-quality tasks. That could then be a potential source of much suffering.


Yes, I think this is what I had in mind.

One could also imagine suffering resulting from evolutionary mind-optimization algorithms, in which lots of minds are tried and a few are chosen based on their ability to weather the obstacles they're supposed to solve. Wild animals are examples of such minds, and the vast suffering in nature is the optimization process in action.
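A minimal sketch of that dynamic in Python, with purely illustrative stand-ins for the "minds" and the "obstacles"; the morally relevant feature is the large number of discarded candidates per survivor:

    # Many candidate "minds" are generated, scored against an obstacle,
    # and only a few survive to be varied further. Purely illustrative.
    import random

    def fitness(mind):
        # Stand-in for "weathering the obstacles": distance to a target.
        return -sum((x - 0.7) ** 2 for x in mind)

    population = [[random.random() for _ in range(5)] for _ in range(100)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]   # a few are chosen...
        population = [[x + random.gauss(0, 0.05) for x in parent]
                      for parent in survivors for _ in range(10)]
    print("best fitness: %.4f" % max(fitness(m) for m in population))

If the discarded candidates were sentient, the selection loop itself would be where most of the experience, and potentially most of the suffering, occurs.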

Then also there's potential suffering from simulations, presumably run for scientific purposes. From an anthropic perspective, this may best explain our own experiences, since we don't seem to be worker ems or crude learning algorithms being used for some larger task. That said, maybe we're being run to optimize something that isn't obvious to us.

