Existential risk reduction cost effectiveness



Postby Jesper Östman on 2010-12-25T18:02:00

My plan is to do a quick Fermi calculation for the minimum (note 1) expected happy (note 2) life years per dollar (hly/$) we can gain by reducing existential risk. Such a number will help us decide whether to use our resources for reducing existential risk or for other purposes, like reducing meat consumption or helping poor humans. I will do this by using results from two superb papers, Nick Bostrom's "Astronomical Waste" and Gaverick Matheny's "Reducing the Risk of Human Extinction", plus a couple of other assumptions. Matheny calculates the expected hly/$ gain assuming humans remain on earth. Bostrom considers the implications of space colonization. I will modify Matheny's result to account for space colonization (c-adjusted).

Matheny's result, without time discounting, is that an asteroid screening program will give us 0.4 hly/$. The hly come from the expected lives of all the future human generations which will live on earth if an asteroid does not make humans go extinct within the next 100 years (he assumes that after that point we will be able to handle asteroids in any case). Since he assumes that the human population remains on earth, the number of expected future humans will be astronomically larger if we assume a successful human space colonization. I will make a rough estimate of this number by multiplying Matheny's hly/$ result by the ratio of the c-adjusted total hly number to Matheny's hly number.

We get the total number of human hly by multiplying the population size (hly/y) by its lifetime (y). Matheny assumes a population of 10^10 which lasts for 1.6*10^6 years. That totals 1.6*10^16 hly. How much larger is the c-adjusted number? Bostrom considers the utility of colonizing our local supercluster, Virgo. According to him the cluster can support 10^23 humans, assuming just a conservative 10^10 humans on average around each star (note 3). This is my c-adjusted hly/y estimate. Assuming this output can be sustained for 10^11 years (note 4), our total is 10^34 hly. That gives us a ratio between the c-adjusted result and Matheny's hly of roughly 10^18.

Matheny's non-discounted hly/$ number was 0.4, so the c-adjusted hly/$ will be 4*10^17. Assuming the probability of a happy supercluster colonization is only 0.01, we still get the result that the c-adjusted number is an astronomical 4*10^15 hly/$. This is a huge number. It says that each dollar in the asteroid screening program will net us an expected four million billion happy life years.
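For anyone who wants to play with the numbers, here is the whole calculation as a minimal Python sketch (the figures are the ones quoted above from Matheny and Bostrom; the 0.01 colonization probability is my own assumption):

```python
# Fermi sketch of the colonization-adjusted (c-adjusted) estimate above.
# Figures are the ones quoted from Matheny and Bostrom; the 0.01 probability
# of a happy supercluster colonization is my own assumption.

matheny_hly_per_dollar = 0.4      # asteroid screening, no time discounting
earth_population       = 1e10     # hly per year if we stay on earth
earth_lifetime_years   = 1.6e6    # Matheny's assumed future human lifespan
matheny_total_hly      = earth_population * earth_lifetime_years   # 1.6e16 hly

virgo_population       = 1e23     # Bostrom: ~1e10 humans around each of ~1e13 stars
colonization_years     = 1e11     # roughly the current era of star formation (note 4)
colonized_total_hly    = virgo_population * colonization_years     # 1e34 hly

ratio = colonized_total_hly / matheny_total_hly   # ~6e17, rounded to 1e18 in the post
p_happy_colonization = 0.01                       # assumption

c_adjusted = matheny_hly_per_dollar * ratio * p_happy_colonization
print(f"c-adjusted: ~{c_adjusted:.0e} hly/$")     # ~2e15-4e15 hly/$ depending on rounding
```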

For comparison, estimated hly/$ for top non-existential-risk utilitarian interventions:

Vegan Outreach: 0.55-25 hly/$ (estimate by Alan Dawrst, "How much is a dollar worth, the case of vegan outreach")
VillageReach vaccination: 0.15 hly/$ (it saves a life for $545; although it's unlikely, I assume that one saved life gives 80 hly, so we get some 0.15 hly/$)
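The VillageReach figure is just a one-line division; a tiny sketch, where the 80 hly per saved life is my own generous assumption rather than a GiveWell number:

```python
# VillageReach back-of-envelope: cost per life saved, converted to hly/$.
cost_per_life_saved = 545                          # dollars
hly_per_life_saved  = 80                           # assumed happy life years per saved life
print(hly_per_life_saved / cost_per_life_saved)    # ~0.15 hly/$
```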

My conclusion is that even if a supercluster colonization (acceptable by utilitarian standards) is a lot less likely than 1/100, then for a utilitarian, asteroid screening and other long-term existential risk interventions will give far more expected utility than even the most effective short-term interventions.

Some comments:
a) Note that nothing more speculative than technology for self-replicating space-colonization and some hedonic enhancement is needed for this result (eg no singularity, superintelligence, molecular nanotechnology or uploading).

b) In particular, note that with access to conscious AI or brain emulation and powerful energy harvesting technology we could get up to 17 orders of magnitude more hly/$ (see note 3).

c) The 1/100 likelihood assumption includes the possibilities that we can't get space colonization technology, that we become extinct for some other reason, or that we wouldn't be motivated to colonize that much. Subjectively, I think the biggest obstacle here is the other risks. However, two considerations may make it reasonable not to give survival estimates several orders of magnitude less than this: (1) many experts on global catastrophic risk believe that the era before we start colonizing space will be especially dangerous, and (2) all of these experts give survival estimates above 50% (note 5).

d) Asteroid screening may not be the most effective way to reduce existential risk. Compare, for example, the utility of building a self-sustaining bunker. According to Matheny its cost would be of the same order of magnitude as the asteroid program. The latter would reduce a 1-in-a-billion risk by 50%. Subjectively, a conservative estimate seems to be that a bunker could reduce a total risk from biotech and other technologies of, say, at least 10% down to 9.5%. If that's the case then we would get some 7 orders of magnitude more utility. In this case, if we also use Bostrom's more liberal estimate, we could get up to 4*10^39 hly/$ from a bunker project.
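A quick sketch of the comparison in (d), assuming with Matheny that the two projects cost roughly the same, and using my own guesses for the bunker's risk reduction:

```python
# Rough comparison from point d): asteroid screening vs. a self-sustaining bunker.
# Costs are assumed to be of the same order of magnitude, so we compare the
# absolute reduction in extinction probability each intervention buys.

asteroid_risk_reduction = 0.5 * 1e-9       # halves a 1-in-a-billion extinction risk
bunker_risk_reduction   = 0.10 - 0.095     # my guess: 10% total risk down to 9.5%

advantage = bunker_risk_reduction / asteroid_risk_reduction   # ~1e7

asteroid_hly_per_dollar = 4e15             # c-adjusted estimate from above
liberal_factor = 1e17                      # Bostrom's computation-based scenario (note 3)

print(f"bunker advantage:      ~{advantage:.0e}x")
print(f"bunker (conservative): ~{asteroid_hly_per_dollar * advantage:.0e} hly/$")
print(f"bunker (liberal):      ~{asteroid_hly_per_dollar * advantage * liberal_factor:.0e} hly/$")
```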


Note 1
Minimum, because it is based on the numbers for an asteroid screening program; there may be other ways of reducing existential risk which are even more effective.

Note 2
I assume that future hedonic enhancement technology will ensure that the overwhelming majority of all these future life years will be "happy" from a hedonistic utilitarian perspective.

Note 3

Bostrom also considers a more ambitious scenario. In this scenario we use advanced molecular nano-technology to harness the total computing power from each star and use it to run as many human minds as possible. In this case we would get 17 orders of magnitude more hly.

Note 4

100*10^9 years = 10^11 years, assuming the energy output will be roughly constant for the "current era of star formation".

"The current era of star formation is expected to continue for up to one hundred billion years, and then the "stellar age" will wind down after about ten trillion to one hundred trillion years (1013-1014 years), as the smallest, longest-lived stars in our astrosphere, tiny red dwarfs, begin to fade. At the end of the stellar age, galaxies will be composed of compact objects: brown dwarfs, white dwarfs that are cooling or cold ("black dwarfs"), neutron stars, and black holes. Eventually, as a result of gravitational relaxation, all stars will either fall into central supermassive black holes or be flung into intergalactic space as a result of collisions.[95][96]"
from http://en.wikipedia.org/wiki/Galaxy#Formation_and_evolution

Note 5

These are some of the main sources on existential risk estimates; note that all are far below 100% risk. Of course, if the risk post-space-colonization hasn't decreased enough, cumulative risk over long periods of time could get very high. But it seems that with an exponentially increasing expansion most risks would quickly decrease.

Estimated probability of extinction (or similar):
50% (of a disastrous setback of civilization) in the next 100 years according to Sir Martin Rees - Our Final Century/Hour (2004)
30% in the next 500 years according to John Leslie - The End of the World (1996)
"Significant risk" according to Richard Posener - Catastrophe: Risk and Response (2005)
>25% Bostrom, "Existential Risks" (2002)

Jesper Östman
 
Posts: 159
Joined: Mon Oct 26, 2009 5:23 am

Re: Existential risk reduction cost effectiveness

Postby DanielLC on 2010-12-25T21:31:00

First off, is the doomsday argument taken into account? Also, EDT could lessen the effect, depending on how your priors work. We don't really know how dangerous the universe is, and if the prior for how dangerous the universe is isn't independent of the priors for how long humanity would live given a level of danger, then making the universe safer will mean that the universe was more dangerous in the first place.

Considering that expected utility will diverge if the prior for the total number of people doesn't fall off quickly enough, but that this problem doesn't apply to anything else, it seems somewhat reasonable to make the priors weird like that.
Consequentialism: The belief that doing the right thing makes the world a better place.

DanielLC
 
Posts: 703
Joined: Fri Oct 10, 2008 4:29 pm

Re: Existential risk reduction cost effectiveness

Postby Jesper Östman on 2010-12-25T23:12:00

Good point. No, I haven't included Doomsday, Filter, or Simulation considerations. The same goes for these particular papers from Bostrom and Matheny. Perhaps it can be argued that these arguments ensure that it is virtually impossible for humanity to survive long enough for a substantial space colonization. Of course, these arguments are also controversial.

Jesper Östman
 
Posts: 159
Joined: Mon Oct 26, 2009 5:23 am

Re: Existential risk reduction cost effectiveness

Postby rehoot on 2010-12-26T00:44:00

Maybe rough calculations would help to put alternatives into perspective.

Things to consider:

1) Adjust the cost of the plan by sending robots (instead of humans). They would carry some frozen germs or embryos for future reanimation--or maybe give the robots the knowledge to synthesize life from inanimate chemicals that can either be shipped or discovered at the destination (that reduces the risk that radiation would mutate everything in transit). Robots would build the technological infrastructure as the human population grows (calculate the risk that the robots would turn the humans into slaves). The robots would run on Mac OS X or Ubuntu, not Windows!!!

2) Consider social interaction effects. For example, reallocation of money to a project to save a few White families (capitalist families, wealthy families...) leads to civil unrest, then anarchy, then self-annihilation.

3) Consider alternatives to travel expenses (e.g., use SETI to call for a ride).

4) After having committed to the recolonization plan, an asteroid approaches and is a few years away with imminent collision. Finger-pointing commences. The recolonization plan is scrapped; the asteroid either misses or does not kill everybody, but the plan is dead, with the initial investment serving only as an amusement park.

5) What is the cost relative to alternative plans for survival in this solar system (e.g., the bunker you mentioned, living under the sea, living on Mars...).

6) What would be the cost of sending a self-perpetuating contingent of humans that could maintain the technology needed to continue planet-hopping? That expense might raise total cost exponentially.

7) Does a planet with 1 x 10^10 people really imply that they are happy?

8) What is the basis for the 1.6*10^6 years of human existence on earth? The results might vary greatly with that number. Would the numbers change if some survivors lived on and devolved into apes or an intermediate between humans and apes?

9) Consider unforeseen effects of relocating (radiation, traveling for four generations only to be annihilated by a tiny piece of space junk). This might demand a multiplicity of voyages and thereby increase costs.

rehoot
 
Posts: 161
Joined: Wed Dec 15, 2010 7:32 pm

Re: Existential risk reduction cost effectiveness

Postby RyanCarey on 2010-12-26T08:57:00

As far as I can understand, you've argued that preserving life on Earth is far more important than it might appear, because we can colonise other planets. But that only addresses one of the two key questions here. The other key question is whether humans (presently, and in the future) will contribute to or detract from the amount of happiness in the universe.
You can read my personal blog here: CareyRyan.com
RyanCarey
 
Posts: 682
Joined: Sun Oct 05, 2008 1:01 am
Location: Melbourne, Australia

Re: Existential risk reduction cost effectiveness

Postby Jesper Östman on 2010-12-26T16:59:00

Good point, Ryan. I'll reply in two parts.

Can we assume that the lives of space colonizing humans would on average make a positive contribution? I'll split this question into two subquestions: (I) Can we expect future human lives to be happy on average? (II) Would they create enough (animal) suffering to outweigh their own happiness?

(I): The happiness assumption

Personally, I think it is very likely that future human lives will be above 0, or even far above 0, so to me (II) seems to be the important concern here. Let me explain. My reason is that use of hedonic enhancement technology will likely be widespread. This is because such technology will be (a) available, (b) cheap and (c) in great demand.

(a) It seems unlikely that a civilisation with the capability for star-faring would not be capable of hedonic enhancement. In particular, it seems hard to deny our space-colonizing future humans such technology, since the time-scales we are considering are thousands, millions or even billions of years and the resources available to them will be immense (perhaps somewhere between II and III on the Kardashev scale). For example, since most of our current Western long-term happiness variation is determined by our genes, specially designed drugs or gene therapy should be able to make average future humans much happier than contemporary humans. At the very least, gene therapy should allow the average future human to have a life about as good as the happiest contemporary humans. So if any human in history had a life worth living, then the combined happiness of a space-colonizing civilisation using such technology will amount to an astronomical positive number. An uploaded, non-biological posthuman civilisation could of course enhance its hedonic states even more easily through mere software manipulation.

Furthermore, it is likely that the far-future humans we are considering could achieve mental states far happier and more pleasant, perhaps by several orders of magnitude, than even the happiest moments in contemporary human lives. If that is the case, the cost effectiveness of existential risk reduction would increase by several orders of magnitude again.

(b) Would the technology be available to most people? Manipulating our brains (or computer software) would require neither huge amounts of energy nor natural resources, so we can expect that basic hedonic enhancement technology will eventually become cheap and readily available. This seems especially likely when we consider timeframes of several millions of years of technological development.

(c) People are prepared to spend large amounts of money on products which promise to increase happiness but are very ineffective at doing so (eg much of our material consumption) or have big downsides (eg many contemporary recreational drugs). So it seems clear almost all people would strongly prefer to use cheap, reliable hedonic enhancement technology when available. This should especially be the case when the use of such technology has had thousands of years to become part of the culture.

My conclusion on (I) is that we have good reason to believe supercluster colonizing humans (posthumans) will be happy.

(to be continued)

Jesper Östman
 
Posts: 159
Joined: Mon Oct 26, 2009 5:23 am

Re: Existential risk reduction cost effectiveness

Postby Jesper Östman on 2010-12-26T22:21:00

Rehoot:

Thanks for the comments! Note that what I'm assuming is that humans will likely colonize space if we don't die out during a transitional period. So the cost-effectiveness calculations are for the interventions I'm considering (asteroid screening program, building a self-sustaining bunker) and not space exploration, since that's assumed to happen anyway as long as we survive (of course, trying to get an earlier/quicker/cheaper space colonization would also be a strategy for avoiding existential risk).

I take it that 1), 3), 4) and 6) concern optimal space colonization and not the other projects for reducing existential risk. In particular I agree that digital life-forms seem more suitable for star faring.

2) I take it this point is about the bunker suggestion. Yes, that would be one thing to consider. But I'm not sure how likely such unrest would be as long as there is no catastrophe in sight.

7) See post above.
8) That number might be more precise for apes than for humans, actually. It is Matheny's estimate, based on the life-span of our closest relative, Homo erectus.
9) Incidentally, not that dangerous.

Jesper Östman
 
Posts: 159
Joined: Mon Oct 26, 2009 5:23 am

Re: Existential risk reduction cost effectiveness

Postby Arepo on 2011-01-04T21:01:00

I have a couple of reservations (in addition to a general scepticism about Bostrom's premises):

1) It's not obvious to me how 'short-term' short-term happiness is, so it's not obvious we're comparing like with like. Existential risk reduction looks good because we assume its effects are felt a long way off.

Immediate happiness gets it in the neck because we don't assume hedons are powerful enough self-replicators to affect the indefinite future. But if you can find a hedon-generating cause whose hedons are expected to replicate more than once, you have near-infinite expected future hedons from just generating one now - a replication rate a fair bit worse than 1:1 can still allow very high expectation, IIRC, using similar maths to Bostrom's - you don't need a high probability of future survival/replication to give the hedons room to expand in expectation-space (if that's a useful concept), so long as you have a sufficiently high potential number.

A simple example is just being pleasant to someone. It's pretty clear that people who've recently had a pleasant interaction are - in at least some cases - more likely to interact pleasantly. It's not clear how much, obviously. And presumably there will be more effective hedon-replicating memes than simple pleasantries.

2) Comparing existential risk reductions with each other is incredibly difficult. Some, like asteroid defence, are relatively concrete, but others, like reducing the risk of nuclear/biological war, depend on the unmodellable interaction of societies and certain key individuals. (Nuclear/biological war seems to me - without much justification that I can offer - a much bigger threat than a species-killing impact, for which a quick search turned up a 1/5000 guesstimate for the next century, though it's not clear whether that's restricted to an impact that would actually wipe out humanity.)

If one assumes that human-animosity-caused extinction events are as or more likely than any others, then a combination of 1) and 2) makes me think that 'conventional' causes look much better than futurist thinkers tend to view them - partly because happiness propagation is potentially massively better than we normally see it, and partly because effective happiness propagation itself, using the reasoning in 1), seems like it must have an impact on reducing the quantity of animosity and thus the likelihood that animosity will destroy the world.

On this view, Toby's proposed charities look stronger than ever to me (although I can imagine complications with human psychology, eg people being more grateful for cure than prevention, even though the latter tends to be much more cost-effective in its own right), though animal welfare charities seem likely to suffer - animals are in a very weak position to spread memes. Promoting utilitarianism seems like a potential winner, too. I also feel that attention to climate change is potentially high on the list, since it's a political issue that economically threatens many of the more powerful players (and thus threatens to increase their animosity).
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Existential risk reduction cost effectiveness

Postby Jesper Östman on 2011-01-31T14:44:00

DanielLC:

About the doomsday argument:
It is an interesting challenge to the importance of existential risk reduction, since unlike other challenges (eg that space colonization might be hard or impossible) it makes future scenarios more unlikely the more people they contain. Because of this, where most other challenges would, if successful, only reduce the likelihood of a huge future population (and thus the expected utility from astronomical risk reduction) by a few orders of magnitude, the doomsday argument could potentially bring the astronomical expected utility down to an "earthly" size.

However, we must also take into account the probability of the doomsday argument being incorrect.[1] Since it is controversial, I wouldn't assign a probability higher than 0.5 to it. But even if one is fairly convinced that the argument is correct, a certainty far above 0.99 seems unmerited. Thus, the doomsday argument would have the same effect as the other challenges and at worst bring down the expected utility by a few orders of magnitude, keeping it astronomically huge.

[1] This reasoning is analogous to the theory uncertainty arguments employed by Ord et al to show that potential existential risks from physics experiments cannot be completely ignored even if we have good arguments purporting to show that they are impossible.
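To make the arithmetic explicit, here is the expected-value calculation in the same sketch style as before (the credences are purely illustrative):

```python
# How much can the doomsday argument deflate the expected value? Even a high
# credence in the argument leaves a residual branch where it is wrong, and in
# that branch the astronomical estimate still applies.

astronomical = 4e15   # c-adjusted hly/$ from the first post
earthly      = 0.4    # Matheny's earth-bound hly/$

for p_doomsday in (0.5, 0.99):
    expected = p_doomsday * earthly + (1 - p_doomsday) * astronomical
    print(f"P(doomsday argument correct) = {p_doomsday}: ~{expected:.0e} hly/$")
# 0.5  -> ~2e15 hly/$ (a factor of two)
# 0.99 -> ~4e13 hly/$ (two orders of magnitude, still astronomical)
```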

Jesper Östman
 
Posts: 159
Joined: Mon Oct 26, 2009 5:23 am

Re: Existential risk reduction cost effectiveness

Postby Jesper Östman on 2011-01-31T15:50:00

Arepo:
Which of Bostrom's premises do you doubt, and with roughly what certainty?

Existential risk reduction vs meme promotion
1) If I'm understanding your argument correctly, it is that meme promotion can be as effective as, or even more effective than, existential risk reduction. I completely agree; personally I believe that the two most important utilitarian issues are existential risk reduction and the promotion of memes which will increase the probability of a happier future. Examples of such memes would be (1) "without conscious happiness there is no value", (2) "the experiences of all sentient beings (eg farm animals, wild animals, AIs) are of comparable value", and (3) "the more happiness/happy beings, the better".

Why these memes? (1) would be a way to reduce existential risks from future human evolution, where non-conscious or fairly unhappy beings outcompete the happy beings (we could also describe it as a way of ensuring that the hedons keep reproducing in the long run). (2) is there to avoid scenarios where the astronomical numbers of future humans create similar amounts of suffering farm/wild animals (see my post below for a more comprehensive treatment of this issue). (3) is there to avoid scenarios where humans don't colonize space optimally.

Note that the concerns of meme promotion and existential risk reduction depend on each other for their value. The spread of happiness wouldn't matter much if we all die in 50 years. On the other hand, without space expansion or pleasurable consciousness, or with proportional animal suffering, it wouldn't matter much if the descendants of humanity survive for billions of years.

Since the concerns are very interrelated and overlapping, what counts as risk reduction and what counts as happiness promotion is (at least for a utilitarian, who could see scenarios where humanity survives but doesn't maximize happiness as existential risks) more or less a question of giving useful stipulative definitions.

What should we focus on in practice? This is a hard question obviously, which depends on a lot of empirical details. Generally, it would seem rational to use one's resources where they would make the biggest impact on the probability of a happy space colonization. Some existential risk reduction may be low-hanging fruit (eg more existential risk research, implementing regulations to prevent the synthesis of genetically engineered pathogens). The efficiency of meme promotion, and the respective importance of promoting eg (1)-(3), is harder to evaluate without a rigorous science of meme promotion. Perhaps promoting the spread of happiness over other "dangerous" values such as life/replication is most important, since it seems unlikely that we will have a lot of suffering animals around in the far future or that we would forego a continuing space expansion.

Jesper Östman
 
Posts: 159
Joined: Mon Oct 26, 2009 5:23 am

Re: Existential risk reduction cost effectiveness

Postby Jesper Östman on 2011-01-31T16:52:00

Arepo:
According to the experts I've heard, and my own judgment, the anthropogenic risks seem much bigger than all the natural risks taken together.

I used the asteroid/comet risk as an example because it's a case where we can relatively easily get empirically supported probabilities and where we know of concrete countermeasures. The aim was to get a baseline case showing that even relatively inefficient ways of reducing risk are astronomically cost-effective. So if the alternative is doing nothing at all, it would be very worthwhile to use one's resources to promote asteroid defence. If our alternatives aren't constrained in that way, it would be better to focus on mitigating the anthropogenic risks.

Efficiency of ordinary causes for risk reduction
I agree that reducing global conflicts and also climate change is very important. The main reason for favoring non-standard causes isn't that the other projects are unimportant but that so many resources are already spent on them. A couple of million dollars could, for example, perhaps double the existential risk research field, whereas it would be a drop in the ocean when it comes to global poverty reduction, work against climate change or work for peace/nuclear disarmament.

For example, many are skeptical about whether foreign aid has any positive effect at all. However, I do faintly recall a World Bank report claiming that each billion dollars spent would roughly increase the GNP of a normal developing country by 1%. Assuming the skeptics are incorrect and that optimal GWWC-targeted charities are 500 times more effective than government aid, we could get a 1% increase from a couple of million dollars. That would likely at best reduce global risk marginally.
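A back-of-the-envelope version of that comparison, using only the rough figures mentioned above (the faintly recalled World Bank claim and the assumed 500x effectiveness edge):

```python
# Rough aid comparison; both inputs are the informal figures from the post,
# not researched estimates.

aid_dollars_per_1pct_gnp = 1e9    # ~$1 billion of government aid per 1% GNP increase
gwwc_multiplier = 500             # assumed advantage of optimal GWWC-targeted charities

charity_dollars_per_1pct_gnp = aid_dollars_per_1pct_gnp / gwwc_multiplier
print(f"~${charity_dollars_per_1pct_gnp:,.0f} per 1% GNP increase")   # ~$2,000,000
```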

Promoting utilitarianism, or just important related memes?
When it comes to promoting utilitarianism, I think the best strategy would be to promote the components of utilitarianism which are most important for utilitarians. To see this, let us compare three scenarios.
In (A) the world is governed according to common-sense ethics, in (B) according to a mix of common-sense ethics and the important parts of utilitarianism, and in (C) according to pure utilitarianism. Reasonably, assuming the corresponding worlds are shaped according to these ethical positions, the (C) and (B) worlds will contain similarly astronomical amounts of happiness, whereas (A) would be relatively worthless. Now, given that utilitarianism is very counter-intuitive for most people (largely because of a bunch of more or less good intuitive objections which a mixed position could avoid), it would seem far easier and more cost effective to aim at moving people's values towards (B) rather than towards pure utilitarianism.

What would such a mixed position look like? Let us assume the mixed view contains (or has as consequences) the content of memes (1)-(3). So happy experiences are necessary for value, and we get more value the more of them we have (ideally without any discounting for time/space/amount). Animal experiences are valuable, although perhaps somewhat less so than human experiences. Now the view contains the most important utilitarian components. In addition, the position could hold common-sense views such as that sadistic pleasure and pleasure based on illusion are worthless. This would directly avoid the purported gang-rape and experience machine counter-examples against utilitarianism. Furthermore, lots of things could bear value, as long as this value is small compared to that of happy/painful experiences: eg nature, beautiful things, complexity and whatnot. The important thing is that the pleasures we de-value wouldn't likely form an essential part of the happiness in a happy cosmic expansion scenario, and that the additional things besides happiness that we value either don't conflict with this goal or likely won't reach astronomical values. Perhaps we could even include fairness of distribution (an unfair distribution doesn't seem essential to space expansion) and some punishment/desert (the losses here would be low compared to the total happiness). It could even be possible to include some watered-down deontology (and the deontology of common people is watered down), eg that it is wrong to kill as long as the gains aren't huge (eg saving 100 happy people).

Jesper Östman
 
Posts: 159
Joined: Mon Oct 26, 2009 5:23 am

Re: Existential risk reduction cost effectiveness

Postby Jesper Östman on 2011-01-31T17:30:00

(II) The happiness assumption: future animals
Returning to RyanCarey's concern that future humans might detract from the amount of happiness in the universe. In a reply above I argued that the humans themselves will likely be very happy. Here I will investigate whether they are likely to cause huge amounts of suffering to non-human creatures. I see two main scenarios where future humans would cause such suffering. In the first (S1), for billions of years the humans keep farming and eating animals, which are raised under conditions similar to or worse than contemporary factory farming. In the second (S2), they keep huge amounts of (by assumption) suffering wildlife. It is not just that wild animals and farm animals are kept alive on earth; they are brought to each or most of the billions of new colonies.

Personally, I think it is unlikely that farm animals will be kept around for that long, for a few reasons. Advanced in-vitro meat will likely be developed within a thousand years at the latest. Such meat would be superior to meat from farmed animals in several ways: (1) it would be a lot cheaper and more resource-effective, (2) it would taste better, (3) it would be healthier, and (4) it would be seen as more ethical. The better technology we get, the more important (1) would be, since the matter and energy used for raising farm animals could be used for much more valuable things (eg nanotech, supercomputers, robotics). The only reasons for paying such a high price for keeping animals around would be either very strong preferences for traditional living (which doesn't seem likely in a space-faring civilisation) or for making animals suffer.

Furthermore, if humans upload, or the overwhelming majority of sentient beings become non-biological in some other way, that would mean a prompt end to meat consumption.

What about wild animals? The main risks here are perhaps that future humans would spread huge amounts of (largely suffering) wildlife to the new star systems they colonize or, assuming futuristic physics, create new universes filled with life. If one finds such scenarios likely, it would perhaps be more valuable to spread memes about the importance of wild animal suffering - or about theodicy-like responsibilities not to create huge amounts of suffering lives. Or risk reduction should be complemented by such meme promotion.

Personally, I find it unlikely that future humans would create astronomical amounts of wildlife, for similar reasons to why I don't believe they'd eat meat: (1) it would be extremely expensive to use whole planets as wildlife parks, and (2) simulations could give a superior, and astronomically cheaper, experience of wildlife.

The second point, (2), introduces a new and perhaps more likely problem. It is possible that future beings would keep around massive simulations which could potentially be filled with suffering minds for entertainment, experimentation or other purposes. The same remedies as for the wild-animal case are likely the way to go here.

Jesper Östman
 
Posts: 159
Joined: Mon Oct 26, 2009 5:23 am

Re: Existential risk reduction cost effectiveness

Postby Hedonic Treader on 2011-04-20T20:32:00

Some points that may affect the probability of the happiness assumption:

I'd suspect a) conflicts over limited resources, and b) power concentration as abstract problems with the potential to cause amounts of suffering that may scale with population and complexity, including complexity of available technology.

a) The conflict problem is exemplified in predator/prey relationships in ecosystems as well as in wars between tribes, nations, and maybe future interstellar entities. The future risks here lie in astronomically large-scale war or game-theoretic arms races like mutually assured mass supertorture between conflicting entities. The probability of such scenarios may be higher than we think. The beginning of colonization may be a crucial tipping point in the future, deciding whether or not conflicting large-scale entities can come into existence and what degree of evolutionary freedom vs. enforced coordination applies from the start.

Potential anhedonic selection pressures between replicating entities such as uploads, clones etc. might also occur in any system that doesn't suppress free evolution and allows replicating entities to reach carrying capacity of the resource base, given the assumption that there's even the slightest adaptive advantage in experiencing less-than-zero affect when losing out on replication resources. Once we understand how minds in general work, it may be worthwhile to find out if there could be mind designs that always produce net-positive observer moments despite this problem, and whether evolutionary pathways leading away from this design principle can be somehow blocked on a fundamental level (improbable, I would assume).

b) The power concentration problem is exemplified in dictatorships and oligarchies, as well as factory farming. Equivalent future risks here would come from a totalitarian anhedonic expansion process that holds a tight technology-driven grip on the sentients that are forced to be components of the system. The probability of this seems relatively low, since mind-control technologies could actually make torture obsolete in many ways. An expansion system of disposable superhappy slaves without personal rights may sound dystopic, but the utility landscape may still be net-positive - unless the decision-making entities of the system are sadistic or indifferent.

Sadism seems unlikely unless an initial lock-in is originated by a dominant alpha-male psychology that represents the causation of suffering as a symbol of hierarchical status, which is valued. Absolute indifference to the well-being of other sentients is also not terribly likely, since the decision-making entities will initially originate from human systems, and almost all humans have at least some level of benevolence bias, all other things being equal. However, this point may be undermined if initial hedonic enhancement of decision-making entities strips them of vital aspects of compassion - e.g. if they are unable to comprehend that observer moments can be below zero hedonistically, they may not intuitively understand the necessity of preventing certain mental states at all while designing their de-facto totalitarian system. This may also lead to a scenario where they create pocket universes or sentient simulations containing suffering, without being successfully convinced by memes that highlight the ethical importance of not creating them.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

