What should negative-leaning utilitarians make of x riskers?

Postby Brian Tomasik on 2013-02-24T06:04:00

Update, 19 Mar 2013

Upon reading more about this topic, I'm less convinced by the argument in the following essay, and I'm more worried that the extinction-risk-reduction community may indeed be causing astronomical amounts of net suffering in expectation. I hope to learn more in order to update my assessment, but for now, I do worry when I see people move toward extinction-risk reduction.

Summary

I think space colonization is more likely to increase suffering than to decrease it, because it's easier to create new computational resources than to reduce those that already exist, and the existence of massive computing power has a decent chance of leading to some hellish outcomes. That said, it's not necessarily true that efforts to promote extinction-risk reduction are net harmful. This is because, while such efforts do affect the probability of colonization somewhat, they also affect the character of the civilization that does the colonizing. Many extinction-risk measures may do more to prevent very nasty futures than to change whether a future happens at all. This may not always be true -- e.g., with physics disasters -- but in most cases, it seems that suffering reducers might not be especially opposed to the actual projects being worked on by the extinction-risk reducers.

That said, the topic remains unclear in my mind, and the expected value of further research remains high. Regardless of the sign of extinction-risk reduction, I will continue to encourage pro-colonization folks to work on projects more targeted at reducing future suffering, such as promoting antispeciesism, opposition to wild-animal suffering, and concern for suffering artificial sentients.

The above argument is weakened if you believe that many potential futures will be dominated by Darwinian forces out of human control, because in that case, social instability / barbarity matter less to the badness of the future. I think a singleton is likely enough that concerns about civilization's humaneness still matter.

Introduction

Utilitarians who take suffering very seriously worry that a future in which humans colonize space and create vast computational resources would be very bad, because it would entail a significant chance of simulating huge numbers of suffering experiences.

But the utilitarian and effective-altruist movements also contain many members who want to create vast amounts of computing power in order to simulate fun, interesting, or happy futures. They focus on the positive outcomes and don't give significant weight to the figurative and literal hells that would likely come along for the ride. These people tend to work on reducing extinction risks and otherwise preparing the ground for astronomical amounts of future computation.

What should negative-leaning utilitarians make of their friends and allies who want to colonize space?

How bad would colonization be?

If I had a button to prevent post-humans from colonizing the galaxy and I had to choose right now whether to press it, I would press it. I'd give something like a ~70% chance that it would be net bad for post-humans to colonize space, other things being equal. This is because computational power is dangerous to have lying around, inasmuch as some fraction of it will probably be used to simulate suffering, torture, or worse. There's some chance things could get very ugly, and getting ugly with astronomical amounts of computing power at your disposal is not a pleasant thought. There are also risks of spreading wildlife into space and into sentient simulations, of creating suffering subroutines, of sadism, of the triumph of savage ideologies like fundamentalist religion, etc.

There are some scenarios where space colonization might be good, and these are the reason I maintain a ~30% probability that I would welcome it. Compassion may be rare among advanced civilizations, since it may either never have evolved or have been selected away during power struggles in which the most ruthless survived. If alien civilizations are causing massive suffering to sentients, we might be able to trade with them or intervene to stop them. That said, this possibility seems somewhat remote because I'm not sure that sentience is very common in the universe. The particular kind of conscious recognition of emotion that Earth-based animals have may be one of a number of possible cognitive-motivational systems, and we may not care about the other kinds.

If there are suffering wild animals within reach, we could conceivably reduce their suffering through "cosmic rescue missions," although it seems more plausible that post-humans would spread wild-animal suffering than that they'd prevent it. Assuming life is hard to get started, there are many more planets that could be seeded with life than planets that already have it, and even where life does exist, it's probably nonsentient, like plants and bacteria.

Finally, there might be undiscovered physics or other unknown considerations that would allow for preventing massive suffering given a galactic post-humanity. But at the same time, there's the (probably greater) chance of physics allowing for multiplication of massive suffering, such as by creating universes in a lab. So once again, the sign could be in either direction, but it seems more likely to be in the negative direction.

The net sign of space colonization is an important question to research, but until we know more, we have to make decisions based on our best guess for now, and right now, my best guess is that space colonization would cause a big net increase in suffering.

Are efforts to increase colonization bad?

It might seem like the answer to this question would follow directly from the above discussion, but it doesn't, because even if colonization is bad other things being equal, other things are not in fact equal. The pro-colonization people do many activities that may not be as unfortunate as colonization itself would be.

Colonization supporters usually focus on reducing "existential risks," because they accept Bostrom's suggestion in "Astronomical Waste" that what's more important than hastening a colonization future is making sure that it can happen at all, which means not letting technology stall permanently and not letting humanity go extinct.

How likely is extinction?

Some doomsayers put the risk pretty high. Chances of extinction in the next few centuries have been estimated as
  • 50% by Martin Rees in Our Final Hour
  • 30% by John Leslie in The End of the World
  • at least 25% by Nick Bostrom in "Existential Risks."
I assume these estimates don't account for the Doomsday Argument, although this is probably okay because I'm doubtful that the Doomsday Argument applies, assuming modal realism.

In the "Global Catastrophic Risks Survey" of 2008, the cumulative risk of extinction was estimated on average as at most 19%, the highest two subcomponents being AI risk and nanotech risk at 5% each. I personally think the AI-risk number is too low, but everything else seems roughly correct.

In one LessWrong comment, Carl Shulman walked through non-AI risk scenarios and argued that none of the others had significant probability of causing extinction. One exception was with nanotech, where he said that while he was not convinced it was a big deal, "Others disagree (Michael Vassar has worked with the CRN, and Eliezer often names molecular nanotechnology as the x-risk he would move to focus on if he knew that AI was impossible)."

There may remain "unknown unknown" risks that are quite severe but haven't been considered yet. If the extinction-risk community identifies them, then the upper bound for their impact would be bigger than we would have expected based on just the above numbers. On the other hand, such risks may be very hard to discover, or may be most likely discovered in ways other than targeted exploration of extinction risks.

Catastrophic risks are often not existential

My core idea in this section is that even if I would press a magic button to prevent colonization ceteris paribus, the actual work that people do to advance colonization may often not be so bad.

For example, take the Global Catastrophic Risk Institute. Most of its research areas center on non-extinction-scale topics: "Emerging Technologies; Environmental Change [I'm not such a fan of work in this area]; Financial Collapse; Governance Failure; Infectious Disease; and Nuclear War." The cumulative extinction risk due to these is probably quite small. On the other hand, if these things did happen, they could have significant effects on the degree of compassion that the future contains. Wars, religion, bad governments, pandemics, etc. could lead to a world where people are more selfish, more tribalistic, more scared, more aggressive, and less focused on being humane. If such a civilization colonized space, the result could be much worse than if a nicer civilization colonized space. In the extreme cases, a nasty world might lead to large-scale sims of hellish conditions due to ideology or warfare, which could be many times worse than the suffering that would result from a friendlier civilization. I'm not suggesting that working on these risks is the most efficient way to prevent bad futures, and indeed, it's not obvious that the expected impact is actually to reduce rather than increase future suffering. I hope we can research this more going forward. But right now, some of GCRI's activities seem like they could go either way.

The key point here is that the worst futures from Earth-originating life could be many times worse -- orders of magnitude worse -- than the median-case bad futures. So our attention will be drawn toward averting the worst of the worst colonization scenarios, rather than focusing on whether colonization happens at all.

Say we could affect the probability of colonization happening by 10^-6 percent. With the same resources, maybe we could reduce the expected suffering of the outcome given colonization by 2 * 10^-6 percent. Assuming the baseline probability of colonization is at least 50%, then the latter strategy is better. Now, there's still work to be done to compute these probabilities, but I'm just highlighting the way things could turn out.
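To make the comparison concrete, here's a rough sketch in Python using the made-up numbers from this paragraph. The variable names and figures are placeholders for illustration, not real estimates.

```python
# Toy comparison of the two strategies above, with made-up numbers.
S = 1.0                        # expected suffering if colonization happens (normalized)
p_colonization = 0.5           # assumed baseline probability of colonization (>= 50%)

delta_p = 1e-6 / 100           # strategy A: reduce P(colonization) by 10^-6 percent
delta_frac = 2e-6 / 100        # strategy B: reduce suffering given colonization by 2 * 10^-6 percent

benefit_A = delta_p * S                       # suffering averted by making colonization less likely
benefit_B = delta_frac * S * p_colonization   # suffering averted by making colonization less bad

print(benefit_A, benefit_B)    # equal at P = 50%; strategy B wins for any higher P(colonization)
```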

Actually, the comparison above neglected the possibility that a colonization-level civilization could itself be good. Earlier I suggested ~30% for this probability, so the actual expected benefit of preventing colonization is only (70% - 30%) = 40% times the probability of doing so. In that case, the relevant comparison for an existential-risk intervention is, on the negative side, (~40%) * (increase in probability of colonization) vs., on the positive side, (average fractional reduction in badness of colonization) * (probability of colonization).

For example, say that bad colonization is -1 and good colonization is +1. Because colonization could be good with probability 0.3, its expected value is 0.3 - 0.7 = -0.4. Now, consider an intervention like trying to prevent nuclear winter. The probability of permanent tech standstill due to nuclear winter is probably pretty small -- say 0.004 including model uncertainty. But for every permanent tech standstill you prevent, you also prevent, say, ~40 instances of nuclear destruction that cause significant harm but not tech cessation. Say the probability of colonization is 0.2 and that a future without nuclear destruction is in expectation 5% less barbaric than one with it. The probability of survivable nuclear destruction is 0.004 * 40 = 0.16. Then we have

(-0.4)(0.004) + (0.16 * 5%)*(0.2) = -0.0016 + 0.0016 = 0.

I rigged the numbers to come out this way, but I think the estimates seem pretty plausible. In any event, I didn't count the possibility that anti-nuclear efforts might have humaneness benefits even in scenarios where there isn't a realized nuclear disaster.

Does 5% sound too high for the reduction in barbarity of the future by preventing nuclear destruction? Maybe the median case would be smaller than 5% -- society might bounce back to humane values pretty robustly or might not be overly affected in the first place. But I think 5% is reasonable or even conservative when we count the tail risks. Say there's a 5% chance that nuclear war would cause the future to have suffering of -2 instead of -1. Then averting nuclear war reduces the expected badness by 5% * ( (-2) - (-1) ) = 5% of the baseline average value. Or maybe it prevents a 0.5% risk that the future would be -11 instead of -1. And so on. The increased badness of the future could get really steep in the tail of the probability distribution.
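Here's a small sketch that reproduces the arithmetic of the last few paragraphs, again using the illustrative guesses from the text rather than numbers I'm confident in:

```python
# Reproducing the nuclear-winter example, with the illustrative numbers from the text.
p_good = 0.3                                   # chance colonization is good (+1); bad (-1) otherwise
ev_colonization = p_good * 1 + (1 - p_good) * (-1)            # = -0.4

# Negative side: preventing nuclear winter slightly raises the chance of colonization.
p_permanent_standstill = 0.004                 # permanent tech standstill caused by nuclear winter
harm = ev_colonization * p_permanent_standstill               # = -0.0016

# Positive side: most nuclear destruction is survivable but makes the future more barbaric.
p_survivable = 0.004 * 40                      # = 0.16
barbarity_reduction = 0.05                     # future ~5% less bad without nuclear destruction
p_colonization = 0.2
benefit = p_survivable * barbarity_reduction * p_colonization  # = 0.0016

print(harm + benefit)                          # ~0: the two effects roughly cancel

# Where a 5% reduction might come from: small chances of much worse tails.
print(0.05 * ((-1) - (-2)))                    # 5% chance of -2 instead of -1   -> 0.05
print(0.005 * ((-1) - (-11)))                  # 0.5% chance of -11 instead of -1 -> 0.05
```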

More existential risks

Another example: The Machine Intelligence Research Institute (MIRI). Maybe its most significant impact so far has not been in the area of AI but in the area of rationality, such as the creation of LessWrong. Ignoring colonization considerations, rationality promotion of this kind may be positive, because it improves people's effectiveness at doing good in the world, and the LessWrong folks often promote utilitarian-style thinking (not as much as we might wish, but at least more than most intellectuals do). Insofar as this reduces suffering and helps create more humane futures, this is a desirable impact. Insofar as this leads to more extinction-risk reduction, this may not be a good impact.

In the longer term, if MIRI continues in the direction of shaping AI-safety research, then its bigger eventual impacts may lie there. Is solving the AI control problem good or bad? This again is unclear -- I'm not sure if a paperclipper would be better or worse than a "friendly AI."

On the one hand, the paperclipper would perhaps have fewer moral restraints against sentient simulations, extortion, suffering subroutines, and the like. On the other hand, a dumb enough paperclipper might just turn the solar system to paperclips (or whatever it was optimizing) and be done with it, rather than simulating conscious minds, creating lab universes, and leaving open the possibility of bad outcomes down the road the way a friendly AI would. (A smart paperclipper would also have an incentive to create lab universes for the paperclips they would contain.) Is a dumb or smart paperclipper more likely? Keep in mind that it might not be a single agent but rather a byproduct of interacting systems/agents. Anyway, right now I don't have a strong opinion about which is better, so my current estimate for the value of MIRI's work is that it's at least not very negative in expectation. This could change upon further thinking about possible scenarios.

Similar suggestions might apply for Future of Humanity Institute -- they promote utilitarian-style thinking, and their research isn't obviously net harmful. They may do a little to increase the chance of colonization through impacts on extinction risks, but like with GCRI, the specific interventions involved here may not be net bad when you consider their potential to improve the future in those scenarios where survival happens either way.

There are scenarios in which colonization would be averted even without extinction -- such as if a totalitarian government put a stop to tech development indefinitely. A friend of mine thinks this is unlikely, because usually those with the most power are also sophisticated enough to realize the importance of technology. In theory, a highly ideological movement could sweep the world and prevent tech development, but empirically, people like this tend not to be as rational as others who might outcompete them. In any event, an anti-tech ideological movement would probably carry along other deleterious baggage (e.g., religious tendencies and attendant barbarity / out-group hatred), which means that if the dictatorship did eventually develop AI after all, the result could be far worse than if a more humane society developed the AI. In other words, efforts to preclude the spread of an anti-tech dictatorship movement could prevent net suffering in expectation.

Another area is nanotech -- e.g., the Foresight Institute or Center for Responsible Nanotechnology. This scenario seems more purely extinction-focused than those mentioned previously, because the problems may be more technological (rather than having as many social spillover effects) and because based on the previous discussion, it seems nanotech is arguably the biggest non-AI extinction risk. So I might be more concerned if utilitarians tended toward this area, but empirically, I don't know many who have done so. In any event, for every possible outcome where nanotech prevents colonization, there are probably many more where it destabilizes society but doesn't permanently stop colonization.

Other risks to consider are asteroids and physics disasters, but these are small enough that they're probably not worth considering. (Is this true even on a per-dollar basis, though? But the asteroid work would probably soon be saturated.)

How strong is the "humane civilization" argument?

Above I suggested that for many catastrophic risks, especially those unlikely to prevent colonization in the long run, working to avert them might help to lead to more humane futures, which would be better in expectation even by the lights of suffering reducers. I don't claim this is necessarily true, but it might be true. How strong is this leap from "more stable future" to "lower risk of really bad colonization scenarios"?

In my mind, I'm mainly drawing on the correlation that we observe historically: In the past, society was more impoverished, diseased, and chaotic, and it was also more barbaric, aggressive, and uncaring. It seems plausible that this correlation is more than chance. For example, people who live in a more nurturing environment may be more upset by the thought of suffering and may be less likely to incline toward retribution. Maybe the decline in support for the idea of religious Hell in the West is an indication of this, not to mention the decline of violence.

The Wikipedia entry for The Better Angels of Our Nature cites the following as some of the factors that Steven Pinker believes have reduced aggression:
the emergence of a strong government/authority with a monopoly on violence, the interconnectivity of cultures through the need for trade; increased literacy, urbanisation, mobility and access to mass media - all of which have exposed different cultures to each other - and the spread of democracy.

It seems all of these would be set back by most of the non-extinction-level disasters discussed above. For example, in his book review, Peter Singer suggests that climate change "could mean the end of the relatively peaceful era in which we are now living," at least in those regions most affected by climate fluctuation. (On a global scale, Singer's statement may be hyperbole except for the tail risk of global-warming scenarios. In any event, only the tail scenarios are relevant for existential considerations.)

The argument still needs a jump from "violence here and now" to "more barbaric outcomes in post-colonization simulations," but such a leap seems possible, assuming that savage ideologies and unconcern for suffering take time to disappear. It's also at least remotely possible, if not very likely, that the relatively compassionate society in which we live now is a rare and precious accomplishment that wouldn't necessarily reproduce itself if civilization were disrupted and then had to re-evolve its technological prowess.

On the other hand, is it possible that some amount of disruption in the near term could heighten concern about potential future sources of suffering, whereas if things go along smoothly, people will give less thought to futures full of suffering? This question is analogous to the concern that reducing hardship and depression might make people less attuned to the pain of others. Many of the negative-leaning utilitarians that I know have gone through severe trauma or depression at one point, and I think this is more than coincidence.

My guess about this concern is that while some amount of suffering may be important for empathy, socially disruptive events could still be net harmful, because unstable conditions don't give people room to reflect on their moral obligations toward others; they're too worried about saving themselves. And empirically, it seems that more socially disruptive environments are correlated with more barbaric ideologies. This is true both at a macro level (e.g., harsh religions and ideas about revenge) and at an individual level (e.g., abusive people often grew up in abusive environments, although genetics can play a role there too).

Closing words against hostility

This discussion points out subtleties in the debate between suffering reducers and space colonizers. Sometimes these camps may agree, and sometimes they may disagree. But at no point should the camps become hostile toward one another. I think we should encourage space colonizers to work on ways to reduce suffering in the future that are more targeted and less likely to have baleful consequences (e.g., promoting antispeciesism, opposition to wild-animal suffering, concern for suffering sentient simulations, etc.), but we should still ally with even those who don't join us. We share many utilitarian underpinnings, and we can learn a great deal from one another's research and friendship.

If the suffering reducers became antagonistic and provoked hatred, this would hurt our cause. We should encourage the space colonizers to think twice about the suffering that may result from what they're working toward, but we should not become hostile. Doing so could lead to even worse outcomes than when we started, and it would vitiate chances for a collaborative effort to reduce future suffering hand-in-hand. Let's remain peaceful and cooperative, regardless of how the conclusions of the above questions play out.

Positive bias?

I hope my suggestions in this post aren't biased by the way I'd like the analysis to come out. The argument seems potentially compelling, but I'm also unsettled. I think probably the devil will be in the details: In some cases, catastrophic-risk reduction will indeed have the primary effect of making colonization more humane, and in other cases, it will have a bigger (maybe primary) effect of increasing the probability of colonization. So this isn't a blanket endorsement of everything our pro-colonization friends do but more of an attempt to assess their average impact.

Darwinian forces

If tech progress continues, and if humanity doesn't create a singleton, then it seems likely that Darwinian forces beyond our control will outcompete us and ignore our values. In such a scenario, the relative humaneness vs. barbarity of the humans in that world would arguably matter very little, if at all. So in these branches of the possibilities tree, the suggestion that reducing non-extinction-level catastrophic risks would improve civilization's friendliness presumably doesn't go through.

This weakens the argument in this post somewhat, but only to the extent a singleton isn't likely. Bostrom thinks a singleton is more likely than not. I'm not sure what I think, but I agree that conditional on human survival, a singleton might be >25% likely, which is enough that these concerns about preventing nasty human societies remain quite important.

Alliances with colonization supporters (written 18 Mar 2013)

We who fear the possibly dreadful outcomes of space colonization still stand to gain a lot from allying with colonization supporters -- in terms of thinking about what scenarios might happen, developing outreach strategies, etc. We also want to remain friends because this means pro-colonization people will take our ideas more seriously. Even if space colonization happens, there will remain many sub-questions on which the negative-leaning utilitarians want to have a say: e.g., not spreading wildlife, not creating suffering simulations/subroutines, banning individuals from doing anything they want to their sims, dialing down hostility that could lead to warfare/torture, not creating lab universes, etc.

We want to make sure negative-leaning utilitarians don't become a despised group. For example, think about how eugenics is more taboo because of the Nazi atrocities than it would have been otherwise. Anti-technology people are sometimes smeared by association with the Unabomber. Animal supporters can be tarnished by the violent tactics of a few, or even by the silly antics of PETA. We need to be cautious about something similar happening for suffering reduction. Most people already care a lot about preventing suffering, and we don't want our descendants in the future to say, "Oh, you care about reducing suffering? What are you, one of those negative-leaning utilitarians?" where "negative-leaning utilitarians" has become such a bad name that it evokes automatic hatred. That said, we also need to stick up for our position to some extent or else we won't accomplish anything.

There's one flip side to alliances, though. If we're closely networked with colonization supporters, then when we bring people into our movement, those people may find the pro-colonization folks as well, and they may prefer them over us. I think the negative-leaning side is not more correct than the positive-leaning side in an absolute sense; we just have different intuitions. Therefore, we can't rely on rationality or "the truth" to ensure that people eventually see the light of negative-leaning utilitarianism. The positive-leaning side seems more popular than the negative-leaning side. (It's also cooler to work on "saving the world" and more hopeful to aim for utopian dreams than to think about dreary topics like wild-animal suffering or post-humans doing nasty things to each other.) So if we bring in 4 new people to effective altruism, maybe 3 of them end up on the positive side and only 1 on the negative side.

This creates a real puzzle. Is it problematic to promote general utilitarianism / effective altruism? Well, maybe not, because as I argued above, the work of extinction-risk reducers may be not so bad in practice. But that could change upon learning more, and until we learn more, it's a very risky proposition.
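To illustrate the puzzle numerically, here's a toy model with purely made-up values for how much good a recruit on each side does from a suffering-reduction standpoint:

```python
# Toy model of the movement-building puzzle, with made-up values.
def value_of_outreach(n_new=4, frac_negative=0.25, v_neg=1.0, v_pos=-0.1):
    """Expected suffering-reduction value of bringing n_new people into
    effective altruism, if a fraction frac_negative end up on the
    negative-leaning side (worth v_neg each) and the rest on the
    positive-leaning side (worth v_pos each, which could be negative)."""
    n_neg = n_new * frac_negative
    n_pos = n_new * (1 - frac_negative)
    return n_neg * v_neg + n_pos * v_pos

print(value_of_outreach(v_pos=-0.1))   # 1*1.0 + 3*(-0.1) =  0.7: outreach still net good
print(value_of_outreach(v_pos=-0.5))   # 1*1.0 + 3*(-0.5) = -0.5: the sign flips
```

Whether general outreach is good thus hinges on how harmful (or beneficial) the positive-leaning side's work turns out to be, which is exactly what remains to be learned.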

I think the next steps for us are to
  1. Think more about whether AGI / colonization would create more suffering than it prevents. (I think probably it would, but I'm not sure.)
  2. Think more about whether what "x risk" groups actually do is beneficial or harmful on balance.
  3. Consider whether we can shift people from the more extinction-focused x-risk work to the more "positive futures"-focused x-risk work.
  4. Think more about what we can do to make the best impact, which might include memes about wild-animal suffering and other dystopic futures, or more general promotion of a negative focus, or other things.
I would also be curious to learn more about what gives rise to different opinions on the negative-vs.-positive question.

Re: What should negative-leaning utilitarians make of x riskers?

Postby utilitymonster on 2013-02-24T16:01:00

I think you are correct that many things that space colonization advocates recommend would make people nicer, and I think you could say similar things about the recommendations of the utilitarians who focus on more proximate problems, such as global poverty.

I do have a question about a side point:
Utilitarians who take suffering very seriously worry that a future in which humans colonize space and create vast computational resources would be very bad, because it would entail high likelihood of simulating huge numbers of suffering experiences.

It is one thing to claim that these scenarios might come about, but it seems like another thing to say that they have high probability conditional on space colonization. As far as I can tell, this is not something that has been argued for, and I have the opposite view, at least for many precisifications of "huge". I'm not sure how much this matters given your normative perspective--you seem to have views which suggest that even a small chance of very bad outcomes makes it all bad--but I think we should keep this straight.

There are many reasons I can think of why these scenarios might not occur:
  • There might be an effective government in the future, and there might be laws against creating huge amounts of suffering. This could happen for various reasons.
  • Humans might create FAI, and it might not allow this.
  • As you say above, the people who care a lot about this might make trades with those who would create suffering in order to prevent the suffering.
  • Future people might just not have much use for creating entities that suffer for various reasons. It may not be very economically productive.
  • Almost all future resources might be used on entities that are totally unrelatable to us, and have no moral value or disvalue.
There are doubtless many other reasons that huge amounts of future suffering might not happen, which I can't think of at the moment.


Re: What should negative-leaning utilitarians make of x riskers?

Postby Brian Tomasik on 2013-02-25T01:15:00

Thanks, utilitymonster! I appreciate the feedback.

Your quibble with the wording is valid, so I modified the sentence. Yes, there can be a decent fraction of scenarios that don't entail huge suffering, though I probably think this fraction is lower than you do. The more important question is the expected value of suffering, and what I really mean is that the expected amount of suffering is huge. This is because many scenarios would involve mundanely large degrees of suffering -- e.g., spreading wildlife into the galaxy and into simulations, running conscious sims for science and fun, and the possibility of suffering subroutines or maybe lab universes. Then there are even worse scenarios in which savage ideologies, sadism, etc. take control. Given humanity's sympathy for environmentalism and willingness to risk suffering for enjoyment, I think it's decently likely there will be at least a lot of wild-animal and simulated suffering in the future.
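To illustrate the distinction between the probability of huge suffering and the expected amount of suffering, here's a toy scenario breakdown with invented probabilities and suffering levels (not real estimates):

```python
# Invented scenario weights to illustrate "probability of huge suffering"
# vs. "expected amount of suffering" (suffering in arbitrary units).
scenarios = {
    "no large-scale suffering":              (0.40, 0.0),
    "spread of wildlife / conscious sims":   (0.45, 1.0),
    "suffering subroutines, lab universes":  (0.10, 10.0),
    "savage ideologies, sadism, etc.":       (0.05, 100.0),
}

p_huge = sum(p for p, s in scenarios.values() if s >= 10)
expected_suffering = sum(p * s for p, s in scenarios.values())

print(p_huge)              # 0.15 -- the "huge" scenarios are a minority of the probability
print(expected_suffering)  # 6.45 -- yet they dominate the expected value
```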

Re: What should negative-leaning utilitarians make of x riskers?

Postby Brian Tomasik on 2013-03-24T08:07:00

A friend of mine makes an interesting point: While destabilization may have many bad consequences if colonization still happens afterward, it might also decrease the chance of colonization -- not on its own, but via its effects on further risks. For example, a biotech virus might not stop tech progress, but it might lead to a nuclear war that would, etc. Thus, there are ripple-effect probabilities of preventing colonization that need to be factored in when evaluating the ratio of colonization-preventing outcomes to non-colonization-preventing destabilization. This may further weaken the argument in my post.

Re: What should negative-leaning utilitarians make of x riskers?

Postby Humphrey Schneider on 2013-03-24T22:49:00

I think the safest bet is just to spread awareness of wild-animal suffering. We should try to convince x-riskers implicitly that it might be a speciesist, selfish act to conserve planet Earth merely in order to ensure the survival of humanity. Then we can start an open discussion on whether or not humanity has a moral duty to stay alive in order to reduce suffering throughout the universe. I don't know whether this reduces x-risk, but we might establish that ERR is only acceptable if humankind pledges to develop further for the sake of altruistic rescue missions that benefit suffering sentients all over the universe.
"The idea of a necessary evil is necessarily the root of all evil"


Re: What should negative-leaning utilitarians make of x riskers?

Postby Brian Tomasik on 2013-05-22T13:00:00

A friend of mine is writing a paper about the multiple ways in which humans could survive nuclear/asteroid/etc. winter by mining food from various sources. He suggests that humans could likely survive even bad winters (5+ years), which jibes with what I've heard elsewhere.

Anyway, the paper ends with a call for further investigation into how to convert various energy sources to food. It's interesting that such research could be one of the worst ERR approaches because the argument about "catastrophic risks usually causing sub-extinction-level chaos" doesn't apply here. When food supplies are low, the catastrophe has already happened, and these technologies only allow humans to survive those disasters.

Because starving through nuclear winter seems so unlikely even now, I don't anticipate this is a huge problem, but maybe other ideas like it are worse: e.g., building bunkers, seed banks, and space colonies as insurance against disasters. These don't reduce catastrophes and so don't prevent social dislocation; they just help ensure that humans survive the dislocation.

