A few dystopic future scenarios


A few dystopic future scenarios

Postby Brian Tomasik on 2011-12-13T08:50:00

Summary, written 7 Dec 2012:

A growing number of people believe that reducing the risk of human extinction is the single most cost-effective undertaking for those who want to do good in the world. I fear that if humans survive, the future will likely not be as rosy as is often presumed. I enumerate a few suggestive (not exhaustive) bad scenarios that might result both from human-inspired AIs and from "unfriendly" AIs that might outcompete human values. I conclude that rather than trying to increase the odds that Earth-originating AI arises at all (which could have negative expected value), we might do better to improve the odds that such AI is of the type we want. In particular, for some of us, the best thing we can do may be to shape the values of society so that an AI which develops a few decades or centuries from now will show more concern for the suffering of non-human animals and artificial sentients whose feelings are usually ignored.

(See "Rebuttal by Carl Shulman" at the bottom of this post.)

Introduction, written 6 Dec 2012:

Advocates for reducing extinction risk sometimes assume -- and perhaps even take for granted -- that if humanity doesn't go extinct (due to nanotech, biological warfare, or paperclipping), then human values will control the future. No, actually, conditional on humans surviving, the most likely scenario is that we will be outcompeted by Darwinian forces beyond our control. These forces might not just turn the galaxy into nonsentient paperclips; they might also run sentient simulations, employ suffering subroutines, engage in warfare, and perform other dastardly deeds as defined and described below. Of course, humans might do these things as well, but at least with humans, people presume that human values will be humane, even though this may not be the case when it comes to human attitudes toward wild animals or non-human-like minds.

So when we reduce asteroid or nanotech risk, the dominant effect we're having is to increase the chance that Darwinian-forces-beyond-our-control take over the galaxy. Then there's some smaller probability that actual human values (the good, the bad, and the ugly) will triumph. I wish more people gung-ho about reducing extinction risk realized this.

Now, there is a segment of extinction-risk folks who believe that what I said above is not a concern, because sufficiently advanced superintelligences will discover the moral truth and hence do the right things. There are two problems with this. First, Occam's razor militates against the existence of a moral truth (whatever that's supposed to mean). Second, even if such moral truth existed, why should a superintelligence care about it? There are plenty of brilliant people on Earth today who eat meat. They know perfectly well the suffering that it causes, but their motivational systems aren't sufficiently engaged by the harm they're doing to farm animals. The same can be true for superintelligences. Indeed, arbitrary intelligences in mind-space needn't have even the slightest inklings of empathy for the suffering that sentients experience.

In conclusion: Let's think more carefully about what we're doing when we reduce extinction risk, and let's worry more about these possibilities. Rather than increasing the odds that some superintelligence comes from Earth, let's increase the odds that, if there is a superintelligence, it doesn't do things we would abhor.

The scenarios, written 13 Dec 2011

Robert Wiblin has asked for descriptions of some example future scenarios that involve lots of suffering. Below I sketch a few possibilities. I don't claim these occupy the bulk of probability mass, but they can serve to jump-start the imagination. What else would you add to the list?

Spread of wild-animal life. Humans colonize other planets, spreading animal life via terraforming. Some humans use their resources to seed life throughout the galaxy. Since I would guess that most sentient organisms never become superintelligent, these newly seeded regions will contain vast numbers of planets full of Darwinian agony.

Sentient simulations. Given astronomical computing power, post-humans run ancestor simulations (including torture chambers, death camps, and psychological illnesses endured by billions of people). Moreover, scientists run even larger numbers of simulations of organisms-that-might-have-been, exploring the space of minds. They simulate trillions upon trillions of reinforcement learners, like the RL mouse, except that these learners are sufficiently self-aware as to feel the terror of being eaten by the cat.

Suffering subroutines. This one is from Carl Shulman. It could be that certain algorithms (say, simple reinforcement learners) are very useful in performing complex machine-learning computations that need to be run at massive scale by advanced AI. These subroutines might become sufficiently similar to the pain programs in our own brains that they actually suffer. But profit and power take precedence over pity, so these subroutines are used widely throughout the AI's Matrioshka brains. (Carl adds that this situation "could be averted in noncompetitive scenarios out of humane motivation.")
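
To make the "simple reinforcement learner" idea concrete, here is a minimal toy sketch in Python (purely illustrative; the world, rewards, and parameters are all made up and not anything Carl or I have specifically in mind): an agent whose behavior is shaped entirely by a scalar reward signal, including a large negative signal when it's caught by "the cat." Whether running vast numbers of computations of roughly this character, at far greater levels of sophistication and self-awareness, would amount to actual suffering is exactly the open question.

Code:

import random

random.seed(0)

# Hypothetical toy world: five cells in a row. Cell 0 is "the cat" (large
# negative reward), cell 4 is "cheese" (small positive reward). Both end
# the episode.
REWARDS = {0: -10.0, 4: 1.0}
ACTIONS = [-1, 1]  # step left or step right

# Tabular Q-values, one per (state, action) pair
q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    next_state = min(4, max(0, state + action))
    return next_state, REWARDS.get(next_state, 0.0)

for episode in range(500):
    state = 2  # start in the middle
    for t in range(20):
        # epsilon-greedy choice between exploring and exploiting
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward = step(state, action)
        # Temporal-difference update: this scalar error signal (often
        # negative) is the only thing shaping the agent's behavior.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
        if state in REWARDS:
            break

# Actions that walk toward the cat end up with strongly negative values;
# actions that head toward the cheese end up positive.
print({(s, a): round(v, 2) for (s, a), v in q.items()})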

Ways forward, written 5 Dec 2012

If indeed the most likely outcome of human survival is to create forces with values alien to ours, having the potential to cause astronomical amounts of suffering, then it may actually be a bad thing to reduce extinction risk. At the very least, reducing extinction risk is less likely to be an optimal use of our resources. What should we do instead?

One option, as suggested by Bostrom's "The Future of Human Evolution," is to work on creating a global singleton to rein in Darwinian competition. Obviously this would be a worldwide undertaking requiring enormous effort, but perhaps there would be high leverage in doing preliminary research, raising interest in the topic, and kicking off the movement.

Doing so would make it more likely that humans, rather than minds alien to humans, control the future. But would this be an improvement? It's hard to say. While unfriendly superintelligences would be unlikely to show remorse when running suffering simulations for instrumental purposes, it's also possible that humans would run more total suffering simulations. The only reasons for unfriendly AIs to simulate nature, say, are to learn about science and maybe to explore the space of minds that have evolved in the universe. In contrast, humans might simulate nature for aesthetic reasons, as ancestor simulations, etc. in addition to the scientific and game-theoretic reasons that unfriendly AIs would have. In general, humans are more likely to simulate minds similar to their own, which means more total suffering. Simulating paperclips doesn't hurt anyone, but simulating cavemen (and cavemen prey) does.

So it's not totally obvious that increasing human control over the future is a good thing either, though the topic deserves further study. The way forward that I currently prefer (subject to change upon learning more) is to work on improving the values of human civilization, so that if human-shaped AI does control the future, it will act just a little bit more humanely. This means there's value in promoting sympathy for the suffering of others and reducing sadistic tendencies. There's also value in reducing status-quo bias and promoting total hedonistic utilitarianism. Two specific cases of value shifts that I think have high leverage are (1) spreading concern for wild-animal suffering and (2) ensuring that future humans give due concern to suffering subroutines and other artificial sentients that might not normally arouse moral sympathy because they don't look or act like humans. Item (2) is antispeciesism in its broadest application. Right now I'm working with friends to create a charity focused on item (1). In a few years, it's possible I'll also focus on item (2), or perhaps another high-leverage idea that comes along.

In his original paper on existential risk, Bostrom includes risks not just about literal human extinction, but also risks that would "permanently and drastically curtail" the good that could come from Earth-originating life. Thus, my goal is also to reduce existential risk, but not by reducing extinction risk -- instead by working to make it so that if human values do control the galaxy, there will be fewer wild animals, subroutines, and other simulated minds enduring experiences that would make us shiver with fear were we to undergo them.

Rebuttal by Carl Shulman, written 8 Dec 2012:

Carl wrote a thorough response to this piece in a later comment.

Brian's response, written 8 Dec 2012:

Brian wrote a reply to Carl. It included the following conclusion paragraphs.

Most of Carl's points don't affect the way negative utilitarians or negative-leaning utilitarians view the issue. I'm personally a negative-leaning utilitarian, which means I have a high exchange rate between pain and pleasure. It would take thousands of years of happy life to convince me to agree to 1 minute of burning at the stake. But the future will not be this asymmetric. Even if the expected amount of pleasure in the future exceeds the expected amount of suffering, the two quantities will be pretty close, probably within a few orders of magnitude of each other. I'm not suggesting the actual amounts of pleasure and suffering will necessarily be within a few orders of magnitude but that, given what we know now, the expected values probably are. It could easily be the case that there's way more suffering than pleasure in the future.

If you don't mind burning at the stake as much as I do, then your prospects for the future will be somewhat more sanguine on account of Carl's comments. But even if the future is net positive in expectation for these kinds of utilitarians (and I'm not sure that it is, but my probability has increased in light of Carl's reply), it may still be better to work on shaping the future rather than increasing the likelihood that there is a future. Targeted interventions to change society in ways that will lead to better policies and values could be more cost-effective than increasing the odds of a future-of-some-sort that might be good but might be bad.

As for negative-leaning utilitarians, our only option is to shape the future, so that's what I'm going to continue doing.


Why a post-human civilization is likely to cause net suffering, written 24 Mar 2013:

If I had to make an estimate now, I would give ~75% probability that space colonization will cause more suffering than it reduces. A friend asked me to explain the components, so here goes.

Consider how space colonization could plausibly reduce suffering. For most of those mechanisms, it seems at least as likely that they will increase suffering. The following sections parallel those above.

Spread of wild-animal life

David Pearce coined the phrase "cosmic rescue missions" in referring to the possibility of sending probes to other planets to alleviate the wild extraterrestrial (ET) suffering they contain. This is a nice idea, but there are a few problems.
  • We haven't found any ETs yet, so it's not obvious there are vast numbers of them waiting to be saved from Darwinian misery.
  • The specific kind of conscious suffering known to Earth-bound animal life may be rare. Most likely ETs would be bacteria, plants, etc., and even if they're intelligent, they might be intelligent in the way robots are without having emotions of the sort that we care about.
  • Space travel is slow and difficult.
  • It's unclear whether humanity would support such missions. Environmentalists would ask us to leave ET habitats alone. Others wouldn't want to spend the resources to do this unless they planned to mine resources from those planets in a colonization wave.
Contrast this with the possibilities for spreading wild-animal suffering:
  • We could spread life to many planets (e.g., Mars via terraforming, other Earth-like planets via directed panspermia). The number of planets that can support life may be appreciably bigger than the number that already have it. (See the discussion of f_l in the Drake equation.)
  • We already know that Earth-bound life is sentient, which we can't say of ETs.
  • Spreading biological life is slow and difficult like rescuing it, but dispersing small life-producing capsules is easier than dispatching Hedonistic Imperative probes or berserker probes.
  • Fortunately, humans might not support spread of life that much, though some do. For terraforming, there are obvious survival pressures to do it in the near term, but probably directed panspermia is a bigger problem in the long term, and that seems more of a hobbyist enterprise.
Sentient simulations

It may be that biological suffering is a drop in the bucket compared with digital suffering. Maybe there are ETs running sims of nature for science / amusement, or of minds in general for psychological, evolutionary, etc. reasons. Maybe we could trade with them to make sure they don't cause unnecessary suffering to their sims. If empathy is an accident of human evolution, then humans are more likely empathetic than a random ET civilization, so it's possible that there would be room for improvement through this type of trade.

Of course, post-humans themselves might run the same kinds of sims. What's worse: The sims that post-humans run would be much more likely to be sentient than those run by random ETs because post-humans would have a tendency to simulate things closer to themselves in mind-space. They might run ancestor sims for fun, nature sims for aesthetic appreciation, lab sims for science experiments, pet sims for pets. Sadists might run tortured sims. In paperclip-maximizer world, sadists may run sims of paperclips getting destroyed, but that's not a concern to me.

Finally, we don't know if there even are aliens out there to trade with on suffering reduction. We do, however, know that post-humans would likely run such sims if they colonize space.

Suffering subroutines

A similar comparison applies here: humans are likely more empathetic than the average ET civilization, but they're also more likely to run these kinds of computations in the first place. Maybe the increased likelihood of humans running suffering subroutines is smaller than that of them running sentient simulations, because suffering subroutines are accidental. Still, the point remains that we don't know if there are ETs to trade with.

What about paperclippers?

Above I was largely assuming a human-oriented civilization with values that we recognize. But what if, as seems mildly likely, human colonization accidentally takes the form of a paperclip maximizer? Wouldn't that be a good thing because it would eliminate wild ET suffering as the paperclipper spread throughout the galaxy, without causing any additional suffering?

Maybe, but if the paperclip maximizer is actually generally intelligent, then it won't stop at tiling the solar system with paperclips. It will have the basic AI drives and will want to do science, learn about other minds via simulations, engage in conflict, possibly run suffering subroutines, etc. It's not obvious whether a paperclipper is better or worse than a "friendly AI."

Evidential/timeless decision theory

We've seen that the main way in which human space colonization could plausibly reduce more suffering than it creates would be if it allowed us to prevent ETs from doing things we don't like. However, if you're an evidential or timeless decision theorist, an additional mechanism by which we might affect ETs' choices is through our own choices. If our minds work in similar enough ways to ETs', then if we choose not to colonize, that makes it more likely / timelessly causes them also not to colonize, which means that they won't cause astronomical suffering either. (See, for instance, pp. 14-15 of Paul Almond's article on evidential decision theory.)

It's also true that if we would have done net good by policing rogue ETs, then our mind-kin might have also done net good in that way, in which case failing to colonize would be unfortunate. But while many ETs may be similar to us in failing to colonize space, fewer would probably be similar to us to the level of detail of colonizing space and carrying a big stick with respect to galactic suffering. So it seems plausible that the evidential/timeless considerations asymmetrically amplify the possible badness of colonization more than its possible goodness.

Black swans

It seems pretty likely to me that suffering in the future will be dominated by something totally unexpected. This could be a new discovery in physics, neuroscience, or even philosophy more generally. Some make the argument that because we know so very little now, it's better for humans to stick around for the option value: If they later realize it's bad to spread, they can stop, but if they realize they should spread, they can proceed and reduce suffering in some novel way that we haven't anticipated.

Of course, the problem with the "option value" argument is that it assumes future humans do the right thing, when in fact, based on examples of speculations we can imagine now, it seems future humans would probably do the wrong thing most of the time. For instance, faced with a new discovery of obscene amounts of computing power somewhere, most humans would use it to run oodles more minds, some nontrivial fraction of which might suffer terribly. In general, most sources of immense power are double-edged swords that can create more happiness and more suffering, and the typical human impulse to promote life/consciousness rather than to remove them suggests that negative and negative-leaning utilitarians are on the losing side.

Why not wait a little longer just to be sure that a superintelligent post-human civilization is net bad in expected value? Certainly we should research the question in greater depth, but we also can't delay acting upon what we know now, because within a few decades, our actions might come too late. Tempering enthusiasm for a technological future needs to come soon or else potentially never.

Re: A few dystopic future scenarios

Postby Gedusa on 2011-12-13T09:50:00

In the "Sentient Simulations" category, you've missed out superintelligent AI's simulating lot's of beings which suffer in order to predict the future well. You've also missed the possibility that humans already simulate conscious beings when they want to predict someone's behavior - which I find pretty terrifying, given how many humans there are and how many humans really dislike other humans and daydream about causing them suffering.

A gem I came across only recently was Robin Hanson's brief discussion on "Conditional Morality":
Our evolved moral intuitions are context dependent. We are built to be nicer to each other when times are good, to invest in an attractive reputation. We are also built to form alliances with some in order to counter threats by others; the further in social distance are the threats we perceive, the wider a circle of allies we collect in response. Since we are now richer and have interactions with more distant others, we are nicer to a wider range of allies....

These theories make different predictions about futures where we become poorer and our interactions become more local... the conditional morality theory predicts that the social circle to whom we are nice would narrow to the range of our ancestors with similar poverty and interaction locality.

So if Hanson's singularity comes true, then we might expect humans to care less about other people -- because caring may effectively be something that only appears when people are very wealthy. Indeed, we might expect this in any scenario where per capita wealth drops.

And I think that those scenarios are terrifying, and I'd really, really like to see the guys over at FHI/SI do some research on how likely they are to happen; if the risk is high, a big extinction taking out the biosphere as well as us is in order.

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2011-12-13T10:08:00

Thanks, Gedusa! I'm (not) glad to have extra bad outcomes added to the list. :)

I don't worry about creating conscious minds when I predict others' behavior, because it seems as though the feelings of those minds loop back onto my own feelings (which is what gives rise to my empathy, etc.). But it is a thought worth exploring. In principle there's no reason this looping-back of emotions should happen, so AIs might very well do away with it to avoid bogging themselves down with mercy for the suffering of others.

Nice quote about conditional morality. What's more, it's plausible that whatever force takes over our world will have no morality whatsoever. Or it might have something it considers "morality" but that we find evil (e.g., Nazi torture or religious punishment).

Re: A few dystopic future scenarios

Postby DanielLC on 2011-12-13T20:00:00

You've also missed the possibility that humans already simulate conscious beings when they want to predict someone's behavior


"Please don't wake up. I don't want to die"

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2011-12-14T03:02:00

Sentient interstellar replicators. This sits somewhere between spreading wildlife and sentient AI subroutines. A wildlife-like Darwinian process doesn't need to be restricted to biological animals inhabiting planets. Imagine the launch of an artificial interstellar self-replicating probe that uses cosmic resources to copy itself with variation. This could lead to very different phenotypes with parasitic, predatory, pioneering, etc. survival strategies. Such a replication process, once launched, could even spread to other galaxies.

If the decision-making algorithms of these entities are partially driven by suffering-like error signals, they could have experiences of negative hedonic value. The scope could be very vast; hedonistic quality control could be problematic due to the openly Darwinian nature of the ecosystems (low energy = starvation signal, integrity violation = pain signal, possible threat detection = fear signal, etc.). The difference from sentient subroutines is that these entities don't need to be part of a generalized super-AI; they could be individualistic, bound to individual physical phenotypes, comparatively simple, and competitive in a Darwinian sense.

[Hypothetically, the first probes could contain non-mutation strategies to prevent a Darwinian process. Hypothetically, they could operate on gradients of bliss if feasible. Hypothetically, they could be the substrate of choice for a utilitronium shockwave.]
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2011-12-14T03:22:00

Hedonic Treader wrote:The scope could be very vast; hedonistic quality control could be problematic due to the openly Darwinian nature of the ecosystems

Fascinating. Yes indeed.

Hedonic Treader wrote:The difference from sentient subroutines is that these entities don't need to be part of a generalized super-AI; they could be individualistic, bound to individual physical phenotypes, comparatively simple, and competitive in a Darwinian sense.

Good point. I wonder whether these are more or less common across the multiverse than are suffering subroutines.

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2012-04-09T02:39:00

Personal sadism and local power concentration. In our current world, people insist on their private spheres free from surveillance. This extends at least to homes and personal computers in free societies. However, at the same time, we allow these people to have children, "own" pets, and program arbitrary algorithms within these unsupervised private realms. Predictably, despite being illegal, children are abused, pets are tortured, and computer algorithms... well, hopefully they're not sentient yet. People assure each other both the right to such privacy and the right to have near absolute power over other sentient beings in these spheres, even though the abuse is illegal if detected. It is possible that a similar power distribution principle will be translated to a posthuman future, where individual entities with local absolute power assure each other freedom from meddling, while powerless third parties are affected within each private locale. (It may even be a majority of sentients who find themselves in the powerless category.) One might hope that abuse for instrumental purposes (like political torture) will be obsolete, but not all abuse is instrumental; current humans can derive great pleasure from hurting others. Sadism is a huge part of the human condition, sexual and otherwise, and it seems plausible that these torturous as well as privacy-seeking psychological tendencies may find a translation into posthuman nature. Due to the large scale of such a civilization, this could become a huge numerical problem if not addressed properly.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2012-04-09T06:25:00

Personal sadism is one of my worries as well. Homo sapiens' enjoyment systems can be pretty messed up. Just search for {torture sims} and see what horrifying things we humans have fun doing. (There are too many examples to mention here. :? )

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2012-06-02T08:13:00

The below is copied from a Facebook discussion. I thought I'd include it here as well to keep everything in the same place.

-------
Even if things go roughly according to the normal state of affairs that we see now, the outcome could be bad if humans who don't share our utilitarian values want to spread nature into the cosmos. Of course *we* would prefer that the nature not have lots of suffering, but not everyone feels this way. (Ned Hettinger: "Respecting nature means respecting the ways in which nature trades values, and such respect includes painful killings for the purpose of life support.")

Even though factory farming will one day be abolished (except in ancestor simulations?), there may be other forms of enslavement or brutal treatment that are driven by economics. For example, suppose that a certain form of negative reinforcement learning proved especially useful for computational purposes, and this learning process was sufficiently sophisticated that we regarded it as suffering. Would post-humans care about it enough to use other, more expensive algorithms?

"as people get richer - which most economists prognose as the most probable for the world at large"
But some like Robin Hanson argue that in the far future, uploads will almost certainly expand their populations until they once again hit subsistence levels. Granted, Hanson himself is not pessimistic about this, but I'm not sure we can be confident about his sanguine attitude. For example, what if skimping on pleasure is cheaper?

The worst possible outcomes would likely result if things spiral out of our control. The future is very likely to be determined by Darwinism as much as the past has been, and it's quite plausible that everything humans value will be wiped out by agents that can out-compete us. They needn't care about being humane to their reinforcement-learning algorithms, or even to each other (cf. what many animals do to their rivals in the wild). Maybe wars would break out. Maybe one group would seize control and rule the universe by force.

Empathy is not universal among animals -- many animals don't show sympathy to non-relatives of the same species, and practically no predators feel bad about eating their prey. [Edited: AIs may have game-theoretic reasons to cooperate with other comparably powerful AIs, but empathy for the powerless (e.g., a suffering minnow in Nigeria) seems maladaptive in the long run unless social pressures preserve this stance as a fitness-enhancing trait.]

These later scenarios that I've been painting would certainly fall into the category of "existential risk" by Bostrom's definition -- they are bad outcomes that we wish to avoid. However, the risk of these possibilities is actually increased when we reduce "extinction risk," because they can only happen if humans survive long enough to develop strong AI. If the probability of one of these outcomes given survival is p, then for every 1% by which we reduce the risk of human extinction in other ways, we increase the risk of these outcomes by p * 1%.
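
(To make the arithmetic concrete with purely made-up numbers: if p = 0.3 and some intervention raises the probability of long-term survival from 50% to 51%, then the probability of one of these outcomes rises from 0.3 * 50% = 15% to 0.3 * 51% = 15.3%, i.e., by 0.3 percentage points, which is p * 1%.)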

Re: A few dystopic future scenarios

Postby DanielLC on 2012-06-03T00:35:00

The future is very likely to be determined by Darwinism as much as the past has been, and it's quite plausible that everything humans value will be wiped out by agents that can out-compete us. They needn't care about being humane to their reinforcement-learning algorithms, or even to each other (cf. what many animals do to their rivals in the wild)


We could just use the more effective algorithms until we out-compete them, then use the more ethical ones. Of course, that assumes there are utilitarians at the helm.

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2012-06-03T09:56:00

DanielLC wrote:We could just use the more effective algorithms until we out-compete them, then use the more ethical ones. Of course, that assumes there are utilitarians at the helm.

Well, that assumes there is a helm. With Darwinism, there usually isn't one. If replicators freely propagate through space without a common non-mutation algorithm, there may only ever be very local helms. And how much suffering are utilitarians willing to create to out-compete what they consider non-utilitarian rivals? If they are forced to callously play this efficiency race perpetually, then what difference does it make that they consider themselves utilitarians?
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2012-06-03T10:05:00

Alan wrote:In humans, sympathy seems to result when "intentional stance" predictive systems bleed over into "mirror neuron" motivational systems, which causes us to feel sorry for others. An AI designed from scratch could likely overcome this configuration and clearly separate the two functions of "other-mind prediction" vs. "self rewards/punishments."

Applies to humans too. As soon as humans can self-modify, involuntary empathy may be on the list of things to go. But it's not a strong prediction since it's not clear that people would like to self-modify this way, and others might trust and/or like them less. On the flip side, involuntary suffering can probably be edited out as well. It's not clear to me in what direction the utility distribution would go once mind re-design becomes feasible.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2012-06-03T12:02:00

Hedonic Treader wrote:On the flip side, involuntary suffering can probably be edited out as well. It's not clear to me in what direction the utility distribution would go once mind re-design becomes feasible.

Or, at least, unintentional suffering can be edited out. There will always remain the risk of intentional torture, for purposes of threats/warfare, or (hopefully less often) for sadistic entertainment. :?

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2012-06-03T12:19:00

Mutually assured supertorture could work similarly to mutually assured destruction, but I think there's a crucial difference: No agent or faction can ignore its own destruction while maintaining other goals. But an agent or faction, especially after mind redesign, could credibly commit to ignore all threats that are solely based on suffering. They could commit to punish such threats, but never give in to them. And they could prove that this is not just show, but actually how their minds / decision algorithms work. Torture as a threat device works only with beings who are afraid of suffering or who otherwise value suffering negatively.

That makes for an ambiguous prediction: It will have game-theoretic value to credibly commit to not be afraid or care about suffering. This may prevent torture-as-warfare scenarios completely. But on the other hand, actually not caring about potential suffering may increase expected suffering in its own right, since it may not be factored into other decisions anymore, or at least not to the same degree.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2012-06-03T13:03:00

Heh, this is tricky business.

It's important to emphasize that "not responding to torture" is not necessarily the same as "not caring about torture." Wanting and liking are different things, so you could probably in principle not want to stop pain that you really dislike. We hedonistic utilitarians care about the (dis)liking, so this kind of torture is still very bad. The hope, as you say, is just that by adopting this kind of pre-commitment, there won't be incentive to do much torture in the first place.

You're also correct that it's tricky to demonstrate that you don't care about reducing suffering while at the same time actually caring about reducing suffering. For agents with transparent source code, is this literally impossible? More generally, as soon as a goal-directed agent starts to care about reducing X, that agent can be manipulated by threats to produce X. (X could be suffering, paperclips, cheesecake, etc.)

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2012-07-15T02:00:00

In this blog post, Carl Shulman introduces dolorium and hedonium as concepts and makes two assumptions relating them to the future. Hedonium (H) is resource use optimized for pleasant experience, such as wireheading; dolorium (D) is resource use optimized for unpleasant experience.

One assumption is that

hedonistic utilitarians could approximate the net pleasure generated in our galaxy by colonization as the expected production of hedonium, multiplied by the "hedons per joule" or "hedons per computation" of hedonium (call this H), minus the expected production of dolorium, multiplied by "dolors per joule" or "dolors per computation" (call this D).


In other words, what really matters in the future is H-D; the rest of sentient life has a comparatively small impact on the utility total because of its different optimization focus.

The other assumption is an optimistic one: Since H and D aren't constrained by fitness considerations, the current finding that bad is stronger than good in darwinian life doesn't have to apply, and we can instead assume symmetry. Furthermore, we can assume a surplus of H over D under realistic assumptions:

Even quite weak benevolence, or the personal hedonism of some agents transforming into or forking off hedonium could suffice for this purpose.


So the future would look good for hedonistic utilitarianism.

I think the assumption of symmetry on the grounds that H and D aren't constrained by fitness considerations is a valid one, but it may reduce our expectation value of both H and D in any scenario in which resource use is mostly driven by Darwinian algorithms. Assume a space colonization event resulting in an open evolution of cultures, technologies, space-faring technological and biological phenotypes, etc. How many of them will produce either H or D? Wireheading temptations can locally generate H, and game-theoretic considerations can result in D (threats of supertorture as an extortion instrument). But assuming a relatively low level of global coordination, both H and D will probably only exist in small quantities: There will be ordinary selection effects against wireheads; Darwinism favors reproduction optimizers instead.

Furthermore, the expectation values of H and D seem to be linked: In scenarios in which a high quantity of H can be expected, high quantities of D are also more probable, and vice versa. Assume a scenario in which powerful factions have explicit hedonistic goals and want to produce H. Those are exactly the kinds of scenarios in which we would see rivals credibly threatening to produce large quantities of D in order to extort resource shares for their own fitness from the hedonistic factions. Conversely, if D has no practical use because no one powerful enough will care about it, H is also much less likely because the powerful factions all care about other things than hedonism (probably just survival and reproduction of their idiosyncratic patterns).

If the expectation values of H and D are roughly linked, and open colonization and evolution cause strong selection effects against using resources on H and D, H-D may not dominate the expected utility of a big future after all.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby DanielLC on 2012-07-15T23:28:00

With Darwinism, there usually isn't one. If replicators freely propagate through space without a common non-mutation algorithm, there may only ever be very local helms.


Use a common non-mutation algorithm. Once it's sufficiently outcompeted everything else, set it to make happy beings. When a new threat appears, the utilitarian beings will be able to easily overpower it through numbers, or through a relatively small number of unhappy soldiers.

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2012-07-16T02:58:00

Thanks, HedonicTreader! I really do like this argument, even if I'm not sure whether I agree with it. In particular, I'm not sure whether H does in fact equal D. I'm also not sure if I care more about D even if H == D.

Hedonic Treader wrote:In other words, what really matters in the future is H-D

Or, rather, H*(amount of happiness at level H) - D*(amount of suffering at level D).
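
(Spelling that out with the definitions from Carl's post: expected net pleasure is roughly (expected amount of hedonium produced) * H - (expected amount of dolorium produced) * D, where H and D are the hedons and dolors per joule, or per computation, of optimized hedonium and dolorium.)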

Hedonic Treader wrote:bad is stronger than good

What a great paper -- thanks!

Hedonic Treader wrote:There will be ordinary selection effects against wireheads; Darwinism favors reproduction optimizers instead.

Yes. Quite sad.

Hedonic Treader wrote:Furthermore, the expectation values of H and D seem to be linked

Another obvious reason for the connection is that you need to know how to create extreme happiness/suffering, and that would take quite a bit of work to figure out.

Hedonic Treader wrote:If the expectation values of H and D are roughly linked, and open colonization and evolution cause strong selection effects against using resources on H and D, H-D may not dominate the expected utility of a big future after all.

Yes, could be. It's very hard to say, but at the same time, there aren't many questions more important than this one.

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2012-07-16T07:39:00

Brian Tomasik wrote:In particular, I'm not sure whether H does in fact equal D.

I think the argument from symmetry is not a bad one. Of course, this doesn't make the hypothesis certain, just plausible. The evolved intensity asymmetry (bad is stronger than good) may have specific fitness-related functions. David Pearce kind of suggested that it may even be just an accident, that evolution could have stumbled into a different solution (gradients of bliss) and that we can shift into that through hedonic enhancement without leaving the Darwinian paradigm. (He doesn't actually express it like this, but I think it's the gist of the abolitionist project). I'm not sure how probable that is, given the apparent robustness of the asymmetry in evolutionary history. Then again, that robustness may be a sign of a local optimum, and a complete redesign could get us to a new one.

I'm also not sure if I care more about D even if H == D.

I came around to caring about both equally. I think most of the intuition that D matters more comes from our experience of the asymmetry, which would not apply to H and D by hypothesis. Another part is a feeling of injustice, or specific compassion for the worst-case perspectives, which are delocalized from the other perspectives, including the high-pleasure ones. There is no real objection to that, but I found that I still bite the bullet. I wouldn't apply a pain-avoidance premium to my own life, given equal intensities and qualities of pleasure and pain. Since my personal egoistic policy and my utilitarianism should collapse in the special case of solipsism, it would be logically inconsistent to apply a value asymmetry to utilitarianism that I would not accept for my egoism. (I would not want to waste a pleasure surplus.)

Or, rather, H*(amount of happiness at level H) - D*(amount of suffering at level D).

Yes, that's what I meant to express. Thanks for the correction.

Hedonic Treader wrote:There will be ordinary selection effects against wireheads; Darwinism favors reproduction optimizers instead.

Yes. Quite sad.

Thankfully, there will also be selection effects against suffering maximizers. We kind of take it for granted, but maladaptive sadists fare as poorly under Darwinism as wireheads do. This is a huge advantage.

Another obvious reason for the connection is that you need to know how to create extreme happiness/suffering, and that would take quite a bit of work to figure out.

Yes. The knowledge needed to create H also makes the feasibility of D more probable, and vice versa. The big question is whether there is an asymmetry in the likelihood of this knowledge being used on H vs. D. Plausible motivations for H are obvious; plausible motivations for D may be blackmail, out-group hatred, or sadism. I think that if the blackmail function were out of the picture, the expected quantity of H could be higher.

DanielLC wrote:Use a common non-mutation algorithm. Once it's sufficiently outcompeted everything else, set it to make happy beings. When a new threat appears, the utilitarian beings will be able to easily overpower it through numbers, or through a relatively small number of unhappy soldiers.

Hm, yes, even though I would suspect that the most resource-efficient forms of hedonium would not provide a strength in numbers on any useful metric.

I think your suggested strategy could make sense in a race for a stable singleton that will dominate the local universe forever, or if there are local sub-clusters to be conquered, bought or colonized, and if these clusters can be defended more easily than conquered once in possession. Such clusters could be physical (e.g. star systems that can be flung into isolation or otherwise guarded efficiently). Or virtual, if there is some kind of superstructure that rigidly enforces property rights or resource claims through agreements or legal rule systems no one has sufficient incentive or power to break.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2012-07-16T14:12:00

I'm continually impressed by your insights on these matters, Hedonic Treader. Thanks for the great discussion!

Hedonic Treader wrote:The evolved intensity asymmetry (bad is stronger than good) may have specific fitness-related functions.

Yes, that seems quite possible, and symmetry is somewhat compelling theoretically. However, the fact that we have one data point on the negative side makes our posterior probabilities slightly asymmetric: P(D > H) > P(H > D), even if it's not a big difference.

Hedonic Treader wrote:I came around to caring about both equally. I think most of the intuition that D matters more comes from our experience of the asymmetry, which would not apply to H and D by hypothesis. Another part is a feeling of injustice, or specific compassion for the worst-case perspectives, which are delocalized from the other perspectives, including the high-pleasure ones. There is no real objection to that, but I found that I still bite the bullet.

I know what you mean. I'm inclined to bite the bullet some of the time, but at other times I refuse. It can depend quite a bit on my mood at the time. :)

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2012-07-17T05:12:00

Brian Tomasik wrote:I'm continually impressed by your insights on these matters, Hedonic Treader. Thanks for the great discussion!

Thanks! I find my thoughts circle back to this topic repeatedly because of the high stakes involved.

However, the fact that we have one data point on the negative side makes our posterior probabilities slightly asymmetric: P(D > H) > P(H > D), even if it's not a big difference.

Yes, unfortunately this seems to be the case.

I know what you mean. I'm inclined to bite the bullet some of the time, but at other times I refuse. It can depend quite a bit on my mood at the time. :)

I know the phenomenon quite well. Rationally speaking, such value judgments shouldn't shift with, say, current blood sugar levels, but they often do. There's evidence that this even affects judges, such that the length of prison sentences and the probability of probation correlate with it. The problem with values that change often is that we end up playing games against our own past and future selves, which is inefficient.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby DanielLC on 2012-07-17T19:50:00

I'm not sure bad is stronger than good. I think good things happen more often, but bad things are more intense. I suspect that it adds up to around zero. But even if that's the case, since the pain of death is bad and has no long-term psychological effects (for the simple reason that you won't be around to have them), the total would still come out net bad.

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2012-07-19T10:35:00

DanielLC wrote:I'm not sure bad is stronger than good. I think good things happen more often, but bad things are more intense.

Right, this may be true in terms of overall hedonics for organisms in the world, but the question we're asking here is what the maximal possible intensity per unit time is. In artificial hedonium/dolorium, it's this intensity that will be simulated nonstop.

Re: A few dystopic future scenarios

Postby DanielLC on 2012-07-19T18:57:00

I don't think we're anywhere near the maximal intensity. We feel what we feel as intense as we do because it's the intensity that maximizes genetic fitness. I guess there's slight evidence that it's easier to make something suffer, but that would be countered by a slightly higher probability of creating hedonium.

Re: A few dystopic future scenarios

Postby Pablo Stafforini on 2012-08-19T21:38:00

Hedonic Treader wrote:David Pearce kind of suggested that it may even be just an accident, that evolution could have stumbled into a different solution (gradients of bliss) and that we can shift into that through hedonic enhancement without leaving the Darwinian paradigm. (He doesn't actually express it like this, but I think it's the gist of the abolitionist project). I'm not sure how probable that is, given the apparent robustness of the asymmetry in evolutionary history. Then again, that robustness may be a sign of a local optimum, and a complete redesign could get us to a new one.

I have thought that "the wisdom of nature" heuristic might provide an objection to the abolitionist project. If creatures could be motivated by gradients of bliss rather than by states involving both pain and pleasure, why haven't such creatures evolved naturally? Your suggestion that it was just an accident might provide an adequate response to the objection. The hypothesis seems hard to test, though, since there are no relevant examples of convergent evolution; there is just one data point.
"‘Méchanique Sociale’ may one day take her place along with ‘Mécanique Celeste’, throned each upon the double-sided height of one maximum principle, the supreme pinnacle of moral as of physical science." -- Francis Ysidro Edgeworth
User avatar
Pablo Stafforini
 
Posts: 177
Joined: Thu Dec 31, 2009 2:07 am
Location: Oxford

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2012-08-20T14:52:00

There is one other problem that became more salient to me recently, namely that the definition of blackmail may actually be quite fuzzy. There may be a thin line between blackmail and mutually beneficial negotiation offers.

Let's say someone with alien values can wipe you out to gain resources, you can retaliate and cause them some cost in return, but not as much as they gain. From their perspective, sparing you would be irrational generosity unless you agree to make up for the difference between their potential gain and the cost you can cause them, e.g. in resource tribute, future cooperation etc. But from an anti-blackmail perspective, who's blackmailing whom, and who makes legitimate offers to whom? Are they blackmailing you in threatening your destruction? They could just do it and take your resources. Are you blackmailing them by pointing out their cost if you retaliate?

Blackmail may even be unavoidable in a civilized context: Is the police blackmailing me by threatening to arrest me if I steal purses? Should I commit to ignore that threat?

It may be hard to draw those lines reliably, especially because they have to work as Schelling points for others. If they think that you think that they perceive the lines differently than you, they may predict you to give in to blackmail anyway. It seems Pablo is correct that this can escalate.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby DanielLC on 2012-08-20T17:20:00

There is one other problem that became more salient to me recently, namely that the definition of blackmail may actually be quite fuzzy.


Any situation in which having the ability to do something hurts you. That's actually a much more general thing, and even covers carrying out blackmail threats.

Basically, just use TDT (I think).

If you're currently using CDT or EDT and can change it, you'd actually end up with something different. With CDT, you'd take the action that maximizes expected utility given what was known when you changed your decision theory and given that you'd make this decision. EDT could be somewhat different, because now other people might adopt decision theories that correlate with yours, but it's the same idea.

This can escalate, but it can't be likely. After all, if people knew it was likely, they wouldn't give it the opportunity to, so it wouldn't be likely.

Re: A few dystopic future scenarios

Postby CarlShulman on 2012-12-07T20:37:00

" No, actually, conditional on humans surviving, the most likely scenario is that we will be outcompeted by Darwinian forces beyond our control."

Brian,

You have big unargued-for assumptions here, leaving out inhuman singletons, for instance. Nick Bostrom, the author of the piece you linked, does not consider the scenarios included in his paper to take up most of the relevant probability mass.

Your estimate of <5% chance of human control of the future is at the extreme left tail of people who have considered the topic, e.g. lower than people at the Future of Humanity Institute or Singularity Institute would say, or surveys of risk experts, AI experts, and surveys of attendees at conferences with AI risk-related talks and workshops.

http://www.philosophy.ox.ac.uk/__data/a ... report.pdf
Gives an AI extinction risk of 5% by 2100 (higher than 'conventional' risks, and highest as a portion of the associated catastrophic risk, but still not massive)

(Also Google "Machine Intelligence Survey," the pdf on the FHI site is down, but you can Quick View the report from Google; this group was more selected specifically for interest in AI risk, but still a majority thought human-level AGI would be very good to neutral/mixed in its impact)

And you leave out opportunities for happiness to be created by civilizations out of our control (which in expectation I think exceed the pain), perhaps because of your negative-leaning utilitarian perspective, which counts the bads but largely ignores goods:

Scientific simulations concerned with characteristics of intelligent life and civilizations would disproportionately focus on intelligent life, and influential intelligent life at that, with a higher standard of welfare.

Humans have used our increased power for extensive 'wireheading': foods prepared for taste, Hollywood, sex with contraception, drugs, pornography, art, video games. Eurisko wireheaded. Some wireheading AIs would have morally valuable states: certainly this possibility is linked to the possibility of suffering subroutines.

And of course suffering subroutines must be contrasted with reward subroutines.

Baby universes (a pretty remote possibility) require talking about measure in an infinite multiverse, which is a bit tricky for this context, but basically given the existence of infinite baby universes there are infinite instances of suffering and happiness no matter what, all one might do is affect measure. And while there is no satisfactory 'preferred' measure for physicists and cosmologists, on various accounts creating baby universes would leave the measure unchanged, or could be more than offset by other actions affecting measure. And in any case if we start considering unlikely possibilities of generating infinite quantities of stuff, then the expected production of sapience/consciousness by intelligent beings goes to infinity.

Also, if we use the Self-Indication Assumption, or the Pascalian total utilitarian equivalent (with one-boxing decision theory), then our attention is focused on worlds in which civilizations are just frequent enough that they can colonize a very large fraction of the universe. Then most of our impact will be in states of affairs where in fact much or most of the animal life is reachable by sapients, and if the sapients convert accessible galactic resources into happy sapient life many orders of magnitude more efficiently than nonsapient life produces net suffering, then the expectation for the universe as a whole is positive (with unreachable wild animals as a rounding error).

See the discussion of the SIA in this paper: http://www.nickbostrom.com/aievolution.pdf

Savage ideologies must be offset against both altruistic and selfish ideologies: creating large amounts of happiness for the good is also an important part of many ideologies. And loyalty to one's group or oneself favors providing more happiness to the above. Resources could be used to increase the longevity, population, and happiness of the in-group, or hurt the outgroup (and with advanced technology the ability to convert resources into life will be greater than in historical times), and the latter is less attractive. Paying large costs to attack others is much less attractive than attacking to steal.

"Torture as warfare" is offset by "heaven in peace". As you discuss in comments, it is a wise and common disposition (for evolutionary, sociocultural, and other reasons) to resent and resist extortion, but to accept mutually beneficial deals. Such trade is positive-sum, while warfare is negative-sum. Such factors would bias inhuman civilizations to producing more goods than bads in interactions with any aliens or other groups concerned with happiness and suffering.

The spread of wild animal life to other stars is offset by the prospect of immensely greater population density in advanced technological civilization. The powerful tend to see to their own pleasure even at the expense of the helpless, but if it is possible to convert the resources sustaining helpless animals into vast numbers of powerful sapient beings, that would tend to happen. See also Robin Hanson's post, "Nature is Doomed":

http://www.overcomingbias.com/2009/09/n ... oomed.html

CarlShulman
 
Posts: 32
Joined: Thu May 07, 2009 2:01 pm

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2012-12-08T03:56:00

Thanks, Carl! To what sorts of scenarios would you assign large probability mass? What kinds of values might inhuman singletons have? Are those different from paperclippers?
User avatar
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: A few dystopic future scenarios

Postby CarlShulman on 2012-12-08T15:01:00

I have updated my comment above to include more detail on some omissions and dubious points bearing on the OP's suggestion that the expected net value of the future is negative for non-negative utilitarians who think happiness matters; by my best estimate, that suggestion is false.

CarlShulman
 
Posts: 32
Joined: Thu May 07, 2009 2:01 pm

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2012-12-08T19:12:00

Thank you, Carl. Your reply is excellent.

Carl's points

Carl and I discussed these issues privately as well to clarify some of his points. I'll do my best here to explain what he said as I walk through his arguments.

How likely that human values control the future?

Carl asked for my probability that, conditional on humans not going extinct by other means (nanotech, biowarfare, etc.), the future would be shaped by human values rather than something that outcompetes us. I said <5%, because keeping intelligences under your control seems really hard. Not only might it require a singleton, but it also requires that the humans controlling the singleton know what they're doing rather than creating paperclippers, which seem to be the default outcome unless you're really careful. As Eliezer has said, even one mistake in the chain of steps needed to get things right would spell failure for human values.

Carl suggested that there are other outside-the-box scenarios for maintaining control of the future that I haven't considered. And as his reply notes, my probability is lower than that of anyone at FHI or SIAI, which means I should revise it upward.

How bad would UFAIs be?

I use the term "unfriendly AIs" (UFAIs) to denote AIs that are not controlled by human values. The terminology doesn't imply that they'll necessarily do worse things than human-controlled AIs would -- indeed, they might actually cause less total suffering.

I had presumed that even non-negative-leaning utilitarians might agree that UFAIs would be a net bad outcome. I figured UFAIs wouldn't have much incentive to produce good things (e.g., hedonium), and at the same time, they might do things that would be net bad, like running sentient simulations of nature in order to learn about science.

Carl pointed out that sentient simulations would focus on the more intelligent minds that would be more likely to be net happy. This is true, but you need only simulate a few ant colonies to outweigh a whole bunch of simulations of happy rich people. So I'm doubtful that even for a regular utilitarian, sentient simulations would be positive.

That said, Carl makes other good points: If we got a wirehead AI of the right type (i.e., one where its wireheading was actually pleasure rather than just computations I don't care about), that would be a good thing. And yes, there might also be reward subroutines. Suffering predominates over happiness for wild animals because (a) most wild animals die shortly after birth, and (b) suffering in general is more intensely bad than pleasure is good. I hope (a) wouldn't apply to subroutines, and in any event, death wouldn't be painful for them. Maybe (b) wouldn't apply either, because the cognitive algorithms for pleasure and pain might be symmetric? Or is there something fundamental about the algorithm for suffering that makes it inherently more bad than pleasure is good? As I noted before, "P(D > H) > P(H > D), even if it's not a big difference."

Baby universes

"[Carl:] basically given the existence of infinite baby universes there are infinite instances of suffering and happiness no matter what, all one might do is affect measure"

I don't know how to deal with infinite ethics, but I think my preferred ethical approach would say that it is bad to create new universes even if there are already infinitely many of them and even if doing so doesn't change the relative balance of happiness vs. suffering that they contain.

Are most wild animals reachable?

Carl makes an interesting argument, which might be illustrated as follows.

Hypothesis 1: There isn't much life in the universe.
Hypothesis 2: There are lots of wild animals, but few minds like my own that can shape technology and undertake cosmic rescue missions.
Hypothesis 3: There are lots of wild animals but also lots of minds like my own that can shape technology and undertake cosmic rescue missions.

By the Self-Indication Assumption (SIA), Hypothesis 3 is favored over Hypothesis 2 by orders of magnitude, because under Hypothesis 3 there are orders of magnitude more copies of Brian. So even if Hypothesis 3 were disfavored on other grounds (our knowledge of cosmology, astrobiology, etc.), Hypothesis 3 still wins out in the end, as long as the factor by which Hypothesis 3 multiplies the number of copies of Brian exceeds the prior-probability advantage of Hypothesis 2 over Hypothesis 3.
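A minimal Python sketch of that update rule (posterior weight proportional to prior times observer count), with made-up priors and observer counts purely for illustration:

hypotheses = {
    # name: (prior probability, number of observers in my situation)
    "H1: little life": (0.50, 1e3),
    "H2: many animals, few tech civilizations": (0.45, 1e6),
    "H3: many animals, many tech civilizations": (0.05, 1e12),
}

# SIA: posterior weight is proportional to prior * number of observers.
weights = {name: prior * n for name, (prior, n) in hypotheses.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(f"{name}: SIA posterior ~ {w / total:.6f}")

# Despite a 9x prior disadvantage, H3 dominates because it contains about a
# million times more copies of the observer than H2 does.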

Even if you don't buy SIA, the claim is that the same preference for Hypothesis 3 over 2 can come from one-boxing total utilitarianism. I think the argument is something like, "If we are in Hypothesis 3, then we can do way more total good in the universe than if we're in Hypothesis 2, so we should act as though we're in Hypothesis 3." This is true as far as things like undertaking cosmic rescue missions -- if there's even a small probability we (and our copies in other parts of spacetime) can help vast numbers of wild animals, the expected value may be high enough to justify it. However, this same kind of reasoning doesn't apply when we're talking about whether, say, we want to create new universes like ours. In that case, our probabilities should follow what we actually think is the case, rather than the way we act on the off chance that it'll have high payoff. I may misunderstand here, so I welcome being corrected.

Savage vs. nice ideologies

Carl makes good points here, especially as far as the observation that selfishness/stealing are more desirable than costly punishment.

Ways forward, revisited

Most of Carl's points don't affect the way negative utilitarians or negative-leaning utilitarians view the issue. I'm personally a negative-leaning utilitarian, which means I have a high exchange rate between pain and pleasure. It would take thousands of years of happy life to convince me to agree to 1 minute of burning at the stake. But the future will not be this asymmetric. Even if the expected amount of pleasure in the future exceeds the expected amount of suffering, the two quantities will be pretty close, probably within a few orders of magnitude of each other. I'm not suggesting the actual amounts of pleasure and suffering will necessarily be within a few orders of magnitude but that, given what we know now, the expected values probably are. It could easily be the case that there's way more suffering than pleasure in the future.

If you don't mind burning at the stake as much as I do, then your prospects for the future will be somewhat more sanguine on account of Carl's comments. But even if the future is net positive in expectation for these kinds of utilitarians (and I'm not sure that it is, but my probability has increased in light of Carl's reply), it may still be better to work on shaping the future rather than increasing the likelihood that there is a future. Targeted interventions to change society in ways that will lead to better policies and values could be more cost-effective than increasing the odds of a future-of-some-sort that might be good but might be bad.

As for negative-leaning utilitarians, our only option is to shape the future, so that's what I'm going to continue doing.
User avatar
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: A few dystopic future scenarios

Postby CarlShulman on 2012-12-08T19:27:00

Brian, you should also modify the OP to take this exchange into account.

CarlShulman
 
Posts: 32
Joined: Thu May 07, 2009 2:01 pm

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2012-12-08T19:40:00

CarlShulman wrote:Brian, you should also modify the OP to take this exchange into account.

Haha, yes, I was planning to do that but forgot. :)

I don't know how best to incorporate the discussion without making things messy, but maybe the best approach is to copy my "Ways forward, revisited" section into the front to make sure people see it?
User avatar
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2012-12-08T20:09:00

Pablo Stafforini wrote:
Brian Tomasik wrote:I agree that committing not to be blackmailed is an excellent strategy.

I disagree. If you commit yourself not to be blackmailed, you might end up being super-tortured by someone who committed himself to carry out all his blackmail threats. For every threat-ignorer, there is a potential threat-fulfiller.

Hmm, interesting. That said, if you're really serious about not caving in to threats, then you still won't cave in to this threat-ignorer either. In that case, his stance won't do any good, and he'll waste a lot of resources on threats that don't do anything.

It does sound like a tricky issue, though. I'd love to learn more about what scholars think about it.
User avatar
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: A few dystopic future scenarios

Postby CarlShulman on 2012-12-08T21:01:00

"If the expectation values of H and D are roughly linked, and open colonization and evolution cause strong selection effects against using resources on H and D, H-D may not dominate the expected utility of a big future after all."

Hedonic Treader,

For a given expected quantity of H, a given expected quantity of D, and a utility function that values them additively, the distribution of H and D across futures can't affect expected utility.

Maybe you meant to say that differences in H-D will contribute less to differences in actual realized utility across scenarios. But unless they are very close to equal the penalty will not be very large relative to the orders of magnitude of efficiency gains. If you have a ratio of 3H:1D, then the H-D value is 2, whereas the difference in H-D value between a world with 3H and a world with 1 D is 4, twice the net of the mixed world. With a H:D ratio of 1.5:1, the net H-D would be 0.5, vs a gap of 2.5, five times as great. And we would not expect H and D to be perfectly correlated.
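The arithmetic in the previous paragraph, spelled out as a quick Python check (the function name compare is just illustrative):

def compare(h, d):
    net_mixed = h - d   # net value of the mixed world
    gap = h - (-d)      # H-only world minus D-only world
    print(f"H:D = {h}:{d} -> net of mixed world = {net_mixed}, "
          f"gap between pure worlds = {gap} ({gap / net_mixed:.0f}x the net)")

compare(3, 1)    # net 2, gap 4: twice the net of the mixed world
compare(1.5, 1)  # net 0.5, gap 2.5: five times as great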

CarlShulman
 
Posts: 32
Joined: Thu May 07, 2009 2:01 pm

Re: A few dystopic future scenarios

Postby mwaser on 2012-12-09T22:35:00

I directly address a lot of these issues in my recent article at Transhumanity.net at http://transhumanity.net/articles/entry ... telligence

mwaser
 
Posts: 1
Joined: Sun Dec 09, 2012 10:32 pm

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2012-12-10T16:33:00

Brian wrote:It would take millions of years of happy life to convince me to agree to 1 minute of burning at the stake.

Do you hold a corresponding belief of the following sort? "There are neurons encoding pain intensity in such a way that the encoded intensity when burning at the stake is literally 5 x 10^10 higher than the average encoded pleasure intensity of happy life"?

In other words, do you expect to find neurons (coding pain intensity) that fire 10 orders of magnitude more frequently than corresponding neurons (coding pleasure intensity)?
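For scale, a rough Python conversion of an exchange rate of the form "X years of happy life per minute of burning" into the kind of intensity ratio being asked about; the year figures are illustrative spans only:

MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960

for years in (1e5, 1e6):
    ratio = years * MINUTES_PER_YEAR  # happy minutes per minute of burning
    print(f"{years:.0e} years per minute of burning -> intensity ratio ~ {ratio:.1e}")

# Both land around ten to eleven orders of magnitude, the scale behind the
# "5 x 10^10" figure and the "10 orders of magnitude" phrasing above.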
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2012-12-10T18:42:00

Hedonic Treader wrote:In other words, do you expect to find neurons (coding pain intensity) that fire 10 orders of magnitude more frequently than corresponding neurons (coding pleasure intensity)?

No, definitely not. However, I believe our degree of concern about something doesn't have to track the literal number of neurons of a component part.

From a conversation with Ruairi yesterday:
Ruairi: I had thought maybe there was some way to decide my exchange rate such as "1 gram of neurotransmitter X released at point A is as good as 1 gram of neurotransmitter Y released at point B is bad". But thinking about it more now, there doesn't seem to be any reason why a measure like this is any more objective than just deciding oneself

Brian: As far as 1 g neurotransmitter vs. 1 g another neurotransmitter, that could give us some grounding for comparison, but there's a lot more going on. For example, the subjective goodness/badness of stuff involves a lot of manipulation by the conscious brain, activity in the ventral pallidum for pleasure, evaluation combined with raw experience, etc.

In other words, the relevant things are high-dimensional and potentially qualitative. For example, when you feel the same pain, it can seem a lot worse or not depending on whether you know it's causing tissue damage, etc. There's a classic study about people walking across a bridge and mistaking fear for romantic attraction. The same chemicals can feel very different depending on context. There's also longer-term evaluation of an experience (how bad was that?) which involves conscious reflection, etc.

Anyway, point is just that there's a lot of messiness to consider. I think quantitative stuff is relevant and can shape our intuitions, but it will take a while for neuroscience to refine our understanding of what's going on that we care about.

Ruairi: Hm, but even if neuroscience does come up with something, there's no reason we should care what really is there? But maybe I just do care.

Brian: Yes, even if neuroscience comes up with things, we don't have to care. However, we might choose to care because it will change our intuitions. For example, if you didn't know that animals were physiologically similar to humans, you might not care about them at first. You could still not care about them after learning the similarities, but the similarities generally change your intuitions.


Discussing exchange rates with Peter on Facebook, I suggested I might be willing to relax my exchange rate to 1 minute of torture for maybe thousands/hundreds of years of happy life, since I'm undoubtedly biased by scope insensitivity. My exact feelings on this change depending on my mood.
User avatar
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2012-12-10T20:08:00

Brian Tomasik wrote:Discussing exchange rates with Peter on Facebook, I suggested I might be willing to relax my exchange rate to 1 minute of torture for maybe thousands/hundreds of years of happy life, since I'm undoubtedly biased by scope insensitivity. My exact feelings on this change depending on my mood.

Yes, it's the same for me. On the one hand, imagining something like burning alive for one minute creates very strong aversion. On the other hand, I've probably had more than one minute of total agony if I were to aggregate seconds of pain that happened independently, but with high intensity, in my actual past. And I don't feel that this, in and of itself, negates the value of my life so far (the other day-to-day unpleasantness does a lot of that though). And I have an intuition that these judgments should correspond, i.e. that it shouldn't matter whether these seconds were in succession or not. In addition, I also have the intuition that if neuroscience came up with a way to literally measure pleasantness/unpleasantness intensity encodings in the brain, that would probably increase my disposition to accept them more directly, i.e. with less of the meta-valuation. When Kahneman talks about the experiencing self vs. the remembering self (and I would add an anticipating or imagining self for things like burning at the stake), I mostly come down on caring about the experiencing self, not so much about the other selves.

Carl wrote:For a given expected quantity of H, a given expected quantity of D, and a utility function that values them additively, the distribution of H and D across futures can't affect expected utility.

Yes, and that wasn't the claim I was trying to make.

Carl wrote:Maybe you meant to say that differences in H-D will contribute less to differences in actual realized utility across scenarios. But unless they are very close to equal the penalty will not be very large relative to the orders of magnitude of efficiency gains. If you have a ratio of 3H:1D, then the H-D value is 2, whereas the difference in H-D value between a world with 3H and a world with 1 D is 4, twice the net of the mixed world. With a H:D ratio of 1.5:1, the net H-D would be 0.5, vs a gap of 2.5, five times as great. And we would not expect H and D to be perfectly correlated.

You're correct: if the expected quantities of H and D are different enough despite sharing common causes (e.g. the existence of powerful enough factions who explicitly care about hedonistic utility), and if these quantities are big enough compared to the rest of the utility distribution, their efficiency gains can dominate the calculus. However, I think the conditions for both H and D share common elements, and are both quite narrow in comparison to the conditions for the existence of a general landscape of hedonistic utility (e.g. evolving/colonizing sentience not deliberately optimized for intensity/duration of pleasantness/unpleasantness). It's not clear to me that H-D dominates the big picture.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Pablo Stafforini on 2012-12-10T22:44:00

Brian Tomasik wrote:Hmm, interesting. That said, if you're really serious about not caving in to threats, then you still won't cave in to this threat-ignorer either. In that case, his stance won't do any good, and he'll waste a lot of resources on threats that don't do anything.


From a decision-theoretic perspective, I think the situation is symmetrical. You could equally say that if you are really serious about issuing threats, you would fulfill even threats issued to threat-ignorers. In that case, the threat-ignorer's stance won't do any good, and he'll suffer a lot of pain needlessly.
"‘Méchanique Sociale’ may one day take her place along with ‘Mécanique Celeste’, throned each upon the double-sided height of one maximum principle, the supreme pinnacle of moral as of physical science." -- Francis Ysidro Edgeworth
User avatar
Pablo Stafforini
 
Posts: 177
Joined: Thu Dec 31, 2009 2:07 am
Location: Oxford

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2012-12-11T03:09:00

Hedonic Treader wrote:I've probably had more than one minute of total agony if I were to aggregate seconds of pain that happened independently, but with high intensity, in my actual past.

I think I have not. I've experienced plenty of severe pain, but I don't think the sum total of it equals one minute of burning at the stake. My perceived badness of pain is extremely nonlinear with respect to "objective" measures of intensity. For example, I'm the kind of person who would rather have nausea and stomach pain for 2.5 hours to avert vomiting than throw up in 30 seconds and get it over with from the beginning.

Hedonic Treader wrote:if neuroscience came up with a way to literally measure pleasantness/unpleasantness intensity encodings in the brain, that would probably increase my disposition to accept them more directly, i.e. with less of the meta-valuation.

Haha, sure, but the tadpoles being eaten alive right now don't have this neuroscience perspective with which to allay their meta-valuations. :)

Hedonic Treader wrote:I mostly come down on caring about the experiencing self, not so much about the other selves.

Me too.
User avatar
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2012-12-11T10:03:00

Brian Tomasik wrote:I think I have not. I've experienced plenty of severe pain, but I don't think the sum total of it equals one minute of burning at the stake. My perceived badness of pain is extremely nonlinear with respect to "objective" measures of intensity. For example, I'm the kind of person who would rather have nausea and stomach pain for 2.5 hours to avert vomiting than throw up in 30 seconds and get it over with from the beginning.

Maybe you're right and pain intensity is really very non-linear on the extreme end. I've had my encounters with scalding hot water and so on, but maybe you can't sum it up to equal one minute of burning at the stake. However, neither of us has ever burned at the stake (I hope), and the converse is also quite plausible: Maybe once the pain becomes constant, adrenalin, shock or psychological mechanisms set in and the total experience becomes a blur. How would we know this without having had the experience? And even if we had, memories might not be accurate.

For what it's worth, my memory says that throwing up is less bad than it seems when I anticipate it during nausea.

EDIT: One more thought: If we really care disproportionately about the extremes of pain and suffering, the very lowest-hanging fruit would be hedonic enhancement that takes the edge off of those extremes. A biotech intervention that reduces peak agony intensity by 30%, say, should then yield practically world-shifting utility increases, ceteris paribus. That can't be too hard, compared to other strategies.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Ruairi on 2012-12-11T17:00:00

Brian Tomasik wrote:My perceived badness of pain is extremely nonlinear with respect to "objective" measures of intensity. For example, I'm the kind of person who would rather have nausea and stomach pain for 2.5 hours to avert vomiting than throw up in 30 seconds and get it over with from the beginning.


This doesn't affect your values though, right? Just what you happen to enjoy less than other things?
User avatar
Ruairi
 
Posts: 392
Joined: Tue May 10, 2011 12:39 pm
Location: Ireland

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2012-12-11T20:27:00

Hedonic Treader wrote:Maybe once the pain becomes constant, adrenalin, shock or psychological mechanisms set in and the total experience becomes a blur.

Could be. We can hope. If we assigned 50% probability that it's not as bad as it seems, the expected pain would be reduced by almost 1/2.

Hedonic Treader wrote:For what it's worth, my memory says that throwing up is less bad than it seems when I anticipate it during nausea.

I'm not sure. Throwing up is pretty bad, but it's possible the anticipation exaggerates. Even if I could choose now from the cold-headedness of my armchair which I would prefer, I think I'd still go for the 2.5 hours of agony. OTOH, I haven't vomited since ~1999, so it's possible my brain has built up the illusion that it would be worse than it is.

Hedonic Treader wrote:A biotech intervention that reduces peak agony intensity by 30%, say, should then yield practically world-shifting utility increases, ceteris paribus.

Could be! But how are you going to install those into billions of one-day-old minnows being eaten? At this point it's just more feasible to reduce populations of short-lived, r-selected animals.

Ruairi wrote:This doesn't affect your values though, right?

Well, it's a case study to suggest that maybe other people don't realize how bad severe pain is relative to how bad I think it is. Correspondingly, my pain-pleasure exchange rate will tend to be more lopsided than theirs. There are (at least) two possibilities here, both of which could be partly true:
  1. My memories about the severity of the bad stuff are different from theirs. Theirs might be wrong, or mine might be wrong, but either way we should move our estimates toward each other.
  2. Due to individual differences, my experience of the badness of pain is actually worse than theirs for the same kinds of experiences. In this case, we would still move our exchange rates in each other's directions in the sense that, averaged over the population, some people will have higher exchange rates and some will have lower exchange rates. Those who thought pain wasn't so bad will realize that some people think it actually is. I who thought pain was really bad will realize that some people think it's not.
User avatar
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2012-12-11T23:33:00

Brian Tomasik wrote:
Hedonic Treader wrote:A biotech intervention that reduces peak agony intensity by 30%, say, should then yield practically world-shifting utility increases, ceteris paribus.

Could be! But how are you going to install those into billions of one-day-old minnows being eaten?

Cyborgification by self-replicating nanites, of course! :D

Seriously, I agree there is no good way to prevent the suffering of trillions of wild fish at this point. Killing everything off isn't very popular or feasible either even though human industries/agriculture do some of it for profit. Enacting some artificial selection pressure on wild animal populations to increase welfare traits like lower peak pain intensity is just one option for the future, as is the "welfare state" option and the extinction option.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2012-12-13T12:07:00

Brian Tomasik wrote:First, a follow-up on the discussion about threats. The best strategy of all, where possible, is simply to not even become aware of the mugging in the first place. If someone sends you a ransom note, commit to not reading it.

This doesn't work:

  • They can send you threats in formats you weren't expecting
  • They can send you threats in a format that makes you aware they don't know whether you received it or not
  • They can commit to ignore your ignoring

Another strategy would be to commit to not caring about intentionally created suffering, and only that (i.e. focus on creating pleasure, while making your own instruments cruelty-free if possible).
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2012-12-15T10:29:00

Hedonic Treader wrote:They can send you threats in formats you weren't expecting

It's not a foolproof strategy, but if you start decoding something suspicious, you can stop. You can also commit to not responding even if you accidentally read it. It's just that not reading it helps.

Hedonic Treader wrote:They can send you threats in a format that makes you aware they don't know whether you received it or not

Yeah, this is like a situation of "he who ties his hands last ties his hands best" or whatever the expression should be. Even if the threatener doesn't know if you read it, you should still not read it. If enough of those he threatens don't respond, he'll hopefully go bankrupt and run out of threatening resources.
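A toy Python simulation of that bankruptcy intuition; the budget, threat cost, and ransom are invented placeholders:

import random

random.seed(0)

def run(refuser_fraction, budget=100.0, threat_cost=1.0, ransom=5.0, cap=10_000):
    """Simulate threats until the blackmailer is broke or the cap is hit."""
    threats = 0
    while budget > 0 and threats < cap:
        budget -= threat_cost
        if random.random() >= refuser_fraction:  # this target caves and pays
            budget += ransom
        threats += 1
    return threats, budget <= 0

# Break-even refuser fraction here is 1 - threat_cost/ransom = 0.8.
for frac in (0.5, 0.8, 0.95):
    threats, broke = run(frac)
    status = "went broke" if broke else "still solvent at the cap"
    print(f"{frac:.0%} committed refusers: {status} after {threats} threats")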
User avatar
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: A few dystopic future scenarios

Postby Ruairi on 2013-01-03T01:22:00

Today I had a few concerns, apologies for the writing style, this is how I write when I'm talking to myself:

If the technology to make artificial sentients of any kind becomes (easily) available in the future we could simply persuade people to do like “SETI at home” and run tonnes of happy sentients.

What if it's not easily available?

It doesn’t seem too hard to convince people to run something like this, and it would likely exceed any suffering simulations? Also they might be run on gradients of bliss instead of suffering.

Or would this work? We can’t do the same thing with the meat industry. What’s the economic cost of simulating a mind?

So the real danger is spreading wild life to other planets? How likely is this?

Do we really know what’s likely at all? Or is it better to improve values because we’re so unsure what future technology will be, but it seems that: power + current values = bad?

Suffering sentients also maybe unlikely because they could work on gradients of bliss too. However given current situation of nature suffering seems to work better in evolution? If these sentients will dominate calculations not too hard to push for welfare there? Ask to use gradients of bliss? Not very "out there" either really.

Perhaps we should be pushing for simulations to be made if it would be easy for their welfare to be made high?

"Happy at Home"
User avatar
Ruairi
 
Posts: 392
Joined: Tue May 10, 2011 12:39 pm
Location: Ireland

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2013-01-03T03:54:00

I didn't answer all of Ruairi's questions here, but following are a few thoughts.

There are two questions being discussed on this thread:
(#1) Is it good to reduce extinction risk?
(#2) What's the best thing we can do to improve possible future outcomes?

The "Happy at Home" scenario (i.e., utilitronium) doesn't necessarily tip the balance on question #1 because it seems likely to me that in most future scenarios it won't be human values that control the outcome; something else that outcompetes us will take over instead. Even if it is human values, it's probably not going to be our values -- seeing how competitive these things are -- so we shouldn't plan on the future being the way we would like it to be. Almost certainly it won't be.

For #2, there's the question of whether promoting utilitronium would be more important than preventing the spread of wild-animal suffering, sentient simulations, suffering subroutines, torture, etc. This is less obvious.

For me personally, my conscience says I should reduce suffering rather than increasing pleasure, because suffering is just morally urgent in a way that forgone happiness isn't. David Pearce takes a similar view. Given this, I'm going to focus on preventing the spread of suffering.

If you feel equal urgency toward both sides, then it's a harder question to answer. Preventing the spread of wild animals and sentient simulations are pretty esoteric ideas, such that there are high potential returns from propagating concern about the issue. But utilitronium is also a pretty esoteric value, and it's sad that more people don't support it (outside of Felicifia, where the idea is better received). Making hedonistic utilitronium (intense hedonic pleasure experienced by huge numbers of tiny minds) more mainstream could have high leverage as a cause. I wouldn't support it myself, but I won't say it's a bad idea if you don't feel greater urgency toward suffering reduction.

Another angle would be, as you suggest, to try to turn suffering subroutines into gradient-of-bliss subroutines. This would be parallel to efforts to make factory farms more humane. It would be hard because of economic pressures, but it's possible it could be a point of leverage for post-humans to work on. Maybe right now the best thing would be to raise concern about suffering subroutines in general, since we don't currently have "factory farms of suffering subroutines" where we can lobby for changes.

The question of why nature doesn't use gradients of bliss is worth asking. I don't know if it's by accident or because gradients of bliss are somehow harder. If the latter is true, then there might be economic cost to using gradients of bliss.
User avatar
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2013-01-03T15:37:00

In a chat with Peter Hurford on 24 Dec 2012, I made these comments, relevant to what I said in the previous post:
For me, the main problem is that not everyone shares my values. Even ignoring pain-pleasure exchange rates, most people:
1. aren't hedonistic utilitronium-favoring utilitarians
2. don't care about insects near equally with big animals [...].

Basically, for me, even if reducing extinction were good (and I think it's not), there could be higher leverage in letting other people do it and trying to make the result better.

The difference between a future with and without utilitronium could be like [100x] the difference between survival vs. extinction, even if extinction were bad.

That's an example of how values can dominate the calculations vs. just survival or not. In general, if you want to maximize a specific thing X, it's probably better to focus on X than to focus on survival per se.
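A toy expected-value comparison of the two levers, with invented utilities and probabilities (the 100x echoes the bracketed figure above), and granting survival a positive expected value purely for the sake of the arithmetic:

U_EXTINCTION = 0.0
U_SURVIVAL_DEFAULT = 1.0
U_SURVIVAL_UTILITRONIUM = 100.0  # echoes the bracketed [100x] above

def expected_value(p_survival, p_util_given_survival):
    u_survival = (p_util_given_survival * U_SURVIVAL_UTILITRONIUM
                  + (1 - p_util_given_survival) * U_SURVIVAL_DEFAULT)
    return p_survival * u_survival + (1 - p_survival) * U_EXTINCTION

base = expected_value(0.50, 0.02)
print(f"+1 point of survival probability:      {expected_value(0.51, 0.02) - base:+.3f}")
print(f"+1 point of P(utilitronium | survive): {expected_value(0.50, 0.03) - base:+.3f}")

# With these placeholder numbers the value shift is worth roughly 17x the
# survival shift, which is the sense in which values dominate the calculation.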
User avatar
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: A few dystopic future scenarios

Postby Ruairi on 2013-01-03T23:10:00

Awesome thank you!:D!

I was thinking that:

If simulated minds will be cheap and easily available then it might not be too hard to encourage *utilitronium at home*. (But still maybe we should promote concern for them).

If they will be expensive and easily available then there probably won't be many of them and it won't be a big deal.

If they will be cheap and not easily available (due to laws, etc?) then there probably won't be many of them and it won't be a big deal.

If they will be expensive and not easily available then there probably won't be many of them and it won't be a big deal.

So maybe sentient simulations aren't such a huge issue? Or maybe they are, but we should be promoting utilitronium-style scenarios rather than warning against suffering simulations?

Another danger might be that the simulations might have a high chance of being sentient but people might not believe it. Especially if it means they'll get to avoid ethical responsibility. (Empathy avoidance?).

Anyway I think the future looks sufficiently bad that the best things to do right now, and probably for a while, are to make new activists in the areas of utilitarianism, antispeciesism, reducing wild animal suffering (RWAS), not spreading sentients to other planets, and raising concern for sentient simulations.

I guess which thing to pick should be chosen by how good a new activist is and how hard it is to make a new activist. For example a new utilitarian is the gold standard, a new RWAS/antispeciesism activist is maybe 70-100% as good depending on how good they are, but they're probably a lot easier to find/inspire in a new person.
But, for example, if we had a high certainty that sentient simulations were going to dominate future calculations then we should maybe do that.

What futures have a lot of power to make lots of sentients? How likely are they? Is the most likely future something we haven't thought of?

What are the results of researching future utility? Makes the util movement better? Doesn't actually yield useful results because it's too hard a question? It also depends on people's values what they'll think of different outcomes.

If the technology for helping wild animals on a large scale is made, will the technology for making sentient simulations or spreading life to other planets also be made (because similar things need to be invented)? What does this mean regarding what we should do?

What's the percentage chance the major source of power in the next 100 years is something we haven't predicted? And will that power lead to the control of future major (AI?) power?

Sorry to brain-vomit again. I don't wanna suffer from status quo bias and stick with RWAS because I like RWAS activism :)
User avatar
Ruairi
 
Posts: 392
Joined: Tue May 10, 2011 12:39 pm
Location: Ireland

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2013-01-04T00:30:00

It makes sense to distinguish sentient simulations created as a means to an end and ones created as an end.

Those created as a means could serve entertainment purposes (like today's NPCs), scientific experiments, simulating enemies to gain strategic insight, and so on. This could be quite large-scale and nasty, but it's not clear to what degree.

Those created as an end would probably be the object of intentional benevolence - after all, someone needs to spend resources on them and they serve no other function. They may have citizenship status, be seen as happy pets, or maybe copies of existing people or some such. If something like current human values persist, it would probably be publicly favored if they are happy and disfavored if there is cruelty, maybe with some exceptions (e.g. punishment of badly behaving ones). If such intentional benevolence is non-trivial, some altruists may be swayed to optimize the happiness deliberately. With this kind of technology, simulating (attractive) happy beings who enjoy existing would be a probable form of charity.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2013-01-04T00:48:00

Brian wrote:That's an example of how values can dominate the calculations vs. just survival or not. In general, if you want to maximize a specific thing X, it's probably better to focus on X than to focus on survival per se.

Yes, but it's not clear to what degree this still applies if you want to minimize a specific thing (e.g. agony). It also depends on the relative probability shifts you think you can create for survival vs. value change. Maybe it is a lot harder to affect values than to affect survival probability, or vice versa.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am


Re: A few dystopic future scenarios

Postby Brian Tomasik on 2013-01-26T12:10:00

Let me first discuss a topic that Ruairi has raised elsewhere: Should we promote utilitarianism-in-general rather than working on reducing wild-animal suffering (WAS), preventing suffering subroutines and sentient simulations (SSSS), etc.?

A few thoughts. I'm wary of promoting general utilitarianism rather than negative-leaning utilitarianism because general utilitarians sometimes use a crazily optimistic pain:pleasure exchange rate that IMHO doesn't realize how serious extreme suffering is. This leads people to think that reducing extinction risk might be a good thing, when in fact it probably isn't. So promoting regular utilitarianism without focusing on suffering could have bad side-effects.

Okay, so what about promoting negative-leaning utilitarianism? This could be a good idea, although again I would make a few provisos.
  • The utilitarian focus should be practical rather than theoretical. There are endless utilitarianism vs. deontology debates, and if we come across as people who just want to push fat guys in front of trains, that's not very helpful, because it just polarizes people into their respective camps. Utilitarianism is often best when it doesn't dwell on things that are both (a) counterintuitive to some people and (b) not really important anyway. We should focus on things that are possibly counterintuitive but only when they're really important, like WAS, SSSS, etc.
  • Along these same lines, the (negative-leaning) utilitarianism (NLU) that we promote should be active. We should outline the principles that guide us, but we should also point to things that people can be doing to make a difference given those principles -- e.g., preventing WAS, SSSS, etc.
So even promoting NLU may end up looking a lot like working on WAS and SSSS; it's just that the framework behind it will be broader and more open to other possibilities that arise.

Now, for better or worse, our new wild-animal organization doesn't take an official stance on utilitarianism vs. other ethical positions, so we can't do this kind of all-out NLU promotion that we might like to do otherwise. But I can do so from the outside, and once the organization is up-and-running for a few years, you can also potentially become an outsider and then shamelessly promote NLU too, citing our organization as an example of something concrete that NLUs can do -- just like NLUs now point toward Vegan Outreach or Effective Animal Activism in the same sort of way.

So we don't have to pick between WAS / SSSS and NLU-promotion. We can do both, but it would be really handy if the WAS organization were set up as one prominent example of what utilitarians can do to help.

Ruairi wrote:If simulated minds will be cheap and easily available then it might not be too hard to encourage *utilitronium at home*. (But still maybe we should promote concern for them).

I would guess they could eventually become fairly cheap. As you hint, it's another question whether they would be allowed or funded, but that's part of the point of advocacy -- to push for changes in laws.
Note: I think they might be pretty cheap under a small fraction of futuristic scenarios that tend to fall into our conventional modes of thinking. I think these scenarios occupy a tiny proportion of the likely outcomes, though.

Ruairi wrote:So maybe sentient simulations aren't such a huge issue? Or maybe they are, but we should be promoting utilitronium-style scenarios rather than warning against suffering simulations?

I'm going to focus on suffering simulations because I'm more worried about preventing suffering, but I acknowledge that both sides could have decent expected returns.

Ruairi wrote:Another danger might be that the simulations might have a high chance of being sentient but people might not believe it.

Yeah, this is a big part of what SSSS involves -- making people see that these seemingly inhuman algorithms may actually be closer to suffering algorithms in humans than they realize. (This assumes that they actually are close to suffering algorithms in humans. Whether this would be true is unclear. At least, certain types of sims -- like whole-brain emulations (WBEs) -- would probably suffer, and yet maybe without bodies, people wouldn't care as much. That said, I think ethical concern for WBEs is pretty solid, so maybe we don't need to focus on those. I would worry more about the animal-like and insect-like WBEs.)

Ruairi wrote:Anyway I think the future looks sufficiently bad that the best things to do right now and probably for a while is to make new activists in the areas of utilitarianism, antispeciesism, reducing wild animal suffering (RWAS), not spreading sentients to other planets, raising concern for sentient simulations.

Yep!

Ruairi wrote:What futures have a lot of power to make lots of sentients? How likely are they?

Many futures controlled by an AGI could make lots of sentients, although maybe most paperclippers wouldn't make a lot of sentients except for possibly suffering subroutines. I don't know exactly how likely sentient-dense futures are, but if I ignored the doomsday argument, I'd give at least, I don't know, 2% odds for a sentient-dense future.

Here are some totally random numbers for my non-doomsday-updated probabilities (a quick arithmetic check on them follows the list):
  • 15% chance of near-term extinction due to non-AI things (e.g., nanotech).
  • 20% chance humans don't develop AGI and hence don't do massive amounts of galactic colonization.
  • 59% chance of a paperclipper or other kind of non-human-controlled force that takes over. (This could still include some high-sentient-density scenarios.)
  • 4% chance humans are in control but don't create high sentient density.
  • 2% chance humans are in control and create high sentient density.
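A quick arithmetic check on these figures, with one conditional probability they imply (the labels in the code are just shorthand for the bullets above):

p = {
    "near-term non-AI extinction": 0.15,
    "no AGI, no massive colonization": 0.20,
    "non-human-controlled force takes over": 0.59,
    "human control, low sentient density": 0.04,
    "human control, high sentient density": 0.02,
}

assert abs(sum(p.values()) - 1.0) < 1e-9  # the five branches cover everything

p_human_control = (p["human control, low sentient density"]
                   + p["human control, high sentient density"])
p_no_near_term_extinction = 1 - p["near-term non-AI extinction"]

print(f"P(human control)                           = {p_human_control:.2f}")
print(f"P(human control | no near-term extinction) = "
      f"{p_human_control / p_no_near_term_extinction:.3f}")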

Ruairi wrote:Is the most likely future something we haven't thought of?

Probably, although any future that contains AGI has to retain certain characteristics (the basic AI drives).

Ruairi wrote:What are results of researching future utility?

Well, for one thing, it tells us not to work on reducing extinction risk. It also gives us hints about what kinds of things to worry about, so that we can prepare the ground now. WAS and SSSS are almost entirely about future-of-humanity concerns. Same with promoting NLU. You could argue that nothing is cost-effective unless it affects the future of humanity.

Ruairi wrote:If the technology for helping wild animals on a large scale is made, will the technology for making sentient simulations or spreading life to other planets also be made (because similar things need to be invented)? What does this mean regarding what we should do?

Technology to prevent WAS is more basic (some rudimentary forms are already here), but more sophisticated RWAS tech would probably come along with the same kind of AI that would better understand sentient sims and panspermia. I don't think this point is too relevant, though, because (1) the most important part of RWAS in the long term is to prevent the spread of WAS to other planets or in sims rather than to reduce it on Earth, (2) we're not going to advocate general tech improvement anyway. It's better to focus on values than knowledge or technology, because knowledge and technology will always come along for the ride with more rational future civilizations, but values are arbitrary and hence fragile. Of course, it might be a good idea to encourage WAS-specific tech in our own work (e.g., humane insecticides), but the causal contribution of this to greater ability to create sims is basically zero.

Ruairi wrote:What's the percentage chance the major source of power in the next 100 years is something we haven't predicted? And will that power lead to the control of future major (AI?) power?

Well, what kind of powers do you have in mind? It has to be humans of some sort who create AGI (unless aliens come and give it to us), so spreading better values among humanity is aiming at the right target. Of course, it could be a group of humans who cares zilch about what we have to say, but there's not much we can do about that. Also, the AGI may very likely become a paperclipper and not do what the humans were intending, but that may be fine.

Ruairi wrote:Sorry to brain-vomit again. I don't wanna suffer from status quo bias and stick with RWAS because I like RWAS activism :)

Yep -- makes sense!
User avatar
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: A few dystopic future scenarios

Postby Ruairi on 2013-01-26T17:36:00

"general utilitarians sometimes use a crazily optimistic pain:pleasure exchange rate that IMHO doesn't realize how serious extreme suffering is."

Yea D:

"So even promoting NLU may end up looking a lot like working on WAS and SSSS; it's just that the framework behind it will be broader and more open to other possibilities that arise."

Yea, but I like that nice cuddly, calculating, similar to my values framework!:D!

"Probably, although any future that contains AGI has to retain certain characteristics (the basic AI drives)."

Cool, thanks for the link :)

"Well, what kind of powers do you have in mind?"

Unknown unknown ones :/ ...
User avatar
Ruairi
 
Posts: 392
Joined: Tue May 10, 2011 12:39 pm
Location: Ireland

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2013-01-27T00:36:00

Brian Tomasik wrote:
Ruairi wrote:So maybe sentient simulations aren't such a huge issue? Or maybe they are, but we should be promoting utilitronium-style scenarios rather than warning against suffering simulations?

I'm going to focus on suffering simulations because I'm more worried about preventing suffering, but I acknowledge that both sides could have decent expected returns.

There's a nice overlap in the sense that the resources, space, computing cycles, competitive niche etc. that are occupied by happy sentients cannot at the same time be occupied by suffering sentients. This is also why hedonic enhancement is a good NU idea rather than just a good TU idea.

Some people cannot be effectively convinced that less life is better than more life; focussing their attention on the possibility of better life can still have a palliative effect in the universe.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2013-01-27T01:14:00

Hedonic Treader wrote:There's a nice overlap in the sense that the resources, space, computing cycles, competitive niche etc. that are occupied by happy sentients cannot at the same time be occupied by suffering sentients. This is also why hedonic enhancement is a good NU idea rather than just a good TU idea.

But the kinds of conditions that would allow for happy sentients will also allow for suffering sentients. The same technology that allows for astronomical amounts of computing power will allow for astronomical numbers of suffering computing cycles. Given a fixed capacity for sentient sims, we should totally prefer the happy ones to the unhappy ones. But when we can affect whether this computing capacity comes into existence in the first place, that may be a higher point of leverage.

Hedonic Treader wrote:Some people cannot be effectively convinced that less life is better than more life; focussing their attention on the possibility of better life can still have a palliative effect in the universe.

One example may be wild animals: Many people don't like the idea of getting rid of wildlife entirely, but they might not be averse to gradients-of-bliss wildlife. (Some would be averse to even that. Ned Hettinger: "Respecting nature means respecting the ways in which nature trades values, and such respect includes painful killings for the purpose of life support.")
User avatar
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2013-01-27T01:58:00

Brian Tomasik wrote:Given a fixed capacity for sentient sims, we should totally prefer the happy ones to the unhappy ones. But when we can affect whether this computing capacity comes into existence in the first place, that may be a higher point of leverage.

Yes, that's clear, if it can in fact be affected. Note that this is the argument I made a while back about the negative externalities of being active as an engineer, even if you donate all your money. It's just not so clear to me what the relative effect sizes are. In a way, RWAS or SSSS advocacy is a form of preferring happy beings to unhappy ones; after all, these advocacy types don't necessarily change the expectation value of total sentience.

Hedonic Treader wrote:Some people cannot be effectively convinced that less life is better than more life; focussing their attention on the possibility of better life can still have a palliative effect in the universe.

Brian Tomasik wrote:One example may be wild animals: Many people don't like the idea of getting rid of wildlife entirely, but they might not be averse to gradients-of-bliss wildlife. (Some would be averse to even that. Ned Hettinger: "Respecting nature means respecting the ways in which nature trades values, and such respect includes painful killings for the purpose of life support.")

Yes. My experience is that the same people who are against getting rid of wildlife are also against meddling with its nature - strongly. But there may be a lot of people who are only mildly in favor of wildlife and who could be convinced to be more in favor of happier managed life (e.g. humans and their pets, well-managed zoos, etc.), especially if it is framed as a resource-efficiency (and sometimes speciesist) distinction. Another example would be posthumans after hedonic enhancement vs. posthumans without hedonic enhancement.

Generally, there is probably a category of people who aren't reached by suffering-focussed compassion appeals very much, but who could be reached by visions of more attractive good life that also happens to contain much less suffering. It seems more prudent to reach these people from that angle; if the empathy isn't there, they won't be talked into it, but this doesn't mean we can't influence them at all.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2013-01-27T06:29:00

Hedonic Treader wrote:Note that this is the argument I made a while back about the negative externalities of being active as an engineer, even if you donate all your money.

What I had in mind by ways to "affect whether this computing capacity comes into existence in the first place" was mainly not working to reduce extinction risk. I don't think anything else has much of an effect. Even if there weren't a short-term career replaceability issue with being an engineer (which there is), there would be a long-term one: If humanity survives, it will eventually figure these things out one way or another, and the only difference I make is to hasten that along ever-so-slightly. Hastening things along by a tiny amount is insignificant compared against the (very small but nonzero) chance of locking in your values until the stars die out.

Hedonic Treader wrote:In a way, RWAS or SSSS advocacy is a form of preferring happy beings to unhappy ones; after all, these advocacy types don't necessarily change the expected value of total sentience.

Yes (except, maybe, insofar as these things might give some people pause about working to reduce extinction risk).

Hedonic Treader wrote:Generally, there is probably a category of people who aren't reached by suffering-focussed compassion appeals very much, but who could be reached by visions of more attractive good life that also happens to contain much less suffering.

Yeah. In some cases, the stereotype is kind of true that Singularity people make the Singularity into their religion and want to believe that they'll go to heaven and such, even if it's extremely unlikely. I'm not saying this is true of most SIAI people, or even most Singularity fans. But it's sometimes true, and in any event, wishful thinking can be tempting for all of us.

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2013-01-27T15:36:00

Brian Tomasik wrote:Hastening things along by a tiny amount is insignificant compared against the (very small but nonzero) chance of locking in your values until the stars die out.

True, but there is also a (very small but nonzero) chance that hastening things along bridges the gap between "that other extinction risk hits before foom" and "that other extinction risk doesn't hit before foom". I don't know how big that chance is.

I agree one engineer doesn't shift that much, but it would still be relevant if we didn't think there were low-hanging advocacy fruit with higher leverage. Compassion advocacy has the downside that it relies on people being both benevolent and able to change their minds. Nonzero effect, sure, but still small.

Hedonic Treader wrote:In a way, RWAS or SSSS advocacy is a form of preferring happy beings to unhappy ones; after all, these advocacy types don't necessarily change the expected value of total sentience.

Brian Tomasik wrote:Yes (except, maybe, insofar as these things might give some people pause about working to reduce extinction risk).

Yes, or to create at least some motivation to shift resources to non-sentient patterns (e.g. material consumption).
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2013-01-28T10:18:00

Hedonic Treader wrote:True, but there is also a (very small but nonzero) chance that hastening things along bridges the gap between "that other extinction risk hits before foom" and "that other extinction risk doesn't hit before foom".

It's not clear to me whether hastening tech progress increases or decreases extinction risk. Yes, it could work as you suggested, but it could also work the other way: Faster tech progress means less time to develop safety regulations, less time to figure out the AI control problem, etc. Except for asteroids and supervolcanoes (which are very unlikely relative to other things), the risks that humanity faces are caused by tech progress, and if we have faster tech progress, the risks come at us faster. It's like the question of whether it hurts less to pull a Band-Aid off slowly or quickly.

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2013-01-28T15:05:00

Brian Tomasik wrote:Yes, it could work as you suggested, but it could also work the other way: Faster tech progress means less time to develop safety regulations, less time to figure out the AI control problem, etc.

Those are activities that also require science and engineering manpower. If we make more competent manpower available, those activities become cheaper.

Brian Tomasik wrote:Except for asteroids and supervolcanoes (which are very unlikely relative to other things), the risks that humanity faces are caused by tech progress, and if we have faster tech progress, the risks come at us faster.

You're right that asteroids and supervolcanoes are unlikely. I would say pandemics are somewhat more likely. And there is the possibility of a resource valley, i.e. that fossil fuels become more expensive and it takes additional time to switch to alternatives (even with no further science and engineering progress, normal life support would still burn through resources at unsustainable rates). Thus a delay could have a multiplier effect by pushing us into that valley. Factors like climate change, demographic instability, etc. could also have a delaying effect, but they are probably too slow.

I admit it's a shaky model.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2013-02-06T03:42:00

EDIT: Comment withdrawn due to logical errors and problematic endorsements.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Arepo on 2013-02-06T12:41:00

Haven't seen that blog before. Is it yours?
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: A few dystopic future scenarios

Postby Hedonic Treader on 2013-02-07T00:14:00

Arepo wrote:Haven't seen that blog before. Is it yours?

No, I think it's the blog of felicifia member Hutch.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2013-03-24T06:04:00

If I had to make an estimate now, I would give ~75% probability that space colonization will cause more suffering than it reduces. A friend asked me to explain the components, so here goes.

Consider how space colonization could plausibly reduce suffering. For most of those mechanisms, it seems at least as likely that they will increase suffering. The following sections parallel those from the opening post of this thread.

Spread of wild-animal life

David Pearce coined the phrase "cosmic rescue missions" to refer to the possibility of sending probes to other planets to alleviate the wild extraterrestrial (ET) suffering they contain. This is a nice idea, but there are a few problems.
  • We haven't found any ETs yet, so it's not obvious there are vast numbers of them waiting to be saved from Darwinian misery.
  • The specific kind of conscious suffering known to Earth-bound animal life is probably quite rare. Most likely ETs would be bacteria, plants, etc., and even if they're intelligent, they would likely be intelligent in the way robots are without having emotions of the sort that we care about.
  • Space travel is slow and difficult.
  • It's unclear whether humanity would support such missions. Environmentalists would ask us to leave ET habitats alone. Others wouldn't want to spend the resources to do this unless they planned to mine resources from those planets in a colonization wave.
Contrast this with the possibilities for spreading wild-animal suffering:
  • We could spread life to many planets (e.g., Mars via terraforming, other Earth-like planets via directed panspermia). The number of planets that can support life may be appreciably bigger than the number that already have it. (See the discussion of f_l in the Drake equation, written out just after this list.)
  • We already know that Earth-bound life is sentient, unlike for ETs.
  • Spreading biological life is slow and difficult like rescuing it, but dispersing small life-producing capsules is easier than dispatching Hedonistic Imperative probes or berserker probes.
  • Fortunately, humans might not support the spread of life that much, though some do. For terraforming, there are obvious survival pressures to do it in the near term, but directed panspermia is probably a bigger problem in the long term, and that seems more of a hobbyist enterprise.
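For reference (this is just the standard textbook form, not something from the original post), the Drake equation is

N = R_* \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L

where R_* is the galactic star-formation rate, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per such star, f_l the fraction of those planets on which life actually appears, f_i and f_c the fractions that go on to develop intelligence and detectable technology, and L the lifetime of a detectable civilization. To whatever extent f_l is below 1, there are more planets that could host life than planets that already do, and that is exactly the gap terraforming or directed panspermia would fill.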
Sentient simulations

It may be that biological suffering is a drop in the bucket compared with digital suffering. Maybe there are ETs running sims of nature for science / amusement, or of minds in general for psychological, evolutionary, etc. reasons. Maybe we could trade with them to make sure they don't cause unnecessary suffering to their sims. If empathy is an accident of human evolution, then humans are more likely empathetic than a random ET civilization, so it's possible that there would be room for improvement through this type of trade.

Of course, post-humans themselves might run the same kinds of sims. What's worse: The sims that post-humans run would be much more likely to be sentient than those run by random ETs because post-humans would have a tendency to simulate things closer to themselves in mind-space. They might run ancestor sims for fun, nature sims for aesthetic appreciation, lab sims for science experiments, pet sims for pets. Sadists might run tortured sims. In paperclip-maximizer world, sadists may run sims of paperclips getting destroyed, but that's not a concern to me.

Finally, we don't know if there even are aliens out there to trade with on suffering reduction. We do, however, know that post-humans would likely run such sims if they colonize space.

Suffering subroutines

A similar comparison applies here: humans are likely more empathetic than average, but also more likely to run these kinds of things in general. Maybe the increased likelihood of humans running suffering subroutines is smaller than for sentient simulations, because suffering subroutines are accidental. Still, the point remains that we don't know if there are ETs to trade with.

Lab universes

Many humans seem interested in creating lab universes, which would be infinitely bad (depending on your view of Carl's point about measure in an infinite multiverse). Of course, humans could also prevent ETs from creating them, but we don't know if such ETs exist or can be reasoned with.

Savage ideologies

Here it's once again a contrast between humans being more likely empathetic vs. humans being more likely to simulate things we would regard as bad. The latter concern seems to significantly dominate here. Simulating paperclips melting in a lake of fire is probably ok by me.

Torture as warfare

Same kinds of tradeoffs as for savage ideologies.

What about paperclippers?

Above I was largely assuming a human-oriented civilization with values that we recognize. But what if, as seems mildly likely, human colonization accidentally takes the form of a paperclip maximizer? Wouldn't that be a good thing because it would eliminate wild ET suffering as the paperclipper spread throughout the galaxy, without causing any additional suffering?

Maybe, but if the paperclip maximizer is actually generally intelligent, then it won't stop at tiling the solar system with paperclips. It will have the basic AI drives and will want to do science, learn about other minds via simulations, engage in conflict, possibly run suffering subroutines, and of course, create lab universes that will themselves be full of paperclips. It's not obvious whether a paperclipper is better or worse than a "friendly AI."

Evidential/timeless decision theory

We've seen that the main way in which human space colonization could plausibly reduce more suffering than it creates would be if it allowed us to prevent ETs from doing things we don't like. However, if you're an evidential or timeless decision theorist, an additional mechanism by which we might affect ETs' choices is through our own choices. If our minds work in similar enough ways to ETs', then if we choose not to colonize, that makes it more likely / timelessly causes them also not to colonize, which means that they won't cause astronomical suffering either. (See, for instance, pp. 14-15 of Paul Almond's article on evidential decision theory.)

It's also true that if we would do net good by policing rogue ETs, then our mind-kin might also do net good in that way, in which case failing to colonize would be unfortunate. But while many ETs may be similar to us in failing to colonize space, fewer would probably be similar to us at the level of detail of colonizing space and carrying a big stick with respect to galactic suffering. So it seems plausible that the evidential/timeless considerations asymmetrically multiply the possible badness of colonization more than its possible goodness.
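A toy sketch may make the asymmetry easier to see. Every number below (the count of correlated civilizations, the fraction that would also police, the suffering units) is a made-up illustrative parameter of my own, not an estimate from this thread; only the structure matters.

# Toy illustration of the EDT-style asymmetry described above: our choice is
# treated as evidence about what correlated ET civilizations choose.
# All parameters are hypothetical and chosen purely for exposition.

N_CORRELATED = 1000  # hypothetical number of civilizations whose decision
                     # procedures resemble ours at the colonize/don't level

# Hypothetical fraction of those that would also resemble us at the finer
# level of colonizing *and* policing galactic suffering.
P_ALSO_POLICE = 0.1

SUFFERING_PER_COLONIZER = 100.0         # arbitrary units
SUFFERING_PREVENTED_PER_POLICER = 50.0  # arbitrary units

def expected_suffering(we_colonize: bool) -> float:
    """EDT-style expectation: our choice is mirrored by the correlated civilizations."""
    if not we_colonize:
        return 0.0  # in this toy model, they also refrain
    colonizers = N_CORRELATED
    policers = N_CORRELATED * P_ALSO_POLICE
    return (colonizers * SUFFERING_PER_COLONIZER
            - policers * SUFFERING_PREVENTED_PER_POLICER)

print(expected_suffering(True))   # 95000.0 -- harm scales with all correlated colonizers
print(expected_suffering(False))  # 0.0     -- refraining is mirrored too

The numbers are arbitrary; the structural point is just that the correlation multiplies the harm term by all correlated colonizers but multiplies the benefit term only by the (smaller) subset that would also police.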

Black swans

It seems pretty likely to me that suffering in the future will be dominated by something totally unexpected. This could be a new discovery in physics, neuroscience, or even philosophy more generally. Some make the argument that because we know so very little now, it's better for humans to stick around for the option value: If they later realize it's bad to spread, they can stop, but if they realize they should, they can proceed and reduce suffering in some novel way that we haven't anticipated.

Of course, the problem with the "option value" argument is that it assumes future humans do the right thing, when in fact, based on examples of speculations we can imagine now, it seems future humans would probably do the wrong thing most of the time. For instance, if it's possible to create lab universes, it's easy to imagine that many post-humans would want to do so, possibly in order to spread happiness, beauty, and paperclips to additional realms, or possibly just for fun, curiosity, or scientific value. Faced with a new discovery of obscene amounts of computing power somewhere, most humans would use it to run oodles more minds, some nontrivial fraction of which might suffer terribly. In general, most sources of immense power are double-edged swords that can create more happiness and more suffering, and the typical human impulse to promote life/consciousness rather than to remove them suggests that negative and negative-leaning utilitarians are on the losing side.

Why not wait a little longer just to be sure that a superintelligent post-human civilization is net bad in expected value? Certainly we should research the question in greater depth, but we also can't delay acting upon what we know now, because within a few decades, our actions might come too late. Tempering enthusiasm for a technological future needs to come soon or else potentially never.

Re: A few dystopic future scenarios

Postby Brian Tomasik on 2013-07-29T07:47:00

I've had updates recently to some of my views on potential futures.

I now assign a higher probability to the claim that consciousness is convergent among intelligent civilizations. This is based on (a) suggestions of convergent consciousness among animals on Earth and (b) the general principle that consciousness seems to be useful for planning, manipulating images, self-modeling, etc. But maybe this reflects the paucity of my human imagination in conceiving of ways to be intelligent without consciousness. :)

It should be noted that even if ETs are conscious, they may not have emotions that we care about. Emotions seem to be modes of operation that guide an organism to do certain things at certain times (eat, sleep, have sex, run away, cry, etc.). They're also related to reinforcement learning. It seems neither of these would be necessary in an AI that's not an agent in quite the way animals are. For instance, imagine an AI that computes expected values for actions without "feeling" the rewards and punishments directly. The updates to its model could happen through model-based computations rather than standard reinforcement learning. Of course, it may be parochialism that leads me to say these more abstract model updates are not also a form of reward/punishment, but it's not clear the AI would have the same kinds of reactions to them that humans have to their reward/punishment updates.
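To make the contrast concrete, here is a minimal sketch of my own (nothing here is specified in the post; the action set, learning rate, and world model are invented for the example) of the difference between an agent whose action values are nudged by an experienced reward signal and one that simply computes expected values from a declarative world model.

# Minimal illustrative sketch: a model-free learner driven by a scalar reward
# signal vs. an agent that plans by evaluating an explicit world model and
# never "experiences" rewards at all.

ACTIONS = ["a", "b"]

# --- Model-free: action values nudged by experienced rewards ---
q = {action: 0.0 for action in ACTIONS}
alpha = 0.1  # learning rate

def model_free_update(action: str, reward: float) -> None:
    # The reward prediction error drives the update -- loosely analogous
    # to a felt reinforcer in animals.
    q[action] += alpha * (reward - q[action])

# --- Model-based: expected values computed from a declarative model ---
# The model maps each action to (probability, outcome_value) pairs; the agent
# evaluates the model directly rather than learning from reward signals.
world_model = {
    "a": [(0.8, 1.0), (0.2, -1.0)],
    "b": [(0.5, 2.0), (0.5, -2.0)],
}

def model_based_choice() -> str:
    expected = {action: sum(p * v for p, v in outcomes)
                for action, outcomes in world_model.items()}
    return max(expected, key=expected.get)

model_free_update("a", reward=1.0)
print(q)                     # {'a': 0.1, 'b': 0.0}
print(model_based_choice())  # 'a' (expected value roughly 0.6 vs. 0.0)

Whether the second agent's model updates would themselves count as a kind of reward/punishment is exactly the open question raised above; the sketch only shows that the two architectures need not share the same machinery.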

Anyway, if conscious emotions are convergent among ETs, that changes some of the analysis. Wild ET life would be more likely to be suffering, and the simulations that ETs run would be more likely to contain suffering as well. And ETs would have their own black swans where they'd likely make the wrong choices, just like Earth-based life probably would.

I think it's likely that humans are more empathetic than the average conscious civilization because (a) we seem much more empathetic than the average animal on Earth, probably in part due to parental impulses and in part due to trade, though some of these factors would presumably also hold for any technologically advanced civilization, and (b) selection bias implies that we'll agree with our own society's morals more than those of a random other society, because these are the values that we were raised with and that our biology impels us toward.

On balance, it's not obvious how our assessment changes if we think ETs are likely conscious. Also remember that we don't know if ETs exist in accessible regions of the universe.

----

It's worth remembering that the factors I listed don't seem to matter much in the event that human values as we know them lose control of the future, which seems a decently likely outcome.