Doomsday dampening of future fanaticism?


Postby Brian Tomasik on 2013-02-02T08:12:00

The following topic has come up before, but I'm ashamed to say that either I haven't thought it through in depth or, if I did on a prior occasion, I've forgotten what my conclusion was. :) So maybe you all can help me out.

The idea

Define "future fanaticism" as the tendency to assume that influencing the future will always matter more than doing short-sighted altruism in the present. This seems like a reasonable position to adopt for risk-neutral consequentialists.

But then add in anthropic reasoning: If there are really going to be so many minds in the future for us to affect (e.g., by making sure they don't suffer as much), why would we find ourselves in such an early and influential position in history? This is what I call "Doomsday dampening" of future fanaticism.

Simple example: Before doing anthropics, you think there's
(a) a 50% chance that self-aware life will go extinct very soon and
(b) a 50% chance that there will be 1000 times as many minds as have yet existed.
For convenience, group observer-moments into chunks: one chunk for everything so far, and 999 more chunks of the same size for all future observer-moments if (b) happens, for 1000 chunks in total. Given (a), the probability of being in the first chunk is 1. Given (b), it's 1/1000 = 0.001. So the posterior probabilities for (a) and (b) are roughly 0.999 and 0.001, respectively.
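
Here's the same update as a quick calculation (a minimal Python sketch that just restates the numbers above):

    prior_a, prior_b = 0.5, 0.5           # priors on extinction-soon (a) and big-future (b)
    p_first_given_a = 1.0                 # under (a), the one existing chunk is all there is
    p_first_given_b = 1.0 / 1000          # under (b), you're 1 of 1000 equal chunks
    evidence = prior_a * p_first_given_a + prior_b * p_first_given_b
    posterior_a = prior_a * p_first_given_a / evidence   # ~0.999
    posterior_b = prior_b * p_first_given_b / evidence   # ~0.001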

Without anthropics, it had seemed that if (b) were true, you might be able to influence a whole bunch of future mind-moment chunks, so even if your influence over the future were small (say, 5% of your influence over the present), working on future stuff would have been worthwhile, because 1000 * 5% = 50, which is still more than the one observer-moment chunk you'd be affecting if (a) were true. But if we use the Doomsday probabilities, the expected impacts become 1 for working on stuff in the present versus 0.001 * 50 = 0.05 for working on stuff in the future.
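
And the corresponding expected-impact comparison, in the same sketch style (the 5% figure and the one-chunk unit of impact are just the illustrative assumptions from the paragraph above):

    posterior_b = 0.001        # Doomsday-updated probability of the big future, from above
    future_chunks = 1000       # treating the 999 future chunks as ~1000, as in the text
    relative_influence = 0.05  # assumed influence over the future relative to the present

    present_value = 1                                          # one chunk, affected for sure
    naive_future_value = future_chunks * relative_influence    # = 50, ignoring anthropics
    dampened_future_value = posterior_b * naive_future_value   # = 0.05, after the update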

This point doesn't seem obviously wrong. I believe DanielLC endorses something like the view I described. The idea is similar to Robin Hanson's resolution of Pascal's mugging in the original Overcoming Bias post on the subject. I think the idea was that if the mugger actually simulated 3^^^^3 people, it would be extremely unlikely you'd find yourself being the single person who influences the real mugger. There would be astronomically many more fake muggers in those sims who would do fake muggings, and you'd almost certainly be confronting one of them, in a way that scales with the number of people to be simulated. (Even if the simulated people were separated from each other and so couldn't actually mug each other, you would still have people dreaming about muggers, insane people thinking they're being mugged, etc., and these still scale with the number of people simulated.)
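
To see why the scaling washes out the payoff, here's a very rough sketch; the prior and the rate of mugging-like experiences are made-up numbers purely for illustration, and N merely stands in for 3^^^^3, which is far too large to compute:

    N = 10 ** 40                       # stand-in for 3^^^^3; only its hugeness matters
    prior_mugger_honest = 1e-10        # made-up prior that the mugger really simulates N people
    fakes_per_simulated_person = 1e-6  # made-up rate of dreamed/imagined/simulated muggings

    # If the simulations happen, mugging-like experiences number ~N * fakes_per_simulated_person,
    # so the chance that *your* experience is the one facing the real mugger is roughly
    p_pivotal = 1 / (N * fakes_per_simulated_person)

    # Expected number of people affected by giving in: the factor of N cancels out
    expected_impact = prior_mugger_honest * p_pivotal * N
    # = prior_mugger_honest / fakes_per_simulated_person, no matter how large N gets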

Ways out of the argument

Some possibilities:
  1. Minds not in our reference class. On the Pascal's-mugging thread, Michael Vassar proposed 3^^^^^3 pigs instead of 3^^^^3 humans. I think the idea was that the pigs would not be in our reference class (why not, though?), so we wouldn't update against actually being the single human whose decision would influence such huge numbers of pigs. But if pigs are still sentient, then we can still affect the well-being of gigantic numbers of them. For future fanaticism, replace "pigs" with "suffering wild animals in the future whose existence we'd like to prevent," "suffering subroutines," etc.
  2. Throw out reference classes. Some have suggested getting rid of reference classes in general, because reference classes don't correspond to anything physical. They're arbitrary and confusing. Without reference classes, we wouldn't necessarily do the Doomsday update.
  3. Model uncertainty. Even if the above argument goes through, we know so little about anthropics that we should reserve decent probability that we're wrong, and if we are, one default fallback position is to rely on the ostensible fact that we are indeed at an influential point in history.
  4. Adopt SIA. SIA can cancel the Doomsday argument, because the prior probability of case (b) is then 1000 times as much as case (a), so that even after updating, the probabilities remain on par (sketched just below). I'm not sure if I buy SIA, but it does seem plausible, especially if modal realism is true, because in a modal-realism multiverse, the more intuitive SSA implies SIA.
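
Here's the SIA cancellation from item 4 worked through with the numbers from the earlier example (a minimal sketch, with observer-moment chunks playing the role of observer counts):

    prior_a, prior_b = 0.5, 0.5
    chunks_a, chunks_b = 1, 1000          # observer-moment chunks under each hypothesis

    # SIA: weight each hypothesis by how many observers it contains, then renormalize
    sia_a, sia_b = prior_a * chunks_a, prior_b * chunks_b
    sia_a, sia_b = sia_a / (sia_a + sia_b), sia_b / (sia_a + sia_b)   # ~0.001 vs. ~0.999

    # Then the Doomsday-style update on finding yourself in the first chunk
    like_a, like_b = 1.0, 1.0 / chunks_b
    post_a, post_b = sia_a * like_a, sia_b * like_b
    post_a, post_b = post_a / (post_a + post_b), post_b / (post_a + post_b)
    # post_a ~= post_b ~= 0.5: the two factors of 1000 cancel, restoring the original 50/50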

Solipsism?

I was wondering: Suppose you buy the argument against future fanaticism. Could the same argument be made for solipsism? For example, say the probability of solipsism is more than 1 in 7 billion. Then, given that you find yourself being you, the posterior for solipsism is higher than the posterior for there actually being 7 billion people on the planet. I think what goes wrong here is that "solipsism" as a hypothesis isn't specific enough: It doesn't say which among the 7 billion possible people you actually are. So even given solipsism, there's a 1 in 7 billion chance that you'll be the specific person that you are rather than someone else. In this case, there's no update in favor of solipsism.
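
To make the cancellation explicit, here's a small sketch with a made-up prior for solipsism:

    N = 7_000_000_000            # people on the planet
    prior_solipsism = 1e-6       # made-up prior, comfortably above 1/N
    prior_many = 1 - prior_solipsism

    # Underspecified solipsism: "only one mind exists, and it's you" with probability 1
    naive_solipsism = prior_solipsism * 1.0
    naive_many = prior_many * (1.0 / N)   # chance of being this particular person among N
    # naive_solipsism >> naive_many: a big spurious update toward solipsism

    # Fully specified solipsism: it must also say *which* of the N possible people exists
    specific_solipsism = prior_solipsism * (1.0 / N)
    specific_many = prior_many * (1.0 / N)
    # The 1/N factors cancel, so the posterior odds just equal the prior odds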

The same update cancellation doesn't work for the Doomsday argument, because the first one-thousandth of humanity in scenario (b) really is special in a way that you, out of the 7 billion people in the world, are not. But maybe a similar intuition could be applied to motivate SIA: Among the 1000 possible observer-moment chunks, why would it be this one rather than another that's the only one to exist before extinction? This last idea would need more elaboration to work; for example, would it only apply to ensembles of all possible universes?
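
For what it's worth, the elaboration might parallel the solipsism cancellation like this; this is only a guess at how the idea could be filled in, not a worked-out argument:

    chunks = 1000
    prior_a, prior_b = 0.5, 0.5

    # If (a) also had to specify *which* of the 1000 possible chunks is the lone
    # pre-extinction one, it would pick up the same 1/1000 factor as (b):
    like_a = 1.0 / chunks    # this particular chunk is the one that exists under (a)
    like_b = 1.0 / chunks    # you are this particular chunk out of 1000 under (b)
    post_a = prior_a * like_a / (prior_a * like_a + prior_b * like_b)   # = 0.5
    # The factors cancel and the Doomsday update disappears, mimicking SIA.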

My current stance

I think the combination of objections 1-4 seems fairly compelling, perhaps especially the SIA counterargument, since I give decent (~50%?) probability to SIA.

Re: Doomsday dampening of future fanaticism?

Postby CarlShulman on 2013-02-04T00:05:00

Brian Tomasik wrote:in a modal-realism multiverse, the more intuitive SSA implies SIA.

This is false. Modal realism only says all logically possible worlds exist, not all worlds which you think may be logically possible. SIA applies to your uncertainty about which worlds are logically possible.



Re: Doomsday dampening of future fanaticism?

Postby Brian Tomasik on 2013-03-11T15:36:00

CarlShulman wrote:SIA applies to your uncertainty about which worlds are logically possible.

Thanks, Carl! Could you explain this more? My naive understanding of SIA is that it applies to your uncertainty about which observer you are within all the worlds that exist, not about which worlds exist in the first place.

Re: Doomsday dampening of future fanaticism?

Postby Humphrey Schneider on 2013-03-24T21:16:00

Brian Tomasik wrote:Solipsism?

I was wondering: Suppose you buy the argument against future fanaticism. Could the same argument be made for solipsism? For example, say the probability of solipsism is more than 1 in 7 billion. Then, given that you find yourself being you, the posterior for solipsism is higher than the posterior for there actually being 7 billion people on the planet. I think what goes wrong here is that "solipsism" as a hypothesis isn't specific enough: It doesn't say which among the 7 billion possible people you actually are. So even given solipsism, there's a 1 in 7 billion chance that you'll be the specific person that you are rather than someone else. In this case, there's no update in favor of solipsism.


I don't understand this. How high is the posterior for there actually being 7 billion people on the planet? I don't know why you talk about a certain probability of being you. If you were someone else entirely, you would also wonder why you were you and nobody else. I think every self-aware mind would do this. What sense do these thoughts make if you do not believe in personal identity? There's just a feeling of self-awareness existing. There might be some more feelings of self-awareness existing right now. But what if the question "Why am I me?" were nonsense? It's hard to imagine, but I suppose it could be right nevertheless.
"The idea of a necessary evil is necessarily the root of all evil"


