Doomsday Reasoning


Doomsday Reasoning

Postby Arepo on 2011-06-01T14:13:00

1) The life expectancy of newborn babies is horrifically low
2) Conversely, decagenarians suffering from the final stages of a fatal illness are very unlikely to die any time soon
3) If we were to leave Earth and discover other spacefaring civilisations, but none that had emerged more than a few millennia before us, then no matter how widespread they were we could confidently conclude that all such civilisations somehow lose the ability to travel through space within a few millennia, presumably due to some technology making the entire universe incapable of supporting life

I had a dream about #3 the other day. Wonder if it was trying to tell me something...
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Doomsday Reasoning

Postby tog on 2011-07-25T10:16:00

Not sure I quite get the connection between 1, 2 and 3, Sasha!
User avatar
tog
 
Posts: 76
Joined: Thu Nov 25, 2010 10:58 am

Re: Doomsday Reasoning

Postby Arepo on 2011-07-25T13:19:00

I wouldn't take it too seriously - I was in an anti-Bostrom mood when I wrote it, and wasn't exactly aiming for a substantial argument. You're familiar with the Doomsday Argument? To wit: if you take a random point on a finite line, its a priori likelihood of lying at or below x% of the distance from whichever end you pick is x%. Bostrom applies this to human existence by using a suspect 'random' starting point, an ill-defined line, and, most bizarrely, by all but ignoring the 'a priori' caveat.

Using similarly lax reasoning, I'm claiming the above conclusions hold (in 1, the line is a person's lifespan, the point is near the beginning of the familiar ageing process; in 2 the line is the same, the point is near the end of the familiar ageing process; in 3, the line is the time in which the universe hosts spacefaring civilisations, the point is where they've just begun to emerge, which we typically think of as the point at which short- and medium-term existential risk massively drops).
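To make the 'random point on a finite line' premise concrete, here's a quick numerical sanity check - the percentage and trial count are chosen arbitrarily, it's just a sketch:

```python
import random

def prob_in_first_x_percent(x, trials=100_000):
    """Estimate the chance that a uniformly random point on a finite
    line lies within the first x% of its length."""
    hits = sum(random.random() <= x / 100 for _ in range(trials))
    return hits / trials

# Under the uniform 'a priori' assumption this comes out close to x%:
prob_in_first_x_percent(5)  # roughly 0.05
```

The whole dispute is over whether 'you, now' counts as such a uniformly random point once you condition on everything else you know.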

Re: Doomsday Reasoning

Postby DanielLC on 2011-07-25T16:18:00

I did it with an uninformed prior and got the same answer. As for the random starting point: you don't know who you are a priori, and "random" just means you don't know the answer. The line is just anything you can't eliminate a priori - that is, anything that can think. I admit he messed up on that one: anything that can think isn't just humans.
Consequentialism: The belief that doing the right thing makes the world a better place.

DanielLC
 
Posts: 703
Joined: Fri Oct 10, 2008 4:29 pm

Re: Doomsday Reasoning

Postby Arepo on 2011-07-25T17:00:00

DanielLC wrote:I did it with an uninformed prior and got the same answer.


Not sure what you mean. This seems equivalent to a random point, given precisely the point you make that it just implies lack of knowledge. My main complaint about the DA is we have *huge* amounts of relevant knowledge, which it completely ignores.

As for the random starting point, you don't know who you are a priori


But you're not selecting yourself a priori. You're doing it with a lot of contextual knowledge.

The line is just anything you can't eliminate a priori. That is, anything that can think. I admit he messed up on that one. Anything that can think isn't just humans.


Well sure, that's *a* line. I guess it's the one most relevant to utilitarians (or rather, anything that can feel emotion) - is that what you meant?

Re: Doomsday Reasoning

Postby DanielLC on 2011-07-25T20:47:00

My main complaint about the DA is we have *huge* amounts of relevant knowledge, which it completely ignores.


People tend to notice evidence pointing one way more than evidence pointing another. There is so much evidence that huge amounts can be found pointing in any given direction. As such, if it looks like there's a 99.999% chance that we'll survive past whenever, there's a significant chance that it just looks that way because of your own bias. If you take this into account, you get something more like 99%, if that. Given extreme enough values, the Doomsday Argument can easily get probabilities well below 1%. As such, pretty much no matter how much evidence you think you have beforehand, you can conclude that we're probably at least 1% of the way through intelligent life.

But you're not selecting yourself a priori. You're doing it with a lot of contextual knowledge.


That's what you'd do if you want to find out who you are. Once you do, you can use this knowledge to update other things.

For example, imagine there's a coin, and you want to know if it's a trick coin that always lands on heads. You flip it, and it lands on heads. You have contextual knowledge to figure out that it landed on heads, but the prior probabilities still matter. Namely: it's twice as likely to land on heads given that it's a trick coin, so now the odds that it's a trick coin have doubled.
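In odds form that update is a single multiplication - a minimal sketch, with the even prior odds invented for illustration:

```python
def update_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# A trick coin lands heads with probability 1, a fair coin with 0.5,
# so one observed head carries a likelihood ratio of 1 / 0.5 = 2.
# Starting from hypothetical even (1:1) odds, 'trick coin' doubles:
update_odds(1.0, 1 / 0.5)  # -> 2.0
```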

I guess it's the one most relevant to utilitarians (or rather, anything that can feel emotion) - is that what you meant?


No. This has nothing to do with utilitarianism.

You have enough evidence to conclude that you are you, but how much did it actually take? It must have taken some. You can't just accept that it's true.

If you know that there are only two people, it only takes one bit of evidence to figure out which one you are. If there are 65,536 people, it will take sixteen bits. As such, once you answer the question of what the probability of being "you" is, given only the universe (and it's one that contains said "you"), you've effectively answered the question of how many "people" are in it.
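Those bit counts are just base-2 logarithms of the population, so the correspondence is a one-liner:

```python
import math

def bits_to_identify(n_people):
    """Bits of evidence needed to single out one individual among n_people."""
    return math.log2(n_people)

bits_to_identify(2)       # -> 1.0
bits_to_identify(65_536)  # -> 16.0, since 2**16 == 65,536
```

Running it in reverse (2**bits) is what turns "how much evidence did it take to locate me?" into an estimate of how many observers there are.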

For what it's worth, I've also tried it with other things, like assuming that all you know is that there exists such a person, and not using the fact that you are them as evidence. You have to change the prior to get reasonable results (if you use the logarithmic prior, you end up concluding that there's an obscenely huge number of people, just on the basis that the more there are, the more likely it is that there's one like that), but once you do, you get the same answer.

Re: Doomsday Reasoning

Postby Arepo on 2011-07-26T12:02:00

DanielLC wrote:People tend to notice evidence pointing one way more than evidence pointing another. There is so much evidence that huge amounts can be found pointing in any given direction. As such, if it looks like there's a 99.999% chance that we'll survive past whenever, there's a significant chance that it just looks that way because of your own bias. If you take this into account, you get something more like 99%, if that. Given extreme enough values, the Doomsday Argument can easily get probabilities well below 1%. As such, pretty much no matter how much evidence you think you have beforehand, you can conclude that we're probably at least 1% of the way through intelligent life.


Having cognitive biases doesn't mean it's impossible to come to strong conclusions; it just means we should be more sceptical of them when we do. It's perfectly conceivable that the evidence could show persuasively enough that we're probably not at least 1% of the way down the track. I happen to think it doesn't (or rather, that it doesn't show that for the existence of our descendants), but either way this seems to have very little to do with Doomsday reasoning, which is just another piece of evidence to add to the mountains we already have – a very small datum in a very large ocean.

In any case, this argument can be applied to any conclusion – reductio ad something resembling absurdum. We're overly confident in our understanding of maths, for example, and overly confident in our belief that maths/probability even works; but if there's any probability that it doesn't, then in theory every analysis immediately breaks. I actually think this is true, but for the purpose of discussions like this, we just have to assume that we can have very high confidence in our conclusions when the evidence seems to merit it.

As such, once you answer the question of what the probability of being "you" is, given only the universe (and it's one that contains said "you"), you've effectively answered the question of how many "people" are in it.


It sounds to me like that’s because it’s a different way of phrasing the same question (ignoring this stuff about ‘being me’, which is nonsensical to me – at any given point, ‘I’ is just a name for the incidental self-awareness of a particular subset of particles, not significant as an entity or even an entity at all from the perspective of the building blocks of the universe). Knowing what probability there is of picking one of N individuals requires knowing what the value of N is, which is exactly the question the DA purports to be answering.

I don’t know if I’ve properly understood your points. I often find your writing style difficult to follow (which is a shame, since nine times out of ten when I do manage to, I end up agreeing with you)…

Re: Doomsday Reasoning

Postby DanielLC on 2011-07-26T18:14:00

In any case, this argument can be applied to any conclusion – reductio ad something resembling absurdum. We're overly confident in our understanding of maths, for example, and overly confident in our belief that maths/probability even works


True. This certainly applies to the Doomsday Argument itself. I found multiple errors in my version of it after posting it on Less Wrong, including not realizing that the existence of other worlds changes it. The thing is, the Doomsday Argument is based on simple principles, so if someone finds a flaw, it becomes obvious. Our sense of how powerful existential dangers such as unfriendly AI, gray goo, biological warfare, etc. are is much more likely to be flawed. We know practically nothing about them.

Basically, we can be orders of magnitude more certain about what's essentially math than about dangers we've never experienced.

It’s perfectly conceivable that the evidence could show persuasively enough that we’re probably not at least 1% of the way down the track.


The expected evidence is 1%. It's conceivable, but as you get to higher and higher populations, the expected evidence goes down. At some point, it's simply more likely that you misunderstood the evidence.

ignoring this stuff about ‘being me’, which is nonsensical to me – at any given point, ‘I’ is just a name for the incidental self-awareness of a particular subset of particles, not significant as an entity or even an entity at all from the perspective of the building blocks of the universe


I'm not saying it is significant which incidental self-awareness it is. I'm just saying that it has some uncertainty.

In any case, as I've already said, taking into account only that said incidental self-awareness actually exists gives either pretty much the same answer, or the conclusion that the world as we know it is a hallucination and you really aren't that early, depending on your prior.

Actually, thinking about it more, you'd conclude that there's an infinite number of planets, and the lifetime of yours is what the normal doomsday argument predicts.

Re: Doomsday Reasoning

Postby Arepo on 2011-07-26T21:05:00

DanielLC wrote:
Basically, we can be orders of magnitude more certain about what's essentially math than about dangers we've never experienced.


We can be far more certain about how much of its apparent evidential weight it retains, but that doesn't mean it had much to begin with. I have no idea what the probability is that the sun will set between 15 and 16 hours after it rises in Seattle tomorrow. But if you ask me 30 seconds after it's risen, I'll tell you I think it's massively more likely to be of that order than within 3000 seconds.

Re: Doomsday Reasoning

Postby DanielLC on 2011-07-26T23:13:00

We can be far more certain about how much of its apparent evidential weight it retains, but that doesn't mean it had much to begin with.


It can have a lot. Do you want to check my math?

I have no idea what the probability is that the sun will set between 15 and 16 hours after it rises in Seattle tomorrow. But if you ask me 30 seconds after it's risen, I'll tell you I think it's massively more likely to be of that order than within 3000 seconds.


You have a prior, and it's not the logarithmic prior. You know it will be somewhere in the vicinity of 12 hours. You know it will definitely be less than 24 hours. You know all of this because you have a model of the day/night cycle that you have an extremely high amount of evidence for. There are certainly things you don't know. You might not know the latitude of Seattle, or the tilt of the Earth, but even those you have far better than an uninformed prior.

You have a lot of information here. You have a pretty good idea of what your uncertainty is. That's not true of when the world is going to end. You might be able to list a few things that could do it, but you don't know how feasible they are.

Re: Doomsday Reasoning

Postby Arepo on 2011-07-27T11:07:00

It can have a lot. Do you want to check my math?

Yeah, please post it. I doubt I’ll be able to check it myself, but I have a few friends who often complain that advocates of the DA never seem to lay it out as a complete mathematical argument.

You know all of this because you have a model of the day/night cycle that you have an extremely high amount of evidence for. There are certainly things you don't know. You might not know the latitude of Seattle, or the tilt of the Earth, but even those you have far better than an uninformed prior.


This is all true of the possibility of the end of sentience. The problem isn’t too little information, but too much of it. Where it's relevant at all, we need to be looking for ways to analyse it, not ways to ignore it.

Re: Doomsday Reasoning

Postby DanielLC on 2011-07-28T01:20:00

My math is in the article I linked to earlier: Bayesian Doomsday Argument. I'd have to agree with your friends. I've never seen this done well.

In particular, I've never seen anyone take into account the fact that there are multiple planets, each with its own population. I didn't write it very well, but it works out that if you want to know the total population of the lineage of the average planet, you have to multiply by the portion of planets that have made it this far (100 billion people), and renormalize. I'm not actually sure how big a difference that makes. I know it's noticeable, and it's not going to be enough to destroy the argument. I'm not sure what prior to use.
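For what it's worth, the single-planet core of the calculation can be written out directly. This is a common reconstruction of the Bayesian Doomsday math, not necessarily the version from the linked article; the 1/N prior and the n_max cutoff are assumptions made here for illustration:

```python
def doomsday_posterior(n_observed, n_max=10**5):
    """Posterior over the total population N, given that you are individual
    number n_observed, a 1/N ('logarithmic') prior over N, and a uniform
    1/N self-sampling likelihood. Posterior weight is proportional to 1/N**2."""
    weights = {N: 1 / N**2 for N in range(n_observed, n_max + 1)}
    z = sum(weights.values())
    return {N: w / z for N, w in weights.items()}

def median_total(posterior):
    """Smallest N by which half the posterior mass has accumulated."""
    acc = 0.0
    for N in sorted(posterior):
        acc += posterior[N]
        if acc >= 0.5:
            return N

# Under these assumptions the posterior median lands near 2 * n_observed,
# i.e. you expect to be about halfway through the total population:
median_total(doomsday_posterior(100))  # close to 200
```

The multi-planet correction mentioned above would reweight this posterior by each planet's survival fraction before renormalizing.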

The problem isn’t too little information, but too much of it.


We have not sufficiently analyzed the information. Some say we'll almost definitely last until the heat death of the universe. Others say we'll almost definitely die within a few generations, once the creation of nuke-level super-weapons becomes easy. To the extent that we haven't analyzed it, it follows the law of conservation of expected evidence: the expected value of our belief after we analyze the evidence is exactly what it is before. I can confidently predict that any evidence I get about the lifetime of humanity will point towards it being somewhere in the range that I currently find plausible.
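That law is easy to verify numerically - a toy sketch with invented probabilities, where H might stand for "humanity survives" and the findings are whatever evidence the analysis might turn up:

```python
def expected_posterior(p_h, findings):
    """Conservation of expected evidence: the posterior, averaged over all
    possible findings (each weighted by its probability), equals the prior.
    `findings` is a list of (P(finding | H), P(finding | not-H)) pairs;
    the findings must be exhaustive, so each column sums to 1."""
    total = 0.0
    for p_f_h, p_f_nh in findings:
        p_f = p_f_h * p_h + p_f_nh * (1 - p_h)  # P(finding)
        posterior = p_f_h * p_h / p_f           # P(H | finding), by Bayes
        total += p_f * posterior
    return total

# Invented numbers: prior P(H) = 0.3, two exhaustive possible findings.
# The weighted average of the two posteriors recovers the prior exactly:
expected_posterior(0.3, [(0.9, 0.2), (0.1, 0.8)])  # -> 0.3
```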

