Evidential decision theory and the doomsday argument


Evidential decision theory and the doomsday argument

Postby Brian Tomasik on 2009-02-18T20:44:00

The following may be relevant to those interested in existential risk. My point seems straightforward enough that it's probably not original, but I haven't seen it mentioned before.

I usually think of the doomsday argument as an unfortunate idea but not something that really affects my actions. It's too bad that, if the argument goes through, we have evidence against the long-term survival of humanity, but we just have to accept that fact and do the best we can to prevent human extinction against what the argument claims are tough odds.

That attitude approaches the problem from the standpoint of "causal decision theory" (CDT): evaluating actions in terms of the causal impact they will have on making things better. However, there's another approach, called "evidential decision theory" (EDT), which says you should do the action that, if chosen, would give you the best expectations about the outcome. While I find it far less intuitive than CDT, EDT seems to give the right answers in certain situations and so may be worth considering.

Applied to the doomsday argument, this suggests that existential-risk reducers (I'll call them "ERRs") might engage in actions that ameliorate the improbability of their being born so early in human history. For instance, if they take the relevant reference class to be the set of all lives of conscious organisms regardless of length, then ERRs could aim to extend lifespans while reducing birth rates. If the reference class includes biological humans but not post-humans, ERRs could hasten the transition to mind uploading. If the reference class is based on numbers of conscious minds, ERRs could lobby for fewer, larger post-human brains rather than dispersed post-human brains. (However, this outcome might itself be undesirable, if the utility that a single mind can experience is limited.) Obviously, the question of what reference class to use is crucial.

Even if these suggestions make theoretical sense, they aren't obviously cost-effective. It's arguably much more efficient to work on averting global catastrophes directly than to change patterns of human and post-human demographics in such a way that we'll be sufficiently less surprised by our early existence that we significantly lower our estimate of extinction risk. Is this true across the board? Or are there any purely evidential risk-reduction approaches that might be cost-effective?
User avatar
Brian Tomasik
 
Posts: 1130
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: Evidential decision theory and the doomsday argument

Postby Arepo on 2009-02-19T23:11:00

Hi Alan, glad to see you back here. I started reading the piece on EDT and gave up since it was using unkeyed symbols (one being Sigma) whose meaning I'm uncertain of. I'll try and have another go at some point (if I commit to a time it'll make a liar of me, so let's assume not in the foreseeable future).

Meanwhile, I suspect this thread is going to degenerate into a discussion of the doomsday argument. Mainly because I'm going to degenerate it:

I see no reason to take the DA seriously. While I recognise that Bostrom and the argument's other proponents are aware that my reaction is common, their awareness alone doesn't devalue my conclusion.

What's more, I think an argument from authority applies - I've shown this to three Oxbridge postgrad-qualified mathematicians (i.e. more qualified in maths than Bostrom is), and all of them have shared my reaction. Obviously this proves nothing, but when my own logic, as far as I'm capable of extending it, leads me to the same view that the people most authoritative on the issue agree on, I don't see any sane reason to worry about it too much. If someone posts a counterargument I'll read it, but I'm unimpressed by Bostrom's pronouncements that all these people are overlooking something that I've yet to hear him articulate.

Anyway, my objection is basically the old commonplace that we don't live in an abstract mathematical construct. We live in a real world, where we can infer certain things with varying levels of confidence. This has as many effects on the argument as we have pieces of knowledge. One of the most relevant, to my mind, is intent - most of us would like to survive, and for the species (or something similar to it) to survive and prosper for quite a while yet. When you bring in intent, things get mightily skewed. For example, I can use it in my own DA:

I'm going to put two dots n lines apart,
.
.
thus, starting with no view on how many lines apart I'll put them. To find my arbitrary point on the line, I'll flip a coin, each line down from the first dot. When it lands heads up, I'll begin the line with a hyphen, and explain my intent:

. (here beginneth the line)
- heads up first time. Here is the arbitrary point at which I find myself. My intent, now, is to defeat the odds of the DA.








If I've counted right, there's about a 90% chance the second dot should have appeared by now.

[many blank lines]

This is fun - I could keep at it all night

[more blank lines]

(I'm not actually counting the lines, if anyone's wondering)

[more blank lines]

I'm labouring at a disadvantage here, thinking about it - php limits the number of characters I can use

Ah well, I'll just have to do the best with the resources I've got

[many more blank lines]

. crashhhhhhh - end of the line

So I realise that wasn't a perfect analogy... but nothing about the DA is perfect to start with. It sweeps over a tonne of statistical vagaries.

What's more, I don't see how it can be used to inform any of our decisions - whatever decisions we make don't allow us to escape the Bayesian trap, if it exists. The argument will never apply any less because of our recognition of it...
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Evidential decision theory and the doomsday argument

Postby DanielLC on 2009-02-20T01:26:00

I don't see what that's supposed to mean. Are you saying that the fact that people are trying to survive for longer can be seen as evidence that people will live for a long time? The doomsday argument isn't supposed to be all the evidence, it's just supposed to be evidence.

I'm not sure whether or not the doomsday argument is valid.

Back to EDT. I am an Eternalist. As such, I believe that a specific thing is going to happen. If I act like the fact that it's already something specific means that I have no control over it, then I'd have no control over anything. If my choices really mean anything, something unknown must be, from my point of view, ungiven, and there must be nothing wrong with trying to control it.

To use Newcomb's Paradox as an example, it is true that whether or not the million-dollar box has money in it is already certain. It's chosen based on how I make my choices, and thus the prediction of what I pick. However, even if Omega watched me choose and would put the money in the million-dollar box only if I didn't take the thousand-dollar box, it would still be based on how I make my choices. If I use an algorithm that would result in my taking only one box, he would find out from the fact that I only take one box, and would put the million dollars in my box. It's already certain that he will put the million dollars in my box, because my mind works in such a way that I will do that. That doesn't mean it's a good idea to take both boxes, though.
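The EDT reasoning in Newcomb's Paradox can be sketched as an expected-value calculation. The payoffs and predictor accuracy below are illustrative assumptions, not figures from this thread:

```python
# Expected value of one-boxing vs. two-boxing under evidential decision
# theory: choosing one box is itself evidence that the predictor foresaw
# one-boxing and filled the opaque box.

ACCURACY = 0.99        # assumed P(prediction matches the actual choice)
MILLION = 1_000_000    # opaque box contents if one-boxing was predicted
THOUSAND = 1_000       # transparent box contents

def edt_value(one_box: bool) -> float:
    """EDT conditions on the choice itself when estimating the payoff."""
    p_million = ACCURACY if one_box else 1 - ACCURACY
    expected = p_million * MILLION
    return expected if one_box else expected + THOUSAND

print(edt_value(True))   # one-boxing: 990,000
print(edt_value(False))  # two-boxing: 11,000
```

With any reasonably accurate predictor, EDT favours taking only the one box, which is the point of the paragraph above.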
Consequentialism: The belief that doing the right thing makes the world a better place.

DanielLC
 
Posts: 703
Joined: Fri Oct 10, 2008 4:29 pm

Re: Evidential decision theory and the doomsday argument

Postby Brian Tomasik on 2009-02-20T01:54:00

Thanks for the comments, Arepo and DanielLC.

Arepo, the sigma stands for summation. I agree with DanielLC in not quite understanding your example. As for "What's more, I don't see how it can be used to inform any of our decisions - whatever decisions we make don't allow us to escape the Bayesian trap, if it exists": the point is that if we're using EDT, then we can "get out of the Bayesian trap" to the extent that we find ourselves doing actions consistent with a small total reference-class size, since those indicate that we're less likely to be in a doom-soon scenario.

DanielLC, by your last two paragraphs, do you mean to agree with EDT? If not, what was your point?

By the way, a friend of mine replied to my post with this comment: "Read Drescher's 'Good and Real' for a decision theory superior to CDT and EDT. I don't believe that it would support the course of action in the post."

Re: Evidential decision theory and the doomsday argument

Postby DanielLC on 2009-02-20T03:36:00

Yes. I agree with EDT.

Re: Evidential decision theory and the doomsday argument

Postby Arepo on 2009-02-20T17:53:00

DanielLC wrote:I don't see what that's supposed to mean. Are you saying that the fact that people are trying to survive for longer can be seen as evidence that people will live for a long time? The doomsday argument isn't supposed to be all the evidence, it's just supposed to be evidence.


I don't see how calling it 'evidence' avoids the problem. It's evidence with 0 (or at best infinitesimal) weight - if you have a ten-person line-up and a reliable witness confidently picks out person #10, the police don't say 'well, there's over a 90% chance that the mugger is among persons #1-9, so we'd better not trust the witness'. Obviously there are plenty of evidential reasons with >0 weight not to convict on the basis of a single eyewitness - but that sort of Bayesian reasoning isn't one of them.

Alan wrote:Applied to the doomsday argument, this suggests that existential-risk reducers (I'll call them "ERRs") might engage in actions that ameliorate the improbability of their being born so early in human history. For instance, if they take the relevant reference class to be the set of all lives of conscious organisms regardless of length, then ERRs could aim to extend lifespans while reducing birth rates. If the reference class includes biological humans but not post-humans, ERRs could hasten the transition to mind uploading. If the reference class is based on numbers of conscious minds, ERRs could lobby for fewer, larger post-human brains rather than disbursed post-human brains.


To take this logic to the extreme, we could cause thermonuclear war and kill everyone on the planet. This would reduce the number of future humans in almost any reference class to ~0, thus giving our species an almost infinite expected lifespan.

Thanks for the clarification on sigma, btw - hopefully it will make my second attempt easier.

Re: Evidential decision theory and the doomsday argument

Postby Brian Tomasik on 2009-02-20T18:18:00

To take this logic to the extreme, we could cause thermonuclear war and kill everyone on the planet. This would reduce the number of future humans in almost any reference class to ~0, thus giving our species an almost infinite expected lifespan.


That humans kill themselves in thermonuclear war would be one explanation of why we find ourselves so early in human history. But there may be others that don't require our extinction. The EDT ERR advocate's point is that we might be able to find "nonviolent" solutions to the doomsday problem.

Re: Evidential decision theory and the doomsday argument

Postby Arepo on 2009-02-20T20:13:00

I didn't mean the example to show why we might be early in human history. I meant it as a reductio ad absurdum of the logic that by reducing the distance between future births, we extend the expected lifespan of our species.

Re: Evidential decision theory and the doomsday argument

Postby DanielLC on 2009-02-21T02:27:00

Arepo wrote:I don't see how calling it 'evidence' avoids the problem. It's evidence with 0 (or at best infinitessimal) weight - if you have a ten person line-up and a reliable witness confidently picks out person #10, the police don't say 'well there's over a 90% chance that the mugger is among person #1-9, so we'd better not trust the witness'. Obviously there are plenty of evidential reasons with >0 weight not to convict on the basis of a single eyewitness - but that sort of Bayesian reasoning isn't one of them.


I don't see what you're saying. Suppose the witness had a 100% chance of recognizing the criminal as such, and a 1% chance of falsely recognizing a given innocent suspect. If there are ten suspects, any given one has a 1% chance of being recognized by accident, and a 10% chance because they were guilty. If they're recognized, that means there's a 10/11 chance they did it. If there are 100 suspects, it becomes a 50% chance that they did it. If there are 1000 suspects, it's a 1/11 chance.
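These figures can be reproduced with exact rational arithmetic. Note this follows the simplification in the paragraph above, treating the 1% false-positive chance as independent of the prior (the thread returns to this simplification later):

```python
from fractions import Fraction

def p_guilty_given_recognized(n_suspects: int) -> Fraction:
    """Posterior guilt as computed above: the suspect is always recognized
    if guilty (prior 1/n), and has a flat 1% chance of being recognized
    by accident otherwise."""
    p_guilty_and_rec = Fraction(1, n_suspects)  # recognized because guilty
    p_false_positive = Fraction(1, 100)         # recognized by accident
    return p_guilty_and_rec / (p_guilty_and_rec + p_false_positive)

for n in (10, 100, 1000):
    print(n, p_guilty_given_recognized(n))
# 10 suspects -> 10/11, 100 -> 1/2, 1000 -> 1/11, matching the figures above
```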

As far as I can tell, there are only two handwaves in the DA. One is that if there are a given number of lives, you are equally likely to be any of them. Other than the fact that they have different lengths, I see no problem there.

The other is the a priori probability distribution. If you do it based on each lifespan having the same probability, the DA would actually say that the probability of the total number of people being lower than any finite number is infinitesimal. I think what they are using is a sort of logarithmic probability distribution, where 1 person has the same probability as 2-3, which is the same as 4-7, etc. In other words, the probability of there being a given number of people is inversely proportional to the number of people. It may seem like there's no reason to assume this; however, even if you do assume it, there's an infinitesimal chance of the total number of people being smaller than any finite number, and thus any evidence you have showing that there are fewer than a given number of people (such as that you really are one of the first trillion people and haven't just been hallucinating) isn't strong enough. If you believe that there is a finite number of people, you'd need an a priori probability distribution where the probability decreases with the total number of people even faster (only slightly faster, though).

But unless you figure out some sort of investment that won't pay off for a thousand years, none of this really matters. I guess you could use it to say global warming won't be a problem.

Re: Evidential decision theory and the doomsday argument

Postby Arepo on 2009-02-22T13:56:00

Argh, managed to lose my reply to the forum's logout :( I'll repost (a probably shorter version) when I've recovered from the shock...

Re: Evidential decision theory and the doomsday argument

Postby Brian Tomasik on 2009-02-24T01:25:00

I wrote to Nick Bostrom with a link to this post, and he replied as follows:

I don't think that the DA has no practical implication other than generally trying to prevent ERs even if one uses a causal decision theory. DA would shift probability away from hypotheses on which there are many future observer moments in your reference class. Conditional on DA being correct for some reference class X, you should therefore concentrate your efforts on activities that will pay off well if there aren't many future entities in X. For example, if a global mind would only count as one in the reference class, then, I think, you may want to work harder (on the margin) to create a global mind, because conditional on there not being lots of people in the future, having one big global mind might be the best one could realistically hope for (this example is for illustrative purposes only...)

I also realized recently that it is not clear that the DA if valid increases one's reasons to be concerned with existential risks - it could go either way. If you started off assigning existential risk a negligible probability then DA would increase that probability perhaps to a level where ER becomes a significant concern, so it could increase one's reasons. But if you start off thinking that ER is a significant concern, then if DA is valid it would tend to show that there is less to lose from extinction than you might previously have thought - DA showing that there is no significant chance of there being trillions and trillions of people in the future (in your reference class).

Re: Evidential decision theory and the doomsday argument

Postby DanielLC on 2009-02-25T00:25:00

Come to think of it, the DA commonly only counts humans, when it should be including all sentient animals, though it might require weighting for more sentience. That's a lot more optimistic, although it would mean that people probably won't be colonizing other solar systems.

I suppose you could use a version of the DA to say that animals don't count. If you have to be above a certain level of intelligence, it's more likely that you're a human. Also, it could be that you really would weight it for sentience. You could think of it as less intelligent animals living slower. Or it could be a coincidence that I happen to be a human.

Re: Evidential decision theory and the doomsday argument

Postby Brian Tomasik on 2009-02-25T04:58:00

DanielLC makes some good points. I often find it odd that I'm a human despite the abundance of other animals, even self-aware ones. Maybe there are more human minds than we think -- for instance, perhaps because we're ancestor simulations, and post-humans simulate their human ancestors more than their primate or pre-primate ones.

Or maybe animals don't belong in our reference class, because they presumably can't comprehend anthropic arguments. (On that account, many humans wouldn't either.) This raises the question of how exactly we define our reference class, which is highly important but also highly fuzzy.

Re: Evidential decision theory and the doomsday argument

Postby DanielLC on 2009-02-25T22:13:00

Your reference class is what it's possible to be. If you have some chance of being a non-human animal, you are less likely to be a human, regardless of whether or not they're able to comprehend the idea.

You shouldn't be surprised about something unlikely happening. No matter what happens, it's unlikely. If how unlikely it is is decided by outside circumstances, then circumstances under which what happens would be dramatically more likely are themselves more likely. (I really need to learn to write clearer.)

Even if we're being simulated, aren't the animals being simulated too? Is the idea that they only actually bother computing everything when people are around, so all the animals only actually exist when people are nearby? I suppose if they only simulate when there are people, it would still be much more likely to be a person.

By the way, this seems to be related to the Sleeping Beauty problem.

Re: Evidential decision theory and the doomsday argument

Postby Brian Tomasik on 2009-02-26T05:38:00

If you have some chance of being a non-human animal, you are less likely to be a human, regardless of whether or not they're able to comprehend the idea.


I'm not sure -- the fact that you can comprehend the idea is information about who you are and might therefore restrict your reference class. Bostrom discusses reasons why it may be legitimate to choose a more restricted reference class than the universal one in ch. 10 of his Anthropic Bias. He adds (p. 182), "My suspicion is that at the end of the day there will remain a subjective factor in the choice of reference class."

Even if we're being simulated, aren't the animals being simulated too? Is the idea that they only actually bother computing everything when people are around, so all the animals only actually exist when people are nearby?


Yes, I was thinking something like that.

Re: Evidential decision theory and the doomsday argument

Postby Arepo on 2009-03-08T22:48:00

Arepo wrote:Argh, managed to lose my reply to the forum's logout :( I'll repost (a probably shorter version) when I've recovered from the shock...


Right, take two. I enlisted the help of a couple of mathematician friends to check that I wasn't talking complete nonsense here. One of them pointed out an error DanielLC seems to have made in the line-up discussion (though he was keen to point out that the claims therein weren't expressed in precise enough language to be sure that he'd interpreted them right; any mistakes in what follows are certainly mine, introduced in translation). I don't think it's a very important error, but having gone to all that effort I'm damn well going to point it out :P

So we took this paragraph:

I don't see what you're saying. Suppose the witness had a 100% chance of recognizing (R) the criminal (G) as such, and a 1% chance of falsely recognizing (R) a given innocent suspect (NG). If there are ten suspects, any given one has a 1% chance of being recognized by accident, and a 10% chance because they were guilty. If they're recognized, that means there's a 10/11 chance they did it.


And thought you meant P(G | R) = 10/11.

But if you calculate P(G & R) /(P(G & R) + P(NG & R)) then you have (0.1*1)/((0.1*1) + 0.9*0.01)

So then P(G | R) = 100/109.

Also, this only applies if the witness identifies the first suspect he looks at - with each one he passes over the odds of identifying the wrong one will decrease.
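The corrected figure can be checked mechanically with exact rational arithmetic:

```python
from fractions import Fraction

# The corrected posterior from the calculation above: a false positive
# requires the suspect to be innocent, so P(NG & R) = 0.9 * 0.01.
p_g, p_r_given_g = Fraction(1, 10), Fraction(1)         # prior guilt, hit rate
p_ng, p_r_given_ng = Fraction(9, 10), Fraction(1, 100)  # prior innocence, false-positive rate

p_g_given_r = (p_g * p_r_given_g) / (p_g * p_r_given_g + p_ng * p_r_given_ng)
print(p_g_given_r)  # 100/109
```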



In any case, having said all that, I now want to claim it's irrelevant. There's just no analogy between the two cases. In the witness case we have several empirical data:

1) P(R | G) = 1
2) P(R | NG) = 0.01
3) Number of suspects = 10
4) Number of guilty suspects = 1
5) The witness identifies the first suspect he examines (at least in the example above, though obviously you can recalculate if he doesn't)
6) The witness chose whom to examine first at random

(and perhaps others that haven't occurred to me)

All you've done is use a bit of probabilistic reasoning to put these data together and see what they spit out. The DA is something else altogether. It uses no data to show that certain probabilities hold if you have no data. But if you do have data, you don't mix it with the original dataless result to get some sort of mean; you just see what probability it gives you and discard the dataless result. If you have data that the sun is 99% likely to go supernova tomorrow, then you can't use a reverse DA to show that we're actually more than 1% likely to survive tomorrow - we just are 99% likely to all die (perhaps slightly more, given the possibility of us wiping ourselves out beforehand, which surely outweighs the probability of us suddenly developing and implementing faster-than-light travel).

Hence my reductio ad absurdum of Alan's original suggestion that reducing numbers of future people gives our species a longer expected lifespan. If that were true, then our sun going supernova tomorrow and reducing our numbers to 0 (with probability 1) would give our species the longest possible expected lifespan.

Re: Evidential decision theory and the doomsday argument

Postby DanielLC on 2009-03-09T16:21:00

You can't show that the sun is 99% likely to go supernova tomorrow. You can get data that you'd be 99% likely to get if the sun were to go supernova tomorrow, and 1% likely to get if it wasn't. If you already knew there was a 50% chance of the sun going supernova tomorrow, then you could conclude that, given the new information, there is a 99% chance of the sun going supernova tomorrow.
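The update described here is a direct application of Bayes' theorem; a minimal sketch with the stated numbers:

```python
# A 50% prior on "supernova tomorrow", combined with evidence that is
# 99% likely under "supernova" and 1% likely under "no supernova".
prior = 0.5
likelihood_if_nova, likelihood_if_not = 0.99, 0.01

posterior = (prior * likelihood_if_nova) / (
    prior * likelihood_if_nova + (1 - prior) * likelihood_if_not
)
print(posterior)  # 0.99
```

Only with that even prior does the evidence alone fix the posterior at 99%; a different prior would shift the result.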

The following isn't related to DA:

When I said that given ten suspects, any one would have a 1% chance of being recognized by accident, it should have been 0.9% chance. I forgot to account for the fact that there can't be a false positive unless they're innocent. This would give the correct result.

Also, the situation is more likely than it seems. If you have some sort of DNA test that has a false positive rate so low that only nine people in the area are likely to get false positives, you effectively have ten suspects, but it could take a while to find even one of them.

Re: Evidential decision theory and the doomsday argument

Postby Brian Tomasik on 2009-03-09T17:27:00

But if you do have data, you don't mix it with the original dataless result to get some sort of mean


Well, actually, that's precisely what you do if you're a Bayesian: Combine prior with likelihood to get a posterior.

"Using just the data" is sort of what frequentists do, although I hesitate to use that phrase, because it makes the frequentist position sound better than it is; really all frequentists are doing is forcing themselves to use a particular implicit prior, which may not be the best one.

Re: Evidential decision theory and the doomsday argument

Postby Arepo on 2009-04-18T16:58:00

I think the problem here is that no-one's actually said what they think the DA is, only drawn analogies to it. Can one of you spell out mathematically the version you're accepting?

Re: Evidential decision theory and the doomsday argument

Postby DanielLC on 2009-04-18T18:57:00

There is an a priori probability, p(n), that the universe will contain a total of n people. If p(n) is proportional to 1/n or anything decaying more slowly, the normalization diverges, so the probability ends up zero for any finite value of n. If we accept that there might only be a finite number of people in the universe, the prior must fall off faster than that; we'll use 1/n as an upper bound. It is found out that you are the mth. This has a probability of 1/n if m <= n and zero otherwise. The probability of the universe containing n people is p(n | you are the mth). That is to say, it's the probability that there are n people, given that you are the mth. That would be p(n) * p(you are the mth | n) / (the infinite sum from k = 1 to infinity of p(k) * p(you are the mth | k)). Plugging everything in, you get (1/n * 1/n) / (the infinite sum from k = m to infinity of 1/k * 1/k), the sum starting at k = m because the likelihood vanishes below that. Simplifying, you get 1/(n^2 * sum from k = m to infinity of 1/k^2), which is about 1/(n^2 * integral from m to infinity of 1/x^2 dx) = 1/(n^2 * 1/m) = m/n^2 (naturally, assuming n >= m).

How's that?

It's actually about the weighted mean of populations for planets (we only know that we're the mth Earthling), which would make it rather more complicated, but you'd get about the same result.
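This posterior can be checked numerically. The birth rank m below is an illustrative value, and the infinite sum is truncated for computation:

```python
# Numerical sketch of the posterior above: prior p(n) proportional to 1/n,
# likelihood of being the mth observer 1/n for n >= m, so
# p(n | m) is proportional to 1/n^2, normalized over n >= m.

M = 100            # illustrative birth rank
N_MAX = 1_000_000  # truncation of the infinite sum

norm = sum(1.0 / k**2 for k in range(M, N_MAX))

def posterior(n: int) -> float:
    return (1.0 / n**2) / norm

# The tail sum of 1/k^2 from m is roughly 1/m, so the posterior tracks m/n^2:
for n in (100, 1_000, 10_000):
    print(n, posterior(n), M / n**2)
```

The numeric posterior agrees with the m/n² closed form to within about half a percent at this m, confirming the integral approximation.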

Alan Dawrst wrote:
But if you do have data, you don't mix it with the original dataless result to get some sort of mean


Well, actually, that's precisely what you do if you're a Bayesian: Combine prior with likelihood to get a posterior.


That can't be right. Say that there is a suspect for a crime that has DNA evidence against him where nine people in the area will statistically get a false positive. That means that there's a 90% chance that he's innocent. Say that it's also known that he was one of 25 people who were at the scene of the crime at the right time. That would indicate a 96% chance of innocence. If you take the mean, you get a 93% chance of innocence, when really, he's almost definitely guilty.
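The point that combining the evidence properly (rather than averaging) yields near-certain guilt can be sketched as follows. The area population is an assumed figure, chosen only so that roughly nine people would match by chance:

```python
# Combining both pieces of evidence with Bayes instead of averaging them.
AREA_POP = 1_000_000             # illustrative assumption
FALSE_POS_RATE = 9 / AREA_POP    # "nine people in the area" match by chance

# Evidence 2 alone: one of 25 people at the scene -> prior guilt 1/25.
# Then update on the DNA match (evidence 1), which a guilty person
# matches with certainty and an innocent one with FALSE_POS_RATE.
prior_guilt = 1 / 25
posterior_guilt = (prior_guilt * 1.0) / (
    prior_guilt * 1.0 + (1 - prior_guilt) * FALSE_POS_RATE
)
print(posterior_guilt)  # roughly 0.9998: "almost definitely guilty"
```

The naive average of the two innocence figures (93%) points in the opposite direction from the properly combined posterior, which is the objection being made.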

Re: Evidential decision theory and the doomsday argument

Postby Brian Tomasik on 2009-04-18T20:09:00

If you take the mean, you get a 93% chance of innocence, when really, he's almost definitely guilty.


Sorry -- I interpreted "some sort of mean" loosely. I was thinking of the process of multiplying one's prior probability distribution by the likelihood and then re-normalizing to get a posterior distribution. This is a sort of "averaging" operation, though certainly not of the type in your example.

Re: Evidential decision theory and the doomsday argument

Postby DanielLC on 2009-11-13T05:40:00

Arepo, I finally got your post, maybe. You assumed that the probability of being the nth of m people is 2^-n, rather than 1 in m, as is generally assumed. What a priori reason do you have to think you are one of the first ones? Every line represents someone, and any of them is just as likely to be you.

