Newcomb's Problem

I know I said that if the alien is infallible (whatever that entails) then the question is pointless as the answer is obvious, and I think a fair few people did. It's simply creating a proxy to make the same point, I think. You don't need retrocausality, or any breach of causality, as common-or-garden probability will yield the same result, and with the same reliability.
 
Isn't this just a restatement of the fact that Omega's predictions are always correct?

Also, isn't this kind of thinking a self-fulfilling prophecy?

I.e. it's ALWAYS preferable to pick Box B. Therefore I predict that you will pick Box B. I don't need any kind of advanced simulated Perfy to predict that -- it's just the only logical conclusion!

No, because it's actually always preferable to take both boxes, so you need some sort of mechanism to ensure the money is in Box B. That mechanism is the duality of Perfs.
 
If you believe that your choice will influence the amount of money put in the boxes, choose box B. [...]

Correction: If you believe that your choice bears a determinative relation to the amount of money put in the boxes, choose box B. Which it probably does.

Edit: see this reference about time-symmetric determinism
 
No, because it's actually always preferable to take both boxes, so you need some sort of mechanism to ensure the money is in Box B. That mechanism is the duality of Perfs.

Yeah... so any rational person will choose only Box B...

Omega correctly predicts that you will choose Box B, because it is the only logical conclusion!
 
I could probably pretend to be Omega just by giving everyone the box values they named here. It's not that hard to get 100/100 right by approaching the right people. ;)
 
Would it change the opinion of the people who choose "both boxes" if the alien has been playing this game for ages, has already predicted millions of outcomes successfully, and has never once been wrong?

Or is the number of times he predicted accurately irrelevant to you?
 
Isn't this just a restatement of the fact that Omega's predictions are always correct?

Also, isn't this kind of thinking a self-fulfilling prophecy?

I.e. it's ALWAYS preferable to pick Box B. Therefore I predict that you will pick Box B. I don't need any kind of advanced simulated Perfy to predict that -- it's just the only logical conclusion!

Precisely!

"A&B is always best" is a contradiction, because Omega will know you'll always pick A&B and thus B will be empty => picking just B best, which contradicts the original statement.
 
I don't think the point I made was a particularly complicated one.


The choice of either gaining $100 or trying my luck in an extremely hard-to-win lottery is the same as buying a $100 ticket with the money I just received as a donation. And that is not a good investment. I would only participate in a lottery which gave me good odds of winning and a low cost of participating. I think that is the most logical way for one to act.

I also mentioned: what if I had the choice of buying multiple tickets? Buying one ticket gives me a 1-in-1,000 chance of winning, so if I bought all 1,000 of them I would win no matter what. That would cost me $100,000 and, as a result, I would make a $900,000 profit, as the prize money is $1,000,000.

Ah, I think I understand now. Indeed, whether you're losing or gaining the cost is irrelevant; it is more accurate to say "highest expected value" rather than "a positive expected value".

And for the purposes of comparison, think of it as a game instead of a lottery, i.e. you can only buy tickets in parallel, not in series (6 separate lotteries rather than 6 tickets for one lottery)... in fact, that's why lotteries in real life have negative expected values.
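A minimal sketch of that series/parallel distinction (assuming the numbers from the post above: $100 tickets, 1-in-1,000 odds, a $1,000,000 prize):

```python
# Assumed numbers from the post above: $100 tickets, 1-in-1,000 odds, $1,000,000 prize.
TICKET_COST = 100
N_TICKETS = 1000
PRIZE = 1_000_000

# "In series": buy every ticket of a single lottery -- a guaranteed win.
cost = N_TICKETS * TICKET_COST
print(PRIZE - cost)  # guaranteed profit: 900000

# "In parallel": one ticket in each of 1,000 independent lotteries, same outlay.
p_at_least_one_win = 1 - (1 - 1 / N_TICKETS) ** N_TICKETS
print(round(p_at_least_one_win, 3))  # ~0.632 -- no guarantee, despite equal cost
```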

I know I said that if the alien is infallible (whatever that entails) then the question is pointless as the answer is obvious, and I think a fair few people did. It's simply creating a proxy to make the same point, I think. You don't need retrocausality, or any breach of causality, as common-or-garden probability will yield the same result, and with the same reliability.

Even if the alien isn't infallible, with an understanding of statistics the answer is still obvious! We don't know that the alien is infallible, but we know it has been right 100 out of 100 times. That allows us to place some sort of certainty value on its accuracy (say 99%) and derive expected values.
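To put rough numbers on that (a sketch, assuming a 99% accuracy figure and the usual $1,000 / $1,000,000 payoffs):

```python
p = 0.99  # assumed accuracy, eyeballed from the 100/100 record

# One-box: you get the $1,000,000 only when Omega correctly predicted one-boxing.
ev_one_box = p * 1_000_000                     # 990000.0

# Two-box: $1,000 when predicted correctly, $1,001,000 when Omega slips up.
ev_two_box = p * 1_000 + (1 - p) * 1_001_000   # 11000.0

print(ev_one_box, ev_two_box)
```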

Would it change the opinion of the people who choose "both boxes" if the alien has been playing this game for ages, has already predicted millions of outcomes successfully, and has never once been wrong?

Or is the number of times he predicted accurately irrelevant to you?

It is apparent that the number of times predicted is more or less irrelevant to them. They are resting their hypotheses on the assumption that Omega has nearly no predictive ability (with greater than 50.05% predictive accuracy, the statistical analysis clearly shows that choosing only one box is better). Given the 100/100 data, the likelihood of that assumption is extremely low. I doubt that they are analyzing it and saying "well, I'll assume that he has no predictive abilities on a 0.0001% chance, but I wouldn't assume that on a 0.000000001% chance". So an infallible alien should have no effect on the decision (given the amount of data in the 100/100 record).
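For reference, the 50.05% figure falls straight out of equating the two expected values, and the 100/100 likelihood under a chance-level predictor is easy to check too (a sketch with the standard payoffs):

```python
from fractions import Fraction

# Break-even accuracy p, from: p * 1,000,000 = p * 1,000 + (1 - p) * 1,001,000
# which rearranges to: 2,000,000 * p = 1,001,000
p_break_even = Fraction(1_001_000, 2_000_000)
print(float(p_break_even))  # 0.5005 -- above this, one-boxing wins on expectation

# Chance of a 50/50 guesser going 100 for 100:
print(0.5 ** 100)  # ~7.9e-31
```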
 
Even if the alien isn't infallible, with an understanding of statistics the answer is still obvious! We don't know that the alien is infallible, but we know it has been right 100 out of 100 times. That allows us to place some sort of certainty value on its accuracy (say 99%) and derive expected values.

Precisely. I think the idea that we need some extraordinary means to make a choice is not true.
 
This thread has been fun :D

I still have to wonder though, how two-boxers can follow the two-box rationale even though it doesn't fit with the data given in the question. I mean, going by the two-box logic, of the people who two-box, 50% should be rich, and yet, they aren't. The rationale doesn't fit the data, hence there is something wrong with the rationale! :scan:

Because after all, if 100/100 observed two-boxers get a thousand dollars, and 100/100 observed one-boxers get a million dollars, how many boxes do you choose to take? There are no two-boxers who got $1,001,000, and no one-boxers who got $0, hence we have to assume that these two outcomes are either impossible or extremely unlikely!
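That "50% should be rich" expectation is easy to make concrete (a sketch, assuming the two-boxers' premise that the contents are independent of the choice):

```python
import random

# If the box contents really were independent of the choice, each observed
# two-boxer would have a coin-flip's shot at the full $1,001,000.
rich = sum(random.random() < 0.5 for _ in range(100))
print(rich)  # typically around 50 rich two-boxers -- yet the data shows zero
```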

Cheers guys
 
Actually, the problem is that a full statistical analysis of the possibilities yields better results for choosing one box.

Logic > Statistical Analysis

The statistics in this thought experiment are meant to deceive you.

Mise said:
I.e. it's ALWAYS preferable to pick Box B. Therefore I predict that you will pick Box B. I don't need any kind of advanced simulated Perfy to predict that -- it's just the only logical conclusion!

If it's always preferable to pick Box B, then you might as well pick both boxes. That way you end up with the preferable Box B AND another box on top.

Truronian said:
"A&B is always best" is a contradiction, because Omega will know you'll always pick A&B and thus B will be empty => picking just B best, which contradicts the original statement.

If Omega doesn't put any money in box B, then picking box B is not going to change the fact that there ain't any money in it.
 
To be a one-boxer, you pretty much accept that for all intents and purposes, Omega knows which choice you are going to make before you make it. :)
 
Logic > Statistical Analysis

The statistical analysis stems from logic. All statistical and mathematical principles are derived from an axiom or two and a whole lot of logic.

The logic shows that since the chances of his prediction being wrong (i.e. that you'll get $1001000 if you choose both boxes) are extremely low, you should not choose such a course of action.

The statistics in this thought experiment are meant to deceive you.

Fact of the matter is: if you take one million one-boxers, they'll likely have 1,000,000 × $990,000 or more amongst them all, and if you take one million two-boxers, they'll likely have 1,000,000 × $11,000 or less amongst them all (and by likely, I mean "almost certainly"). So would you like to be in the population of one-boxers or two-boxers?
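A quick simulation of those population totals (a sketch, again assuming ~99% accuracy and the standard payoffs):

```python
import random

def winnings(one_boxer: bool, accuracy: float = 0.99) -> int:
    """Payoff for one player against a predictor of the given accuracy."""
    predicted_correctly = random.random() < accuracy
    if one_boxer:
        return 1_000_000 if predicted_correctly else 0
    return 1_000 if predicted_correctly else 1_001_000

n = 1_000_000
print(sum(winnings(True) for _ in range(n)) / n)   # ~990,000 per one-boxer
print(sum(winnings(False) for _ in range(n)) / n)  # ~11,000 per two-boxer
```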

If it's always preferable to pick box b, then you might as well pick both boxes. That way you end up with the preferable box B AND another box on top.

Obviously he meant it's always preferable to pick only box B.

If Omega doesn't put any money in box B, then picking box B is not going to change the fact that there ain't any money in it.

Lemme try three ways to explain it to you...

1) You've got it the wrong way around. If Omega doesn't put any money in Box B, then you are not going to pick Box B.

2) If it helps, think of it like this. Picking Box B will indeed magically spawn $1000000 in it! It's true! It's just that this money will magically spawn in the past, when Omega was predicting your choice.

picking box B is not going to change the fact that there ain't any money in it.

3) Yes it will. Picking Box B means that there was actually $1000000 in it, and Omega didn't leave it empty.
 
To be a one-boxer, you pretty much accept that for all intents and purposes, Omega knows which choice you are going to make before you make it. :)

Which would cast serious doubt on the idea of free will.

I would just assume that he's cheating, and go with one box.
 
What, pray tell, would be "cheating"? We don't know how he makes his predictions, we only know the results of his past 100 predictions. For all we know, he could be using cloning, mind-reading, time-traveling, or just plain intuition and luck. So what exactly would be cheating?
 
It doesn't necessarily rule out free will if we postulate Perfy's simulation hypothesis!

So yeah, pretty fun scenario, but one-boxing is the best choice. :)
 
The logic shows that since the chances of his prediction being wrong (i.e. that you'll get $1001000 if you choose both boxes) are extremely low, you should not choose such a course of action.

There is nothing to suggest that the statistical data presented to you has not been fudged in order to present you with a seemingly paradoxical situation.

Defiant47 said:
Fact of the matter is: if you take one million one-boxers, they'll likely have 1,000,000 × $990,000 or more amongst them all, and if you take one million two-boxers, they'll likely have 1,000,000 × $11,000 or less amongst them all (and by likely, I mean "almost certainly"). So would you like to be in the population of one-boxers or two-boxers?

The fact is that as soon as the boxes are presented to me, and I am asked to make my decision, my decision is not going to affect the contents of the boxes.

At that point in time, previous statistics will not alter the contents of the box. If I select box B alone and I end up with a million dollars, I might as well have gone with both boxes - I would have ended up with more money. Or are you trying to say that the money would have mysteriously vanished from the box had I made a different decision?

Or maybe that I would have been unable to make that decision in the first place?

1) You've got it the wrong way around. If Omega doesn't put any money in Box B, then you are not going to pick Box B.

So I don't even have a say in the matter; the alien makes the decision for me. Whatever he decides, I will do.

I have no free will, and thus am not even making a decision. This sort of highlights one of the flaws with this thought experiment: it assumes that you have no free will, OR that the alien is cheating (beaming money into the box, or looking into the future).
 
There is nothing to suggest that the statistical data presented to you has not been fudged

That is true. We are assuming that the information regarding the 100/100 data is correct. But I have a feeling that this question is meant to be asked as "assuming that the information is true, what do you do?", so that is more or less moot.

in order to present to you a seemingly paradoxical situation.

The situation is not paradoxical. Data is available to make the desired choice rather obvious.
 
The fact is that as soon as the boxes are presented to me, and I am asked to make my decision, my decision is not going to affect the contents of the boxes.

That is because the contents of the boxes are already determined based on your future choice.

At that point in time, previous statistics will not alter the contents of the box. If I select box B alone and I end up with a million dollars, I might as well have gone with both boxes - I would have ended up with more money. Or are you trying to say that the money would have mysteriously vanished from the box had I made a different decision?

Or maybe that I would have been unable to make that decision in the first place?

That is more or less on the right track. If you select Box B, then yes, it would have been better to choose both boxes and get the extra $1,000. However, had you chosen both boxes, Box B would have contained nothing (because your decision would have been correctly predicted). If you choose both boxes, the money was never in Box B to begin with.
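To make that dependency explicit, here are the four combinations (standard payoffs assumed):

Omega predicts one-box, you take only B → $1,000,000
Omega predicts one-box, you take both → $1,001,000
Omega predicts two-box, you take only B → $0
Omega predicts two-box, you take both → $1,000

An accurate Omega means you only ever really see the first and last rows; the $1,001,000 and $0 outcomes require the prediction to miss.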
 
So I don't even have a say in the matter; the alien makes the decision for me. Whatever he decides, I will do.

I have no free will, and thus am not even making a decision. This sort of highlights one of the flaws with this thought experiment. It assumes that you have no free will OR that the alien is cheating (beaming money into the box or looking into the future)

Actually, no: whatever you decide, he predicts.

Let's suppose that we're good friends, and you tell me that in a rock-paper-scissors game, you always choose rock first (and you pretty much do), at least for the first game. A week later, we have a rock-paper-scissors game. Does my prediction that you'll choose rock first deprive you of your free will?

Suppose that I make a complete psychological profile on my mother. I analyze her such that I can say with a lot of certainty that if I told her "I want to kill myself", she would become worried. I then go and tell her "I want to kill myself". Does my prediction that she would become worried deprive her of her free will?
 