Newcomb's Problem

1) You've got it the wrong way around. If Omega doesn't put any money in Box B, then you are not going to pick Box B.

Let's get back to this point. Assume that this is true... which seems to be a decent enough assumption, given that the statistical data presented to us is true.

If this is true, then asking "Which box would you pick?" does not make sense, because you are unable to make a selection, by your own admission.

When presented with the boxes, you have no choice. Thus the question does not make sense.

A more appropriate question would be: "What would the alien pick for you? Both boxes or just one?" But then that's not as fun, now is it?

Actually, no, whatever you decide, he predicts.

You just said that I wouldn't be able to pick both boxes if he put money into box B. I have no decision making capabilities at the point that matters: when the boxes are presented to me.
 
The question doesn't require that the box picker be free - all the question asks is which box or boxes will get the picker rich!
 
Let's get back to this point. Assume that this is true... which seems to be a decent enough assumption, given that the statistical data presented to us is true.

If this is true, then asking "Which box would you pick?" does not make sense, because you are unable to make a selection, by your own admission.

When presented with the boxes, you have no choice. Thus the question does not make sense.

Let us assume that you are correct. For the purposes of showing my point, let's say that you play this game twice.

Assumption: You have no choice in the matter.

Analysis: You are presented with two choices and you can pick either one of them. In fact, you can choose Box B one time, and both boxes the second time. Therefore, we have the capacity to choose either, and indeed we can choose both.

This contradicts your assumption. Therefore, the opposite is true: we do have a choice.

***

You do have a choice, it's just that your choice was predicted by the alien. Suppose you are a one-boxer. He has placed the $1000000 into Box B. You are theoretically capable of choosing both boxes... the issue is that you don't.

It may seem like you don't have a choice, but the only thing that gives you free will is the fact that you don't know whether the box has the money or not.

A more appropriate question would be: "What would the alien pick for you? Both boxes or just one?" But then that's not as fun, now is it?

The alien doesn't pick for you. You pick for yourself. Whatever choice you do end up with, the alien has retroactively put or removed the money in the box.

You just said that I wouldn't be able to pick both boxes if he put money into box B. I have no decision making capabilities at the point that matters: when the boxes are presented to me.

It's not that you wouldn't be able to pick both boxes... it's that you simply wouldn't. You decide, but what you decide has already been predicted accurately; that doesn't take away the fact that you still make the decision for yourself. Suppose that I present you with a multiple-choice problem with answers A, B, and C. A full psychological profile on you shows me that you'll pick C. Does my knowledge that you'll pick C effectively remove your decision-making abilities?
 
Yeah, but say that I have two boxes in front of me. I do not know this, but there is a million dollars in box B.

Do I have the option of picking both boxes? The thought experiment seems to indicate a clear no.

You are the slave of your own reasoning and brain functions. They lead you to the conclusion that you should pick only Box B. And you do.

You have the option. You simply do not have the mental composition that would make it possible for you to reach the conclusion to choose both boxes.
 
If it's always preferable to pick box b, then you might as well pick both boxes. That way you end up with the preferable box B AND another box on top.

No, because picking both boxes means that Box B is empty.

I should have said picking ONLY box B is preferable.
 
Isn't this just a restatement of the fact that Omega's predictions are always correct?
No, it's an explanation of why Omega's predictions are always correct, and it brings about a crucial conclusion: you don't know whether you are the real you or the you simulated by the predictor.


I.e. it's ALWAYS preferable to pick Box B. Therefore I predict that you will pick Box B. I don't need any kind of advanced simulated Perfy to predict that -- it's just the only logical conclusion!
Who says that everyone would always pick the logical solution?
 
No, it's an explanation of why Omega's predictions are always correct, and it brings about a crucial conclusion: you don't know whether you are the real you or the you simulated by the predictor.
But if someone accepts that Omega's predictions are ALWAYS correct, then they should accept that picking only Box B is the best outcome, no?

Or, to put it another way, if you don't accept that Omega's predictions are always correct, then there's no reason to accept that he has perfect simulations of you, so there's no reason to accept the game-theoretic conclusion that you should co-operate with the simulations.

It's a great way of thinking about it though, and it convinced Gogf, so it's all good.

Who says that everyone would always pick the logical solution?
Well, this whole thread is about which solution is most logical. If the question was "which would you choose if you were bathorsehocky insane?" then it's probably a bad idea to assume that everyone would pick the logical choice. But when people are asked "which would you choose", they tend to take it as "which is the most rational choice".
 
If it's always preferable to pick box b, then you might as well pick both boxes. That way you end up with the preferable box B AND another box on top.

Which is why it can't be preferable to always pick box B.

If Omega doesn't put any money in box B, then picking box B is not going to change the fact that there ain't any money in it.

This reasoning occurs before any money goes anywhere.

As far as I can see, the problem is neither a case of always A&B nor of always B, which is why the statistics are significant.
 
Sorry for the confusion. This is what I meant:

The claim "Omega has been right 100 times out of 100 times. Therefore, he has had a 100% success rate in the past." does not help us analyze the future at all; only the past. It would only be useful if we were to go back into the past and make decisions there (then).

The claim "Omega has been right 100 times out of 100 times. Using a statistical analysis based on the data, this shows that his success rate is most likely 99% or greater." actually helps us make predictions about the future; about what his success rate will continue to be.

Not really. Think of it this way: I flip a coin 10,000 times. Each time before I flip it, I predict what the result will be. Miraculously (and purely by luck), I am correct EVERY SINGLE TIME. I have had a 100% success rate thus far. Next, I predict the next coin toss and write down my prediction. I then ask you if you think my prediction will be accurate. If you pick right, I will give you $100. Does the expected value of me being correct ($100 minus a tiny fraction) mean that you should always pick that, despite the fact that there is in fact a 50-50 chance for each choice?

Thus, it's irrelevant that he's had a 100% success rate in the past, since that's not the number we need. Rather, it's relevant that he will likely have a 99% or greater success rate in the future, since that's the number we use to analyze the situation (especially for creating expected values).

How is it likely that he will have a greater than 99% success rate in the future? If you include the past success rate, then of course that's true (assuming that there have been 99 or more "games" before), but your statement becomes meaningless and tells us nothing if you include the old success rate. In other words: how do you know that, considering only all future "games," that the future success rate will likely be 99% or greater?

That is true (other than the second part which I don't really get): a choice's value is equal to its expected value. A 10% chance of gaining $10 has a value of $1 for analytical purposes.

Would you spend $1 for a 99% chance of gaining $100? What about spending $10? $20? $50? $99? $99.50?

Every cost below $99 gives you a positive expected value, making you on average wealthier. Every cost above $99 gives you a negative expected value, making you on average poorer (thus it's a bad idea).
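The arithmetic above can be sketched in a few lines of Python (the helper name and the loop are illustrative, not from the thread):

```python
# Expected gain from paying `cost` for a 99% chance at a $100 prize.
# The 99% figure and the $100 prize are taken from the example above.

def expected_gain(cost, win_prob=0.99, prize=100.0):
    """Average profit: probability-weighted prize minus the price paid."""
    return win_prob * prize - cost

for cost in (1, 10, 20, 50, 99, 99.50):
    print(f"cost ${cost}: expected gain ${expected_gain(cost):+.2f}")
```

Note that at exactly $99 the expected gain is zero: that is the break-even point between the two regimes described above.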

This is simply not true. If we were trading commodities or running this experiment a large number of times, then yes, the value of each choice would be equal to its expected value. However, that is not what we are doing here... our goal is to pick the better option, not try to maximize our chances as if whether or not the money was in box b was randomly determined.
 
If it's always preferable to pick box b, then you might as well pick both boxes. That way you end up with the preferable box B AND another box on top.

True, but the only way to ensure that there is $1,000,000 in box b is to pick only box b, because of the duality of Perfs. Once that money is in there, then yes, it would be preferable to take both, but "you don't know who you are," so the only way you can win yourself the $1,000,000 is to take only box b. If this doesn't make sense PLEASE READ PERF'S POST.
 
Not really. Think of it this way: I flip a coin 10,000 times. Each time before I flip it, I predict what the result will be. Miraculously (and purely by luck), I am correct EVERY SINGLE TIME. I have had a 100% success rate thus far. Next, I predict the next coin toss and write down my prediction. I then ask you if you think my prediction will be accurate. If you pick right, I will give you $100. Does the expected value of me being correct ($100 minus a tiny fraction) mean that you should always pick that, despite the fact that there is in fact a 50-50 chance for each choice?

If I didn't know that you're actually flipping a coin to make your choice, then yes. This is because the actual chances are 50%, and for you to have gotten it correct by pure chance would be a 1/2^100 chance, a very small number. The fact that you've been correct on 100 out of 100 predictions at 50/50 odds suggests that you likely have a 99% or greater success rate in your predictions.
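The "99% or greater" intuition can be made precise with a textbook Bayesian estimate. Assuming a uniform prior over the predictor's true accuracy, Laplace's rule of succession gives the probability that the next prediction is also correct (a sketch; the function name is ours):

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Posterior probability that the next trial succeeds, given a
    uniform prior over the true success rate (Laplace's rule):
    (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

p_next = rule_of_succession(100, 100)
print(p_next, float(p_next))  # 101/102, ≈ 0.9902
```

Under that prior, 100 successes in 100 trials yields just over a 99% chance that the next prediction is correct, matching the figure used in the thread.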

How is it likely that he will have a greater than 99% success rate in the future? If you include the past success rate, then of course that's true (assuming that there have been 99 or more "games" before), but your statement becomes meaningless and tells us nothing if you include the old success rate. In other words: how do you know that, considering only all future "games," that the future success rate will likely be 99% or greater?

This is because he has been successful 100 out of 100 times whereas a statistical average would yield him 50 out of 100 successful predictions if he had no predictive abilities whatsoever.
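As a quick sanity check on that claim, the chance of going 100 for 100 by pure guessing is astronomically small (a minimal sketch):

```python
# Probability of 100 correct predictions out of 100 under the null
# hypothesis that Omega guesses at random (p = 0.5): it is 2^-100.

def p_all_correct(n, p=0.5):
    """Probability of n successes in n independent trials of probability p."""
    return p ** n

print(p_all_correct(100))  # ≈ 7.9e-31
```

So the observed record is essentially impossible under "no predictive ability whatsoever", which is why the data strongly suggests a real prediction mechanism.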

This is simply not true. If we were trading commodities or running this experiment a large number of times, then yes, the value of each choice would be equal to its expected value. However, that is not what we are doing here... our goal is to pick the better option, not try to maximize our chances as if whether or not the money was in box b was randomly determined.

How about we evaluate the value of each choice then? The choice with the highest value wins!

And the money in Box B isn't randomly determined. It is determined by your choice (run through a probability modifier).
 
Well, like, the way I see it is umm...

The "decision" that you like, make is just based on the signals firing about in your brain which will fire a certain way no matter what. Omega has an omega cool simulator of some sort and simulates your non-random behavior. In fact, you have already made the decision because you have no free will over your decision. (I find it hard to believe that free will can exist)

So if your brain's electricity like, fires in a way to make you take one box, Omega omega knows that, and totally fixes it like that. It's like how if you shoot a ball into the air, you can predict with the laws of physics how long it'll stay in the air. It's like that except with Omega and his omega awesome physics labs.

Ne?
 
I think I found the fallacy of the two boxers:

You don't know who you are!

If I were to come across Omega I would note that Omega has demonstrated a profound accuracy rate. I would presume that he has a sort of simulated version of me. This simulation would be just as capable of making choices as me, taking the same things into account as me. In a lot of ways, the simulation is me. In fact I can't be sure that I am not the simulated version of myself in Omega's head!

So now I have a choice to make: B or both. If I'm the real Perfection, I should take both. If I am SimPerfs (and I so desire to be good to real Perfection) I should take B. But under the condition that I don't know if I am the real Perfection or SimPerfs which do I go for?

I might say, "well only the Perfy I am really matters, screw the other Perfies, if I am the real Perfection I get more money this way and if I'm SimPerfs I'm probably going to disappear from Omega's mind anyway so in this case I come out ahead" That would be a mistake though, because all the other Perfies would do the same to me!

The solution is to adopt this reciprocal pact: "always be good to my alternate selves so that they may always be good to me!", and the result is clearly to pick Box B.
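This argument can be modelled as a toy program in which Omega "predicts" by running the agent's own decision procedure, so that the contents of Box B are fixed by the policy itself rather than by luck (all names here are illustrative, not part of the original problem statement):

```python
# Toy model of the simulation argument: Omega fills Box B by running the
# very same decision procedure the agent will later use ("SimPerf").

def omega_fill_boxes(policy):
    predicted = policy()                 # the simulated run
    box_b = 1_000_000 if predicted == "B" else 0
    return {"A": 1_000, "B": box_b}

def play(policy):
    boxes = omega_fill_boxes(policy)
    choice = policy()                    # the real run
    if choice == "B":
        return boxes["B"]
    return boxes["A"] + boxes["B"]       # "both" takes everything on the table

one_boxer = lambda: "B"
two_boxer = lambda: "both"

print(play(one_boxer))   # 1000000
print(play(two_boxer))   # 1000
```

Because the prediction and the choice are the same function, a consistent one-boxing policy always finds the million, while a two-boxing policy always finds Box B empty — no backwards causality required.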
If you're a perfect-for-all-practical-purposes simulation, you presumably will care about yourself just as much as the real Perfection cares about himself, and therefore the assumption that the SimPerfs desire to be good to the Real Perf makes no sense. With Newcomb's Problem, the goal presumably is to maximize one's own reward (if we allowed for benevolent concerns, then hell, even with the original problem, we could say it's better to only get $1000 so that Omega has more money), so A&B is the right choice even if you're a simulation (assuming no reverse-causality). Granted, you say that picking just B need not be altruistic, since if you misbehave the other Perfs might retaliate (at least that's how I interpret your last few sentences), but that only makes sense if this game is iterated more than once.

So it still seems like the validity of picking only Box B depends on reverse-causality.
Correction: If you believe that your choice bears a determinative relation to the amount of money put in the boxes, choose box B. Which it probably does.

Edit: see this reference about time-symmetric determinism
Can you elaborate? What exactly is a "determinative relation," besides cause and effect? SpockFederation's thoughts are my thoughts too: A&B is the logical choice assuming common-sensical (but not necessarily true) physics, and just B is the logical choice if our choice causes Omega's prediction (a future event causing a past event, strange but maybe true). What other possible "determinative relation" would induce us to choose just B?

(I skimmed the article you linked to --- by the way, I think your link has a typo; the correct one is http://plato.stanford.edu/entries/determinism-causal/ --- and I didn't see anything elaborating your particular point, although that may be due to overly lazy skimming.)
 
If I didn't know that you're actually flipping a coin to make your choice, then yes. This is because the actual chances are 50%, and for you to have gotten it correct by pure chance would be a 1/2^100 chance, a very small number. The fact that you've been correct on 100 out of 100 predictions at 50/50 odds suggests that you likely have a 99% or greater success rate in your predictions.

Right, so the data suggests that I am good at predicting, and that should in some way factor in your analysis. Fine. However, you have yet to explain what justification you have for the mathematics you are applying to this.

This is because he has been successful 100 out of 100 times whereas a statistical average would yield him 50 out of 100 successful predictions if he had no predictive abilities whatsoever.

So, basically what you're saying is "the data suggests that he is good at predicting which choice you will make"?

How about we evaluate the value of each choice then? The choice with the highest value wins!

And the money in Box B isn't randomly determined. It is determined by your choice (run through a probability modifier).

What the hell does "run through a probability modifier" mean? I have a sense of it, though, and it gives me enough of an idea to tell you that you are completely wrong. There is no probability involved in determining whether the money is in the box or not. Period.
 
But if someone accepts that Omega's predictions are ALWAYS correct, then they should accept that picking only Box B is the best outcome, no?

Or, to put it another way, if you don't accept that Omega's predictions are always correct, then there's no reason to accept that he has perfect simulations of you, so there's no reason to accept the game-theoretic conclusion that you should co-operate with the simulations.
Well, there are other possibilities, but they generally rely on backwards causality, where your choice causes something to happen in the past. My explanation explains why one box is the best choice without resorting to backwards causality.

Well, this whole thread is about which solution is most logical. If the question was "which would you choose if you were bathorsehocky insane?" then it's probably a bad idea to assume that everyone would pick the logical choice. But when people are asked "which would you choose", they tend to take it as "which is the most rational choice".
Right, but people disagree on which is the most rational choice, as this thread clearly shows. So their choice in the situation cannot be taken as given.
 
If you're a perfect-for-all-practical-purposes simulation, you presumably will care about yourself just as much as the real Perfection cares about himself, and therefore the assumption that the SimPerfs desire to be good to the Real Perf makes no sense. With Newcomb's Problem, the goal presumably is to maximize one's own reward (if we allowed for benevolent concerns, then hell, even with the original problem, we could say it's better to only get $1000 so that Omega has more money), so A&B is the right choice even if you're a simulation (assuming no reverse-causality). Granted, you say that picking just B need not be altruistic, since if you misbehave the other Perfs might retaliate (at least that's how I interpret your last few sentences), but that only makes sense if this game is iterated more than once.

So it still seems like the validity of picking only Box B depends on reverse-causality.

NO! The duality of Perfs is NOT the prisoner's dilemma! These decisions are simultaneous, based on the rules that Perf has set out. If the rule is "screw other Perfs," then all Perfs get screwed by the infinite regress of imaginary Perfs. However, if the rule is "be kind to other Perfs," then all Perfs benefit.

The assumption isn't that imaginary Perf is altruistic... the assumption is that all Perfs (real or imaginary) will get more money if all Perfs (real or imaginary) pick box b. Therefore, by being altruistic to other Perfs, each Perf will benefit. In other words, it's not the individual choice of b that causes the money to be in that box, it's the rule (be altruistic unto thyself) that does! By being altruistic to other versions of himself, Perf can guarantee that there is $1,000,000 in box b—something that cannot be done if you take both boxes.

EDIT: What's interesting here is that people who don't have that rule but still pick box b won't necessarily get the money!
 
Right, so the data suggests that I am good at predicting, and that should in some way factor in your analysis. Fine. However, you have yet to explain what justification you have for the mathematics you are applying to this.

I can't explain it. I'm trying to look into it. I'm sure that there's some function that gives a probability density for the predictor's success rate, given the number of successes and the chance of a prediction-less success. (And by trying, I mean I'm not actually doing anything, since I have better things to do, but I might ask one of my friends.)

So, basically what you're saying is "the data suggests that he is good at predicting which choice you will make"?

Yes. I am also saying that, by some calculation that I have not done and whose accuracy I am not sure of, the data suggests that he is likely more than 99% correct in his predictions.

Regardless, I do have the calculations showing me that to pick both boxes, I would have to believe that his prediction rate is less than 50.05%, and I'm pretty sure that the calculation mentioned above would give him a much higher prediction rate.
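The 50.05% figure falls out of setting the two expected values equal, assuming Omega's accuracy p applies symmetrically to one-boxers and two-boxers (a sketch under those assumptions; the function names are ours):

```python
def ev_one_box(p, million=1_000_000):
    """Expected value of taking only Box B when Omega is right with probability p."""
    return p * million

def ev_two_box(p, million=1_000_000, thousand=1_000):
    """Expected value of taking both boxes under the same accuracy p:
    the guaranteed $1000 plus the million when Omega mispredicts."""
    return thousand + (1 - p) * million

# Break-even accuracy: p * 1e6 == 1000 + (1 - p) * 1e6  =>  p = 0.5005
threshold = 1_001_000 / 2_000_000
print(threshold)            # 0.5005
print(ev_one_box(0.99))     # ≈ 990000
print(ev_two_box(0.99))     # ≈ 11000
```

Any believed accuracy above 50.05% makes one-boxing the higher-expected-value choice, which is the calculation referred to above.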

What the hell does "run through a probability modifier" mean?

That means that your choice determines what's in the box 99% of the time, as far as we know (if we had more data that revealed 1000/1000, we could probably say something like 99.9% of the time, and so on). So, as far as we know, this isn't a 100% chance, so I "run it through a probability modifier".

I have a sense though, and it gives me enough idea to tell you that you are completely wrong. There is no probability involved in determining whether the money is in the box or not. Period.

Yes there is! If we knew nothing about his predictions, we'd have to assume that he makes them randomly, so we'd give ourselves a 50% chance of finding the money in box B by choosing only box B. Given the fact that he's been correct 100/100 times, we'd give ourselves somewhere around a 99% chance of finding the money in box B by choosing only box B.

Unless you are implying that he is infallible... something not guaranteed by the original question.
 
I can't explain it. I'm trying to look into it. I'm sure that there's some function that gives a probability density for the predictor's success rate, given the number of successes and the chance of a prediction-less success. (And by trying, I mean I'm not actually doing anything, since I have better things to do, but I might ask one of my friends.)

Okay, here's why I think you can't apply the math in the way that you are: while his accuracy in the past does, in some abstract way, suggest that he will be accurate in the future, I don't think we can assign a probability of accuracy to that and say anything meaningful. Economics and statistics work on the assumption that with enough iterations everything will work out, but you only get to choose the box once. In other words, there is no reason to believe that expected value is equal to value in real terms here.

If we KNEW that there is a 99% chance that the alien will be right, then yes, of course we could use expected value as a reasonable metric of which box to pick if we had no other. However we do not know this.

Yes. I am also saying that, by some calculation that I have not done and whose accuracy I am not sure of, the data suggests that he is likely more than 99% correct in his predictions.

Regardless, I do have the calculations showing me that to pick both boxes, I would have to believe that his prediction rate is less than 50.05%, and I'm pretty sure that the calculation mentioned above would give him a much higher prediction rate.

No matter whether he is right or not, he will be more than 99% right in his predictions because of his previous correctness. This tells us nothing.

I agree that the data suggests that he is very good at predicting. However, I do not agree with you that we can take this and coherently apply it in an analytical context to get a mathematical "value" for each box.

That means that your choice determines what's in the box 99% of the time, as far as we know (if we had more data that revealed 1000/1000, we could probably say something like 99.9% of the time, and so on). So, as far as we know, this isn't a 100% chance, so I "run it through a probability modifier".

So basically what you're saying is that you deal with the problem of induction by assigning a percent chance that induction works in this case? I do not think that is a rigorous response to the problem of induction.

Yes there is! If we knew nothing about his predictions, we'd have to assume that he makes them randomly, so we'd give ourselves a 50% chance of finding the money in box B by choosing only box B. Given the fact that he's been correct 100/100 times, we'd give ourselves somewhere around a 99% chance of finding the money in box B by choosing only box B.

Unless you are implying that he is infallible... something not guaranteed by the original question.

Um, no we wouldn't. We know that he is a "super-intelligent" alien. As you alluded to in your reply to my coin-flipping analogy, it is vastly more likely that he has some sort of prediction system than that he is randomly picking either box. Perf's explanation covers just about any prediction system I can think of. Also, arbitrarily assigning a percentage of likely accuracy like you are doing here does not make sense.
 