Newcomb's Problem

Stuff below also applies to Gogf:
Say RealPerf makes a decision to pick box B, and then Omega simulates RealPerf. We now have SimPerf, and since SimPerf is exactly like RealPerf, he will choose box B. So then, Omega goes to RealPerf, and presents him with two boxes. RealPerf also picks box B!

So the causation is RealPerf's initial choice, before the simulation was run. :)
That destroys the point of Perfection's construction. You might as well abandon the whole idea of simulations and just say that if Omega predicts that you will pick B, you will pick B (and if not, you won't), making Newcomb's problem moot, as your choice is predetermined by Omega anyway (this already being one of the classic possible answers to the question). (Of course, we don't actually know this, just as we don't know whether Omega has a simulation.)

I'm guessing that SimPerf and RealPerf having to choose the same thing is not what Perfection intended.
Sure it does! The entire game would change if the values in the boxes were switched. Or if the values were varied.
Well, OK. But in that case, I'm not sure that Perf's construction answers the problem any better than much simpler answers. I need to think about it more....
 
The calculations would be very complex, especially because this function would provide a pdf, which is continuous (I can provide more details in this regard if you want). Why is it a valid tool? Because it would evaluate the overall expected value of each choice. Then we could see which choice is optimal.

That does not explain why it would be a valid tool, all it tells me is what it's supposed to do. How does knowing the "expected values" tell me anything useful in this context?

(Please explain WHY the choice is worth its expected value, don't just tell me that it is again.)

Why is expected value the best way to evaluate something? This is because we need to look at the big picture.

Suppose that you have a minuscule (say 1/2^100) chance to win a billion dollars. However, you have to buy this chance for a million dollars. Should we look at only a few possibilities, or should we look at the whole picture? Sure, for that one time in 2^100, you'll end up 999 million dollars richer! But that doesn't mean that taking the chance is the logical thing to do. You have to evaluate, given a large number of yous doing the same thing, what each you ends up with on average.
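The gamble above can be worked out directly; a minimal Python sketch of the expected-value calculation (using the post's own figures):

```python
from fractions import Fraction

# The post's hypothetical gamble: pay $1,000,000 for a 1/2^100 chance
# to win $1,000,000,000.
cost = 1_000_000
prize = 1_000_000_000
p_win = Fraction(1, 2**100)

# Expected value of taking the gamble, net of the ticket price.
# The prize term is astronomically small (~7.9e-22 dollars), so on
# average you lose essentially the whole million.
ev = p_win * prize - cost
print(float(ev))  # ≈ -1000000.0
```

Across a large number of "yous" all taking the bet, the average outcome per person converges on this (negative) expected value, which is the point being made.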

Please stop giving me examples of where the expected value obviously comports with what the best choice is. That tells us nothing. I am not denying that the item that has the "highest expected value" often is the "best choice," but I don't think you can say that having the highest expected value causes something to be the best choice.

Likewise, we have to evaluate the expected value of this situation:

If a million one-boxers have 1000000*$990000 distributed amongst them on average, and a million two-boxers have 1000000*$11000 distributed amongst them on average, then it's better to be a one-boxer.
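The averages quoted above are consistent with the standard Newcomb payoffs ($1,000 always in box A, $1,000,000 in box B if one-boxing was predicted) and a predictor who is right 99% of the time; neither figure is stated in the post, so this is a reconstruction:

```python
# Assumed parameters (not stated in the post, but they reproduce the
# quoted $990,000 and $11,000 averages):
ACCURACY = 0.99
BOX_A, BOX_B = 1_000, 1_000_000

# One-boxer: gets box B's million only when the prediction was right.
ev_one_box = ACCURACY * BOX_B

# Two-boxer: always gets box A, and gets box B too only when the
# predictor was wrong (i.e. had predicted one-boxing).
ev_two_box = BOX_A + (1 - ACCURACY) * BOX_B

print(ev_one_box, ev_two_box)  # ≈ 990000.0 and ≈ 11000.0
```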

It depends on which one-boxer or two-boxer you are, since you aren't deciding whether or not you will become a randomly selected one-boxer or a randomly selected two-boxer.

Well, if we don't say we know it (given that it's the most likely), then we'd have to go through those complex calculations mentioned above.

Er... what? My question is, how does going through those calculations tell us that he has a certain accuracy rate? In other words, why are the calculations valid in this scenario?

If you "know" that's what it will contain, then the alien's predictions are 100% correct, something that cannot be derived from the fact that he's been right 100/100 times.

No, but it can be derived from other means, thus demonstrating that an inaccurate statistical analysis is unnecessary.

That's exactly the thing. We could say exactly how good he is, with an exact level of uncertainty. I.e. there's a 30% chance that he is 99% correct (and this is the highest chance). Basically, we could evaluate the likelihood of each of these prediction rates: 1%, 2%, 3%, ..., 99%, 100%, but with real numbers instead of integers.

If we were good at math, we could say the exact level of uncertainty for each exact level of "goodness" that he is. Then we could evaluate the entire situation.
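The evaluation being proposed can be sketched with a discretised Bayesian update: start from a uniform prior over the listed prediction rates, condition on the observed 100-out-of-100 record, and read off a posterior over how good the predictor is. (The uniform prior is my assumption; the post doesn't pick one.)

```python
# Candidate prediction rates 1%, 2%, ..., 100%, as in the post.
rates = [i / 100 for i in range(1, 101)]

# Likelihood of observing 100 correct predictions out of 100 for each rate.
likelihood = [p ** 100 for p in rates]

# Normalise to get a posterior, assuming a uniform prior over the rates.
total = sum(likelihood)
posterior = [l / total for l in likelihood]

# The posterior mass piles up near 100%; for example, the posterior
# probability that the predictor's true accuracy is at least 95%:
p_at_least_95 = sum(w for p, w in zip(rates, posterior) if p >= 0.95)
print(p_at_least_95)  # very close to 1
```

This is the "likelihood of each prediction rate" idea in miniature; the continuous version the post alludes to would replace the grid with a Beta-distribution pdf.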

And how does saying "how good he is with a certain level of uncertainty" help us determine anything in the least?

("It gives us the expected value" is not an answer to this question, because it has yet to be demonstrated that the expected value is a useful quantity to have in this specific situation.)

Past results do not determine future results (not in a mathematical sense). However, they do predict them. We can predict future events based on past events, with a certain level of uncertainty.

No they don't. You have no way to convincingly demonstrate to me that past results predict future results. However, I'm not going to take an absurd philosophical position and argue that past results NEVER predict future results. What I am going to argue is that you have no reason to believe that they do in this situation.

Nothing that you say in your statistical analysis addresses either of the arguments put forth in this thread, both of which say that it is ALWAYS best to pick a certain box, not just when the alien predicts correctly based on a probability curve.

Perfection is assuming an arbitrary system: the alien is using simulated versions of you to figure out your response. I'm trying to ignore any such arbitrary declarations.

There are three possible scenarios I can imagine:

1. The alien chooses randomly and got extremely lucky. We have no reason to believe that this is true, but if it were, then we should pick both boxes.
2. Perf's situation exists, and it is best to pick box B. The evidence (the past 100 choices) seems to support this conclusion.
3. Backwards causality exists (this is akin to making other ridiculous assumptions, like that the alien is lying to us about all of this).
 
When people come up with overly cute solutions like these, Paradoxers like to ask things like this:

Consider Newcomb's Paradox Plus, which is exactly like Newcomb's Paradox except you know that Omega's prediction mechanism is such that no simulation exists.

What would be your answer to Newcomb's Paradox Plus?

Well, Fifty, I can imagine three ways that Omega is predicting stuff:

1. He is predicting randomly and has gotten very lucky. In this instance (which is highly unlikely), picking both boxes is best.
2. He is basing his prediction on what you intend to pick. How he does this is undetermined, but it is reasonable to assume, since he is "super-intelligent" and has been so successful in the past, that he will predict any attempt to "cheat the system." In this case (whether he decides by simulation or not), using Perfy's rule will yield the best results.
3. Backwards causality exists. That flies in the face of logic, and it's no more reasonable to consider than the possibility that Omega is lying to us about all this stuff.

The evidence (Omega's super-intelligence and the past correct predictions) suggests the second possibility.
 
WillJ:

Obviously if SimPerf chose B then box B has money in it; however, by definition the two Perfs must choose the same option, or else the simulation would not have been perfect, which is explicitly not the case.

And yes the upshot for RealPerf is that Omega's prediction will always be right, so if you choose B you will be in the money, every time. I don't quite see how that makes the problem moot however?

Edit: Gogf put it much better, and yes I think Perfection did intend for SimPerf and RealPerf to necessarily make identical choices.
 
Stuff below also applies to Gogf:

That destroys the point of Perfection's construction. You might as well abandon the whole idea of simulations and just say that if Omega predicts that you will pick B, you will pick B (and if not, you won't), making Newcomb's problem moot, as your choice is predetermined by Omega anyway (this already being one of the classic possible answers to the question). (Of course, we don't actually know this, just as we don't know whether Omega has a simulation.)

I'm guessing that SimPerf and RealPerf having to choose the same thing is not what Perfection intended.

I'm pretty sure it is. The way I read it is as an explanation of WHY Omega's prediction is right.
 
When people come up with overly cute solutions like these, Paradoxers like to ask things like this:

Consider Newcomb's Paradox Plus, which is exactly like Newcomb's Paradox except you know that Omega's prediction mechanism is such that no simulation exists.

What would be your answer to Newcomb's Paradox Plus?
Well I mean you don't even need a simulation!

Like, when you pick up a random rock off the ground, you don't have to test that that particular rock will accelerate at 9.81 m/s^2 when dropped to know that it should.

So Omega is just so omega good at physics that it can predict what you will do! The way your brain works! You don't actually make a decision!

What's so hard about that?

(Aw, I don't want to sound anything resembling not nice :()
 
I understand your point Fate. :)

It is one of the different ways that you can show that box B is the smart option!

The main practical difference with Perfection's simulation is that your solution does not allow for free will, I think. And that's fine because either one works within the bounds of the question!

For what it's worth, as far as I'm concerned it doesn't matter by which method Omega predicts. After 100 accurate predictions, I'm willing to take it as a given that Omega is not simply lucky, and that one way or another, he has a good chance of predicting my choice. To shamelessly repeat what I posted on the last page:

Imagine a man who bet on the winning horse in 100 horse races in a row. Would you suppose that he is flipping a coin to pick his bets, or do you suspect that he is getting inside tips, or fixing the races? If you had to bet on a horse, would you pick a different horse to him?

Cheers all :D
 
Well, Fifty, I can imagine three ways that Omega is predicting stuff:

1. He is predicting randomly and has gotten very lucky. In this instance (which is highly unlikely), picking both boxes is best.
2. He is basing his prediction on what you intend to pick. How he does this is undetermined, but it is reasonable to assume, since he is "super-intelligent" and has been so successful in the past, that he will predict any attempt to "cheat the system." In this case (whether he decides by simulation or not), using Perfy's rule will yield the best results.
3. Backwards causality exists. That flies in the face of logic, and it's no more reasonable to consider than the possibility that Omega is lying to us about all this stuff.

The evidence (Omega's super-intelligence and the past correct predictions) suggests the second possibility.

There is also a fourth possibility:
4) Omega lies about the fact that he predicts what we will do before we do it.

He however always rewards those who choose B and not A+B with the million.

There is just a simple mechanism in the boxes, so that when you choose only B you win a million.

Why did he bring up the whole prediction issue in the first place? To screw with our minds. To test whether we are going to trust a 100% success rate, or whether we are not, because he mentioned prediction. It's the ultimate gag. The ultimate test.

It is also unknown whether Omega is an alien but it is more likely that he is not.

I think this theory is just as viable as Omega being superintelligent and having a method for the whole thing that we can't explain. In fact it is currently the best way to explain what is happening with the available evidence.
 
EDIT: What's interesting here is that people who don't have that rule but still pick box B won't necessarily get the money!
That's only true if the Simulations aren't perfect, and won't ALWAYS do what RealPerf would do.

If not, then the "simulations" are simply a restatement of the fact that Omega is ALWAYS right.

If so, then you're left with the same dilemma as before, except you're deciding whether the other "you"s are correct, instead of deciding whether Omega is correct.

Well, there are other possibilities, but they generally rely on backwards causality, where your choice causes something to happen in the past. My explanation explains why one box is the best choice without resorting to backwards causality.
It's not backwards causality, it's just regular, bog standard causality... Whatever it is that makes you pick Box B, Omega knows it. To me, it seems far harder to imagine a perfect SimPerf created without Omega knowing whatever made you pick Box B. I mean, he can create a superadvanced robot version of you, that would do everything you would do, but he doesn't know what you would do in this one single (quite simple) situation? Don't make no sense!

But anyway, this is secondary. IF Omega is always correct, for whatever reason, it's ALWAYS better to pick one box than two boxes. That's all you need to know.

What you're missing is that all the Perfs will always make the same decision.

Bingo! Every Perf does the exact same thing.

When people come up with overly cute solutions like these, Paradoxers like to ask things like this:

Consider Newcomb's Paradox Plus, which is exactly like Newcomb's Paradox except you know that Omega's prediction mechanism is such that no simulation exists.

What would be your answer to Newcomb's Paradox Plus?

Exactly! Perf's scenario is simply a restatement of the fact that Omega is ALWAYS correct! In order to answer the question, you don't need to know anything else.
 
Can you elaborate? What exactly is a "determinative relation," besides cause and effect? SpockFederation's thoughts are my thoughts too: A&B is the logical choice assuming common-sensical (but not necessarily true) physics, and just B is the logical choice if our choice causes Omega's prediction (a future event causing a past event, strange but maybe true). What other possible "determinative relation" would induce us to choose just B?

Short version: "determinative relation" just means cause & effect.

Longer version: only, not exactly. See, the words "cause" and "effect" imply time-order. If two events are linked by a law of nature, we call the first one "cause" and the second one "effect". Now, there may be more to the average Joe's conception of cause and effect than just time-order plus laws of nature. But the rest is dispensable, I think.

So what is a determinative relation? Suppose you have an event, X, at a certain time, and you have some laws of nature (conservation of energy, of momentum, etc. etc.). Suppose that from these facts you can logically derive another fact W (Y) earlier (later) in time than X. Then X is determinative of W (Y).

For rational decision making, time-order doesn't matter if your choice is determinative of a result.
 
The alien could also be leaving boobytraps in boxes such that the person dies if they 'foil' Omega's prediction. That way, we're only hearing testimony from the survivors and thus our viewpoint of the scenario is biased.
 
The alien could also be leaving boobytraps in boxes such that the person dies if they 'foil' Omega's prediction. That way, we're only hearing testimony from the survivors and thus our viewpoint of the scenario is biased.

It is simple and makes sense. I like it .
 
The calculations would be very complex, especially because this function would provide a pdf, which is continuous (I can provide more details in this regard if you want).

Please do! (provide more details).

The calculations are too complex?

Consider the following problem :
What if the predictor was correct 1 time out of 1 time? In other words, this is only the second time we see him and he was correct last time. Assuming that his predictions are Bernoulli trials (which seems to be the assumption you are making when you try to assign a percentage of success to his predictions), find the pdf of the expected value of this Bernoulli trial.


Basically replace the 100 observations in the original problem by a single observation (or any other n>0, if you wish).

The point is that there are assumptions missing about the prior distribution for us to be able to compute the posterior distribution.
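The prior-dependence point can be made concrete with conjugate Beta priors (my choice of illustration, not something the post specifies): update each on the single correct prediction from the 1-out-of-1 variant and compare the posteriors.

```python
def posterior_mean(alpha, beta, successes, trials):
    """Mean of the Beta(alpha + successes, beta + trials - successes)
    posterior for a Bernoulli success probability."""
    return (alpha + successes) / (alpha + beta + trials)

# Uniform prior Beta(1, 1): Laplace's rule of succession gives 2/3.
print(posterior_mean(1, 1, 1, 1))   # 0.666...

# A sceptical prior Beta(1, 9) (predictor is probably guessing badly):
print(posterior_mean(1, 9, 1, 1))   # 0.1818... - a very different answer
```

Same single observation, very different conclusions about the predictor's accuracy; without an agreed prior, the "exact level of uncertainty" proposed earlier in the thread is not well defined.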
 
It's implicit in the question that the data supplied by Omega is correct and has not been fudged in any way, I think.

Crosspost; this is a reply to El Mac and Scy's posts above.
 
Omega doesn't even need to have a 100% chance of predicting your choice. As long as he can guess with more than 50.05% accuracy, it's a better deal to take box B. This number would change somewhat if the two boxes were closer in potential value, of course.
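The 50.05% figure falls out of setting the two expected values equal, assuming the standard payoffs ($1,000 in box A, $1,000,000 in box B when one-boxing is predicted); a quick sketch:

```python
# Breakeven accuracy p: one-boxing and two-boxing have equal expected
# value when
#     p * BOX_B = BOX_A + (1 - p) * BOX_B
# Solving for p gives p = (BOX_A + BOX_B) / (2 * BOX_B).
BOX_A, BOX_B = 1_000, 1_000_000
breakeven = (BOX_A + BOX_B) / (2 * BOX_B)
print(breakeven)  # 0.5005, i.e. the 50.05% figure
```

As the post notes, moving the payoffs closer together pushes the breakeven accuracy up: with BOX_A = 500_000 the formula gives 75%.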
 
It's implicit in the question that the data supplied by Omega is correct and has not been fudged in any way, I think.

Crosspost; this is a reply to El Mac and Scy's posts above.
Well, there is a system like that which Omega could be working by, which is a bit too large to rightly be called "fudging". He could be using quantum suicide, as I mentioned earlier. For those unfamiliar with this, it works sort of like this:
1) Flip quantum coin to determine whether to predict that person will pick box B or not
2) Use many-worlds interpretation to create one world with and one world without money in box B
3) Present boxes and game to person
4) If person picks wrongly, destroy that universe, leaving behind only the universe in which the person picked according to the prediction.

Hence, the fact that our universe exists, by a variant of the anthropic principle, shows that it's the one in which everyone picked the box that Omega "predicted". ;)
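The four steps above can be sketched as a tiny Monte Carlo simulation (a toy model of the survivorship effect, not a claim about actual quantum mechanics): the "prediction" is a pure coin flip, yet every surviving branch sees a perfect record.

```python
import random

random.seed(0)  # reproducible toy run

survivors = []
for _ in range(10_000):
    prediction = random.choice(["one-box", "two-box"])  # step 1: quantum coin flip
    choice = random.choice(["one-box", "two-box"])      # the person's independent pick
    if choice == prediction:                            # step 4: mismatching branches destroyed
        survivors.append((prediction, choice))

# Roughly half the branches survive, and in every one of them the
# "prediction" was correct - a 100% observed accuracy from a coin flip.
assert all(p == c for p, c in survivors)
print(len(survivors), "surviving branches, all with matching predictions")
```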
 
Well, there is a system like that which Omega could be working by, which is a bit too large to rightly be called "fudging". He could be using quantum suicide, as I mentioned earlier. For those unfamiliar with this, it works sort of like this:
1) Flip quantum coin to determine whether to predict that person will pick box B or not
2) Use many-worlds interpretation to create one world with and one world without money in box B
3) Present boxes and game to person
4) If person picks wrongly, destroy that universe, leaving behind only the universe in which the person picked according to the prediction.

Hence, the fact that our universe exists, by a variant of the anthropic principle, shows that it's the one in which everyone picked the box that Omega "predicted". ;)

Or discard the many-worlds interpretation as a colossal load of garbage in science, and walk away from the whole issue without lowering yourself to playing a game where an interpretation is so woefully without evidence - and likely to remain so forever - that it's only of interest to pure mathematicians and string theorists. :D Neither of which has any scientific merit, or likely ever will, but one of which is honest enough to say that it was never trying to.

Many worlds is everything that is wrong with science these days, when it's used in the same breath as science. It's idle speculation for those who like sci-fi, and likely always will be. When you start using it to model real circumstances, that's when you lose the plot. It's a way of grasping the formalism; it should never be mooted as anything else unless it has proof. That is not science, it's wishful thinking.

Suffice to say stick with probability, at least that pays dividends. MWI pays tenure (sadly) but not dividends.
 
Or discard the many-worlds interpretation as a colossal load of garbage in science, and walk away from the whole issue without lowering yourself to playing a game where an interpretation is so woefully without evidence - and likely to remain so forever - that it's only of interest to pure mathematicians and string theorists. :D Neither of which has any scientific merit, or likely ever will, but one of which is honest enough to say that it was never trying to.
Why is MWI garbage? I have the impression that it makes the same predictions as Copenhagen, thus has the same evidence, and is simpler, and should therefore be preferred.

Wikipedia:
"One of the salient properties of the many-worlds interpretation is that observation does not require an exceptional construct (such as wave function collapse) to explain it."
"As of 2006, there are no practical experiments that distinguish between Many-Worlds and Copenhagen."
"The existence of many worlds in superposition is not accomplished by introducing some new axiom to quantum mechanics, but on the contrary by removing the axiom of the probabilistic collapse of the wave packet: All the possible consistent states of the measured system and the measuring apparatus (including the observer) are present in a physically real quantum superposition, not just formally mathematical superposition, as in other interpretations."
 
He could also have visited a hundred gazillion planets, playing the same game, and it just so happens our planet is the one where he was right 100 times in a row.
 
Why is MWI garbage? I have the impression that it makes the same predictions as Copenhagen, thus has the same evidence, and is simpler, and should therefore be preferred.

Wikipedia:
"One of the salient properties of the many-worlds interpretation is that observation does not require an exceptional construct (such as wave function collapse) to explain it."
"As of 2006, there are no practical experiments that distinguish between Many-Worlds and Copenhagen."
"The existence of many worlds in superposition is not accomplished by introducing some new axiom to quantum mechanics, but on the contrary by removing the axiom of the probabilistic collapse of the wave packet: All the possible consistent states of the measured system and the measuring apparatus (including the observer) are present in a physically real quantum superposition, not just formally mathematical superposition, as in other interpretations."

Because it can never be proved or become evidence-based, so it is useless to science and its methodology. You can't prove alternate dimensions that we cannot perceive, just as you can't use string theory to prove particles or strings that are so small they are beyond our ability even to infer, and always will be. To prove or find evidence for string theory, it has been calculated that you would need an experimental apparatus the size of our solar system, if not larger. It's a pipe dream as it stands. You can use it to grasp the physics, but as far as science goes it will never be part of experimental evidence. Copenhagen at least has inferable experiments, and can be somewhat proven, if not rigorously so. MWI is junk if it is used to make a statement; it is very useful if it is used to formalise physics into a graspable scenario. The wave function is real? Well, prove it then. That still wouldn't prove MWI, but it would be a start. There are an infinite number of dimensions, or 10, or 1000? Prove it. Give me something, come on, just some way to treat it as remarking on experiment. No, not ever; it can't be done. So what worth has it beyond a philosophical postulate?

The truth that little excerpt fails to grasp is that it forgoes experimental evidence because it can never have any. That is no basis for an interpretation; it is just lazy.
 