First of all, I've got to say: I love everything Erik has posted - some true human spirit there, and obviously some humor.
As for a couple of other minor loose ends: someone mentioned the "social" factors of being on a game show or in an experiment. I think it's best to disregard these, if only because I made up the game show framing in the first place. The original problem you'd find in the links actually involves travelers being compensated by an airline. In any case, the game show scenario introduces random social and "reputational" factors, which is exactly what I was trying to avoid.
So I'm glad to see the responses of everyone else agreeing with me, but people are right to critique a couple of lines of reasoning. I don't think this problem is adequately addressed by computing probabilities against a "random" opponent - that's not the right approach. It is useful to see why "random" beats $2, which makes playing $2 look fairly ridiculous, but it can hardly be used to distinguish between the high-number choices.
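That "random beats $2" point is easy to check numerically. Here's a minimal sketch, assuming the standard Traveler's Dilemma parameters (claims from $2 to $100, bonus/penalty of $2) - these may not match the exact setup discussed earlier:

```python
# Standard Traveler's Dilemma parameters (an assumption, not necessarily
# the exact numbers from the original problem).
LOW, HIGH, R = 2, 100, 2

def payoff(mine, theirs):
    """My payoff when I claim `mine` and the opponent claims `theirs`."""
    if mine == theirs:
        return mine
    if mine < theirs:
        return mine + R   # I claimed lower: paid my claim plus the bonus
    return theirs - R     # I claimed higher: paid their claim minus the penalty

def expected_vs_uniform(mine):
    """Expected payoff against an opponent who picks uniformly at random."""
    claims = range(LOW, HIGH + 1)
    return sum(payoff(mine, o) for o in claims) / len(claims)

print(expected_vs_uniform(2))    # roughly 3.98
print(expected_vs_uniform(100))  # roughly 49.02
```

Against a uniform-random opponent, $2 nets you barely more than $2, while $100 nets you nearly $50 - which is why the "random" comparison makes $2 look silly, even though it says nothing about 97 vs. 98 vs. 99.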
I've seen it argued, though without any real proof, that $2 must be the rational choice because it is a Nash equilibrium, but that seems like a misinterpretation of game theory to me. To be clear, I don't know what mixed-strategy equilibrium, if any, this game has; however, I don't think it satisfies the conditions that would *force* you to play the Nash equilibrium, namely:
The game is NOT zero-sum.
There is no explicit requirement for risk dominance as opposed to payoff dominance; furthermore, there is no strictly dominant strategy.
The game is not iterated and involves no chance to communicate with the other player.
There's a good likelihood the game doesn't even have complete information in practice - your opponent is supposed to be maximizing his/her gain as well, and there's no way to know or assume that he/she will pursue a particular equilibrium strategy.
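For what it's worth, the pure-strategy equilibrium structure is easy to verify by brute force. A sketch, again assuming the standard parameters (claims $2-$100, bonus/penalty $2):

```python
# Brute-force check of pure-strategy Nash equilibria in the symmetric game.
LOW, HIGH, R = 2, 100, 2

def payoff(mine, theirs):
    if mine == theirs:
        return mine
    return mine + R if mine < theirs else theirs - R

claims = range(LOW, HIGH + 1)

def is_nash(a, b):
    # A profile (a, b) is a Nash equilibrium if neither player can gain
    # by unilaterally deviating to any other claim.
    return (all(payoff(a, b) >= payoff(a2, b) for a2 in claims) and
            all(payoff(b, a) >= payoff(b2, a) for b2 in claims))

equilibria = [(a, b) for a in claims for b in claims if is_nash(a, b)]
print(equilibria)                        # [(2, 2)] - the only pure equilibrium
print(payoff(2, 2), payoff(100, 100))    # 2 vs. 100
```

So (2, 2) is indeed the unique pure equilibrium, yet (100, 100) pays each player $100 against $2 at the equilibrium - which is exactly the payoff-dominance point above: nothing in the setup forces you onto the risk-dominant outcome.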
I'm very glad to see the majority of posters here agreeing that $2 is rather senseless, which leaves me puzzled as to why that was the original conclusion. I know that IF the goal were to "beat" the other player, then $2 is optimal, hands down - one of those unhappy conclusions of game theory - but that is not the goal here. Likewise, as noted above, if you KNOW what your opponent is going to play, you want to undercut them by one; but the game isn't set up in rounds where you and the other player go back and forth undercutting each other. So I can only conclude that assumptions from alternate setups like those are being applied where they shouldn't be. I'm not trying to argue that game theory in general is BS - it's quite useful - only that the original conclusion to this problem was a mistaken application of it.
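To see how the undercut-by-one logic unravels all the way to $2, you can iterate best responses directly (same assumed parameters as before - claims $2-$100, bonus/penalty $2):

```python
LOW, HIGH, R = 2, 100, 2

def payoff(mine, theirs):
    if mine == theirs:
        return mine
    return mine + R if mine < theirs else theirs - R

def best_response(theirs):
    """The claim that maximizes my payoff if I knew the opponent's claim."""
    return max(range(LOW, HIGH + 1), key=lambda mine: payoff(mine, theirs))

# Start at 100 and repeatedly best-respond: each step undercuts by one,
# which is exactly the chain of reasoning that "justifies" $2.
claim = HIGH
steps = [claim]
while best_response(claim) != claim:
    claim = best_response(claim)
    steps.append(claim)
print(steps[:4], "...", steps[-1])   # [100, 99, 98, 97] ... 2
```

The chain only makes sense if the game were actually played in rounds of mutual undercutting with full knowledge of the opponent's play - which, as said above, it isn't.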
So in the end I do like the conclusion of choosing $100 on principle, but if I really had to lay my chips down, I think one could do better with 97, 98, or 99. If anyone tried that simulation from the University of Virginia mentioned earlier, you'd have seen how you can profit from chumps who always play the max (though the bonus/penalty to payoffs there was larger, which I imagine rather affects the game).
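The point about profiting off max-players is easy to illustrate. I don't know the actual bonus/penalty the UVA simulation used, so the values of `r` below are just illustrative assumptions:

```python
def payoff(mine, theirs, r):
    """Traveler's Dilemma payoff with bonus/penalty r (parameterized)."""
    if mine == theirs:
        return mine
    return mine + r if mine < theirs else theirs - r

# Against an opponent who always claims 100, see what 97-100 each earn.
# r = 2 is the textbook value; r = 10 is a made-up stand-in for a
# larger bonus/penalty like the simulation apparently used.
for r in (2, 10):
    results = {claim: payoff(claim, 100, r) for claim in (97, 98, 99, 100)}
    print(r, results)
```

Against an always-100 opponent, 99 earns 100 + r - more than matching at 100 - and the bigger the bonus/penalty, the more the slightly-lower claims pull ahead, which fits the intuition that the larger payoffs in that simulation rather change the game.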