I play Marathon games, so I have LOTS of combats. In my current game, I paid closer attention than I usually do, and it struck me that the actual outcomes were seldom as good as what the forecast predicted. So I decided to do an experiment. I was conducting a battle where the forecast said I would inflict 49 damage on a barbarian defender. [A wounded (@45%) Swordsman vs. a Composite Bowman in an encampment.] Save the game. Conduct the combat to see the actual outcome. Load the save and fight the battle again. Rinse and repeat.

For the first nine run-throughs, the outcome ranged from as low as 40 -- 40! Nine under??? -- to 47. It wasn't until the tenth run-through that I finally got a value of 49, and it took another three attempts before I finally exceeded the predicted 49.

Now, I always thought that the predicted value was the average outcome -- the tippy-top of a bell curve distribution. That would mean values become harder to get the farther they fall from the predicted value, whether lower (left on the bell curve) or higher (right on the bell curve). It would also imply that getting a _58_ is just as likely as getting that 40, and that a 51 is just as likely as a 47. Yet in my 13 random samples, 11 results landed left of center while only ONE landed right of center.

I conclude that either the Random Number Generator has some unforeseen bias that skews the outcome distribution significantly left of center -- most results WILL be less than predicted -- or else the programmers deliberately lied to us by making the outcome predictions overly optimistic.
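For what it's worth, the odds of that split happening by pure chance are tiny if the prediction really is the median of a symmetric distribution. Treating each combat as a coin flip (below vs. above 49, and dropping the one run that hit 49 exactly) gives a simple sign test. This is just my back-of-the-envelope sketch, not anything from the game's code:

```python
from math import comb

n = 12  # the 13 runs minus the one that hit the predicted 49 exactly
k = 11  # runs that came in BELOW the prediction

# If the prediction were truly the median, each run would land below it
# with probability 1/2, so the count of low runs is Binomial(n, 0.5).
# Probability of at least k low results out of n by chance alone:
p = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"chance of {k}+ low results in {n} fair runs: {p:.4f}")  # ≈ 0.0032
```

Roughly a 0.3% chance, which is why the results feel rigged rather than merely unlucky.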