I agree, the hammers already spent are a thing of the past. However, the hammers remaining to complete whatever is being built should be factored in. Let's say you need two turns to complete a really expensive wonder, but are under attack from a fairly weak civilization. Continue the wonder and THEN produce defenders, or switch right away? The apparent value should be "benefit" divided by "hammers to complete". So even a fairly small "benefit" could reach a high value, if the "hammers to complete" is small.
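To sketch that ratio (all numbers here are made up for illustration, nothing from the actual game):

```python
# Hypothetical sketch: rank builds by benefit per remaining hammer.
# The benefit figures are invented, not from any real Civ4 code.

def completion_value(benefit, hammers_to_complete):
    """Value of finishing a build: benefit divided by remaining cost."""
    return benefit / hammers_to_complete

# A wonder with a large benefit but many hammers still owed...
wonder = completion_value(benefit=300, hammers_to_complete=150)  # 2.0
# ...can score below a cheap defender that is almost done.
archer = completion_value(benefit=60, hammers_to_complete=10)    # 6.0

best = max([("wonder", wonder), ("archer", archer)], key=lambda t: t[1])
print(best[0])  # archer
```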
So it is not a trivial decision for the AI, and you will find even humans coming to different conclusions. Switch to an Archer and let someone else get Stonehenge? Maybe I have enough defenders after all? If I get a lucky roll, they will hold long enough... but, on the other hand, what if I lose that city? Play it safe, or gamble? Finish that missionary (I WILL need one eventually), or put everything I have into that Axeman rush?
Not a quick fix for jdog to make here, I think.
Janov
I agree. Now that I think about it, when I was talking earlier about factoring in sunk costs, this is more appropriately what I was getting at. Sunk costs in and of themselves should not be a factor in anyone's decisions, but what they do is create a situation where you can expect the same expected benefit from less future cost. So if you have 20 hammers put into a monument, and need 10 more, and you are deciding between finishing that and building an archer, the choice is not between building a 30-hammer monument and a 25-hammer archer, but between building a 10-hammer monument and a 25-hammer archer.
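In code, the point is simply that the comparison uses remaining cost, not total cost (monument/archer numbers taken straight from the example above):

```python
# Sunk costs drop out: only the hammers still owed enter the decision.

def remaining_cost(total_cost, invested):
    return total_cost - invested

monument_remaining = remaining_cost(total_cost=30, invested=20)  # 10
archer_remaining = remaining_cost(total_cost=25, invested=0)     # 25

# The real comparison is a 10-hammer monument vs. a 25-hammer archer.
print(monument_remaining, archer_remaining)  # 10 25
```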
Now, in order to weigh these variables, the AI would have to take the actual costs remaining and the expected benefits (and possibly turns until payback) into account. The new additions to the BUG mod actually make this quite easy for humans to do intuitively: I go to build a harbor and find that it will actually net me 2 commerce and 2 health.
If I didn't want to rely on my intuition for comparing tradeoffs, I could calculate a payback period or expected net return on investment for every build decision in the game. I could reduce the remaining hammer cost of the harbor, the commerce bonus, and the health bonus to some common currency of worth, and compare the cost to the benefits.
This common currency need not assign static equivalencies between different outputs. For example, let's say we call the AI evaluation currency "AIgil." We could say that 1 extra health is worth 4 + 0.5x AIgil, where x is the city's current net unhealthiness (negative when the city has surplus health). So for a city with more than +8 surplus health, another point of health is evaluated as essentially worthless by the AI. Now, let's say we make 2 commerce = 1 hammer = 1 AIgil, just for simplicity's sake. For this city, the harbor costs 80 AIgil and gives 1 AIgil per turn (2 commerce, plus 2 health that aren't worth anything here). Expected payback period: 80 turns. The AI could compare that to building a forge (-1 health, +25% hammers): it costs 120 AIgil and pays back 3 hammers/turn (3 AIgil/turn), so its payback period is 40 turns. In this case the AI would opt for the forge. But let's say the city already has 6 unhealth. Then the forge costs 120 AIgil and pays back -1*(4 + 0.5*6) + 3 AIgil, or -4 AIgil per turn. This structure would never pay itself back in AIgil, so the AI would not build it.
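The whole worked example fits in a few lines. The conversion rates (2 commerce = 1 hammer = 1 AIgil, health worth 4 + 0.5x) are the hypothetical numbers from the paragraph above, not anything from the actual game code:

```python
import math

def health_value(unhealthiness):
    """AIgil value of one point of health; the argument is net
    unhealthiness (negative when the city has surplus health)."""
    return max(0.0, 4 + 0.5 * unhealthiness)

def payback_turns(cost_aigil, aigil_per_turn):
    if aigil_per_turn <= 0:
        return math.inf  # never pays itself back
    return cost_aigil / aigil_per_turn

# Harbor in a city with +8 or more surplus health (x = -8):
harbor_income = 2 * 0.5 + 2 * health_value(-8)      # 1.0 AIgil/turn
print(payback_turns(80, harbor_income))              # 80.0

# Forge in the same healthy city (+3 hammers, -1 health):
forge_income = 3 * 1.0 - 1 * health_value(-8)       # 3.0 AIgil/turn
print(payback_turns(120, forge_income))              # 40.0

# Forge in a city that already has 6 unhealth:
forge_income_sick = 3 * 1.0 - 1 * health_value(6)   # -4.0 AIgil/turn
print(payback_turns(120, forge_income_sick))         # inf
```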
These numbers are just hypothetical, but what if the AI, instead of following pre-programmed build tendencies, could dynamically problem-solve for deciding its build orders by reducing all building effects to a common currency of worth and comparing?
Of course, to do this properly, you'd have to figure out how to make the equations of worth reasonable, taking into account the variables that make one output (food, beakers, happiness, health, espionage, GPP, military strength) more valuable than another. One would also have to account for things like: "Oh, this city might be earning 1 commerce and 7 hammers per turn right now, but it will grow into working mostly cottage towns in the future, meaning the expected commerce in each subsequent year will be {x, y, z...} while the expected hammers will be {a, b, c...}. So if I integrate over the rest of the game, with respect to a logarithmic discount rate (based on the uncertainty of keeping the city, plus the compounding time advantage of money that makes more stuff now worth more than an equal amount of stuff later), then I can expect my market to make X AIgil over the course of the game, versus the forge making Y AIgil over the course of the game."
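The discounted-future-yield part is standard present-value arithmetic. A toy sketch (a simple geometric discount rather than the logarithmic one mentioned above, and the yield forecasts are invented):

```python
# Present value of a forecast stream of per-turn AIgil yields:
# output now counts for more than the same output later.

def discounted_value(yields_per_turn, discount_rate):
    return sum(y / (1 + discount_rate) ** t
               for t, y in enumerate(yields_per_turn))

# A cottage city: commerce keeps growing, hammers stay flat, so
# over a 100-turn horizon the market beats the forge here.
market_yields = [1 + 0.2 * t for t in range(100)]  # growing commerce
forge_yields = [3.0] * 100                          # flat hammers

market = discounted_value(market_yields, discount_rate=0.03)
forge = discounted_value(forge_yields, discount_rate=0.03)
print(market > forge)  # True
```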
This might seem like a truly daunting piece of work (and I am by no means volunteering to flesh it out).

(If you wanted to maintain an element of randomness with "AIgil," you could still multiply certain AIgil calculations by a random factor to keep AI decisions unpredictable. If you wanted to maintain distinct AI personalities, you could similarly attach certain AIgil multipliers to certain things for certain AI leaders, etc.)
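Both tweaks are just multipliers on the score. A sketch, with placeholder leader names and made-up multipliers:

```python
# Leader-specific AIgil multipliers plus a small random factor.
import random

PERSONALITY = {
    "warlike_leader": {"military": 1.5, "wonder": 0.7},
    "builder_leader": {"military": 0.8, "wonder": 1.4},
}

def adjusted_score(base_aigil, category, leader, rng):
    mult = PERSONALITY[leader].get(category, 1.0)
    noise = rng.uniform(0.9, 1.1)  # +/-10% randomness
    return base_aigil * mult * noise

rng = random.Random(42)
score = adjusted_score(100, "military", "warlike_leader", rng)
print(score)  # somewhere between 135 and 165
```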
What this would require, of course, would be basically a Manhattan Project of Civ4 programming (you'd need a way for the AI to estimate the remaining length of the game, a way for the AI to remember plans, and a WHOLE bunch of very complicated and co-varying AIgil calculations involving integral calculus), but what you'd get would be an AI that could, in theory, adapt to ANY conceivable circumstance or human-player strategy variation.
Then after that, my ultimate ambition would be to have a Civ4 AI that would log human-player actions throughout every game the human ever played and compile a database of likely human-player decisions, unique to that player's profile, so that the AI could plug these probabilities into its equations and actually "learn"(!) to counter specific human players. (You could even momentarily fool the AI by training it with a bunch of games where you always go peacefully wonder-whoring, and then turning around in the next game and springing an early rush on it.)
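At its simplest, that database is just frequency counting. A toy sketch (all class and strategy names here are hypothetical):

```python
# Count how often a given human opens with each strategy, then read
# the frequencies back as threat probabilities.
from collections import Counter

class PlayerProfile:
    def __init__(self):
        self.openings = Counter()

    def record_game(self, opening):
        self.openings[opening] += 1

    def threat_probability(self, opening):
        total = sum(self.openings.values())
        if total == 0:
            return 0.0
        return self.openings[opening] / total

profile = PlayerProfile()
for g in ["wonder", "wonder", "rush", "wonder"]:
    profile.record_game(g)
print(profile.threat_probability("rush"))  # 0.25
```

This is also exactly why the fooling trick works: three peaceful games in a row drag the "rush" probability down until the surprise attack lands.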

Right now the AI tries to think like a human (heuristically) because it was programmed by humans. But the AI doesn't have that mysterious sense of human intuition, so it often fails at this imitation (although it is still really pretty good; no disparagement of Firaxis or you awesome BetterAI modders intended!). If we really wanted to harness the distinctive power of the computer, we would have to try thinking like machines and program the AI accordingly (deductively). Which would mean LOTS and LOTS of equations. Needless to say, like I said, I'm not volunteering.
