Tactical AI statistics V2

ilteroi

So I made some more diagrams with the latest version and I figured they might be interesting for fellow nerds.

But first some definitions.

* A run of the simulation takes an initial position of friendly/neutral/enemy units and tries different assignments (move/attack/etc.) for the friendly units, resulting in new positions.
* The positions have scores to guide the search, but once a position is completed (no more assignments possible) we check whether it is acceptable or not.
* So highly-scored positions might be discarded because they leave a unit exposed to counterattack. It is not guaranteed that we find any completed and acceptable positions at all.
* Such failed runs are excluded (but don't worry, the AI will try to do something else with its units if a simulation run does not yield something usable).
* A run terminates when there are no more incomplete positions to work on (or if we run out of memory).
* There is deduplication logic: if we already have a position with assignments ...AB we ignore ...BA (if both are moves - for attacks the order matters!)
* I used a dataset of ~7600 runs from an AI-only test game (turn 300 to turn 500). I used the lategame because of the big armies involved, but I also included smaller battles in the dataset.
* On a release build even the longest runs take less than 1 second to compute on my machine, so runtime performance is good enough.
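
The search loop described above can be sketched in pseudocode-style Python. This is a toy illustration only, not the mod's actual C++ code: the scoring, expansion, and acceptance functions are placeholders, and a "position" is simplified to the sequence of assignments chosen so far.

```python
import heapq
from itertools import count

MAX_POSITIONS = 32000  # hard stop per run to bound memory (see figure 1)

def dedup_key(pos):
    # Moves are order-independent, attacks are not: collapse move order by
    # sorting the move assignments, but keep the attack sequence as-is.
    moves = tuple(sorted(a for a in pos if a[1] == "move"))
    attacks = tuple(a for a in pos if a[1] == "attack")
    return (moves, attacks)

def simulate_run(initial, expand, score, is_complete, is_acceptable,
                 max_positions=MAX_POSITIONS):
    """Search assignment sequences; return the best completed AND accepted
    position, or None if the run fails (other AI logic takes over then)."""
    tick = count()
    # max-heap on score (negated), with an insertion counter as tiebreaker
    frontier = [(-score(initial), next(tick), initial)]
    seen = set()
    best = None
    examined = 0
    while frontier and examined < max_positions:
        _, _, pos = heapq.heappop(frontier)
        examined += 1
        if is_complete(pos):
            # a completed position may still be rejected, e.g. because it
            # leaves a unit exposed to counterattack
            if is_acceptable(pos) and (best is None or score(pos) > score(best)):
                best = pos
            continue
        for child in expand(pos):
            key = dedup_key(child)
            if key in seen:  # ...AB already seen, skip ...BA
                continue
            seen.add(key)
            heapq.heappush(frontier, (-score(child), next(tick), child))
    return best
```

The real implementation differs in many details (incremental scoring, per-unit assignment generation), but the shape - best-first expansion, dedup, and a separate acceptance check on completed positions - is the same.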

On to the pictures!

1. To limit memory consumption there is a hard stop after 32000 positions per run. Here we see that most runs are quite short and do not reach that limit.
positions-per-run.png

2. How many positions do we need to evaluate to find the highest score? Only very few runs really need the full length (most dots are at the bottom). Blue and red dots do not overlap, meaning sometimes the overall best score (blue) was an intermediate position whose descendants had to be discarded. The red dots are completed and accepted positions. On average we need to look at 1355 positions to find the best one, but of course there is high variance.
max-index.png
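
For the curious, the "index of the best position" statistic can be computed from per-run logs roughly like this. The log format (one list of position scores per run, in evaluation order) is my assumption for illustration, not the actual file format:

```python
def best_score_index(scores):
    """1-based index at which the running maximum first reaches the best score."""
    best, best_idx = float("-inf"), 0
    for i, s in enumerate(scores, start=1):
        if s > best:  # strictly greater: first occurrence of the maximum wins
            best, best_idx = s, i
    return best_idx

def average_best_index(runs):
    """Mean over all runs of the index where the best score was found."""
    return sum(best_score_index(r) for r in runs) / len(runs)
```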


3. What happens if we limit the maximum number of positions to evaluate? Not much. If we only allow 4000 positions, the best score is still 99.6 percent as good.
loss-limit-total.png

4. What happens if we limit the number of completed+accepted positions before we terminate? For the test set I allowed 1280 completed positions max. If the limit were 20 instead, the resulting scores would be 96% as good. This is the main mechanism for setting difficulty levels!
loss-limit-completed.png
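
The cap experiments in figures 3 and 4 amount to replaying each run's logged scores with an artificial cutoff and comparing the capped best to the uncapped best. A minimal sketch, again assuming a hypothetical list-of-scores-per-run log format and positive scores (so the ratio is meaningful):

```python
def capped_quality(runs, cap):
    """Mean ratio of (best score within the first `cap` positions) to the
    overall best score, averaged over all runs. 1.0 means no loss."""
    ratios = []
    for scores in runs:
        full_best = max(scores)
        capped_best = max(scores[:cap])
        ratios.append(capped_best / full_best)
    return sum(ratios) / len(ratios)
```

The same function works for both experiments; only the meaning of the index changes (all evaluated positions for figure 3, completed positions only for figure 4).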

That's all for now ...
 
Some questions about the Tactical AI
1. Does the AI take into account enemy position changes during their turn? Like from withdraw, Heavy Charge, Withering Fire, etc.
2. Does the AI factor in any damage done not as a result of an attack, like AOE damage on pillage (if that's implemented)?
3. Does the AI factor in damage taken after their turn, e.g. Citadel, Pilum, etc?
4. Can the AI handle gaining a unit via combat during their turn, e.g. unit capture?
 
General answer: the simulation is simplified and approximate only. Because of randomness, the outcome of attacks is never quite certain anyway. So when a simulation result is executed, after each step I check whether the outcome matches the expectation; if not, another round of simulation is done with new initial settings.

Individually:
1. No, enemies are not expected to move.
2. Yes, but only approximately.
3. Yes, they consider "danger" from both units and plots.
4. They actually try to capture civilians, but not combat units.
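
The execute-and-verify loop from the general answer can be sketched as follows. All names here are placeholders of mine, not the mod's actual functions:

```python
def execute_with_replanning(state, simulate, execute_step, matches, max_replans=3):
    """Execute a simulated plan one step at a time; if the observed outcome
    diverges from the simulated expectation, re-simulate from the current
    state. `max_replans` is an invented safeguard against endless loops."""
    plan = simulate(state)  # list of (step, expected_outcome) pairs
    replans = 0
    while plan:
        step, expected = plan[0]
        state = execute_step(state, step)
        if matches(state, expected):
            plan = plan[1:]          # outcome as predicted: continue the plan
        elif replans < max_replans:
            replans += 1
            plan = simulate(state)   # surprise (RNG, capture, etc.): replan
        else:
            break                    # give up; other AI logic takes over
    return state
```

This is why cases like unexpected unit captures or vanished attack targets end up "handled": the divergence is detected after the step and simply triggers a fresh simulation from the new situation.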
 
The reason I asked (4) is that if a unit captures another military unit, it stays on the original tile after attacking. This may create a discrepancy between the simulation and the actual execution, e.g. other units being blocked because the attacker did not move forward. Similar to (1), where in actual execution some units may fail to attack because the target is no longer there.

"so when a simulation result is executed after each step i check if the outcome matches the expectation. if not, another round of simulation is done with new initial settings."
I guess this is the solution to the problem. So we can expect these cases to be "handled", just not optimally performance-wise?
 
While I don't understand the graphs above...
My experience is that the AI really struggles with special movement bonuses, which makes the AI very vulnerable to Songhai and the Iroquois in particular (the effect is exaggerated on Pangaea-like maps).
I'm not a fan of these movement promos.
The Mt. Kilimanjaro promo would in theory create a similar issue, but it's rare and I think the AI isn't as good at using it.

While the AI can be very resilient and require a ton of grind to conquer (it relies a lot on retreats, moving back and forth), AIs caught in multi-pronged wars can just crumble and fall like a house of cards.
This is probably hard to solve, and maybe it is as it should be.
 