Just pushed to SVN (5260):
- Fixed attacks that were not consuming movement points when they should
- Fixed nasty bug with great commander AI that tied up a hunter and another unit to no effect in a city
- Tweaked AI building order so that research is not queued up behind food indefinitely
- Tweaked AI promotion evaluation to try to favor subdue boosters on hunters
The last two of these are part of a general work-in-progress to try to analyze and address why the AI is falling behind in early research. This process is not yet complete.
@AIAndy/DLL modders:
The last one here is a total kludge. The gist is that the AI had no code for evaluating outcome probability modifiers that apply to promotions. However, doing this properly would be incredibly difficult, because:
- The modifiers are not properties of the promotion, they are properties of outcomes that apply to promotions, so a search over outcomes to see what maps to a given promotion is necessary (that part is easy though)
- The outcomes do not themselves say what they do - another search, this time over missions, is needed to find out which missions can trigger them (so another level of association back to the promotion we're ultimately trying to evaluate)
- Whether the outcome should be considered at all is hard to evaluate - the triggering conditions cannot be (fully) checked, because we are not in the context where they would actually trigger; we are in the context of a unit that might be able to trigger them at some future time and wants to know now whether a promotion is worthwhile
- Whether the outcome is desirable can itself be a contextual decision, and again (as with the previous point) the context we're evaluating in is potentially many turns prior to when the actual outcome trigger will occur
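The two levels of association described above can be sketched roughly as follows. All of the struct and function names here are hypothetical, simplified stand-ins for the actual DLL info classes, purely to show the shape of the lookup chain:

```cpp
#include <string>
#include <vector>

// Hypothetical stand-in: an outcome whose chance modifier references a promotion.
struct OutcomeInfo {
    std::string name;
    std::string promotionWithModifier; // promotion whose outcome chance this modifies
    int chanceModifierPercent;         // the modifier itself
};

// Hypothetical stand-in: a mission and the outcomes it can trigger.
struct MissionInfo {
    std::string name;
    std::vector<std::string> outcomes;
};

// Step 1 (the easy part): search all outcomes for those whose modifiers
// map back to a given promotion.
std::vector<const OutcomeInfo*> outcomesForPromotion(
    const std::vector<OutcomeInfo>& allOutcomes, const std::string& promotion)
{
    std::vector<const OutcomeInfo*> result;
    for (const OutcomeInfo& outcome : allOutcomes)
        if (outcome.promotionWithModifier == promotion)
            result.push_back(&outcome);
    return result;
}

// Step 2: search all missions for those that can trigger a given outcome,
// completing the second level of association back to the promotion.
std::vector<const MissionInfo*> missionsForOutcome(
    const std::vector<MissionInfo>& allMissions, const std::string& outcome)
{
    std::vector<const MissionInfo*> result;
    for (const MissionInfo& mission : allMissions)
        for (const std::string& o : mission.outcomes)
            if (o == outcome)
                result.push_back(&mission);
    return result;
}
```

Even with both searches done, the last two bullets still apply: knowing *which* missions can trigger the outcome says nothing about whether they will, or whether the result is desirable, in the context where the promotion choice is being made.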
The kludge I have put in simply evaluates the total outcome modifier percentage (for all outcomes) a promotion provides and assumes that is a proportionate measure of desirability specifically for units of AI type UNITAI_HUNTER. In general this isn't very good (outcomes could be bad, outcomes could be utterly unrelated to hunting, outcomes might only be triggerable in contexts that are not likely to occur [e.g. requires a tech 2 eras in the future], etc.), but it works adequately for current usage (where the dominant use of outcomes is animal subduing).
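Boiled down, the kludge amounts to something like this. This is a minimal sketch, not the actual DLL code; the names are illustrative:

```cpp
#include <vector>

// Illustrative stand-in: one outcome-chance modifier a promotion grants.
struct OutcomeChanceModifier {
    int modifierPercent; // e.g. +25% chance of some outcome
};

// The kludge: sum every outcome modifier the promotion provides, regardless
// of what the outcomes do or whether they can actually trigger, and treat
// the total as a desirability score - but only for UNITAI_HUNTER units.
int hunterOutcomeValue(const std::vector<OutcomeChanceModifier>& modifiers,
                       bool isHunterAI)
{
    if (!isHunterAI)
        return 0; // other unit AIs ignore outcome modifiers entirely

    int total = 0;
    for (const OutcomeChanceModifier& mod : modifiers)
        total += mod.modifierPercent;
    return total;
}
```

The weakness is visible right in the sketch: a +25% modifier on a bad or irrelevant outcome scores exactly the same as a +25% modifier on a subdual outcome.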
What I'd like to brainstorm a bit is a better mechanism we could migrate to over time, that doesn't require a full evaluation (due to the difficulties listed above), but would be more accurate for a broader spectrum of outcome mechanic usage.
My own suggestion would be new tags on <OutcomeInfo> that provide a unitAI-specific AI relevance (not a weight in the traditional sense, since that requires correct scaling, which is almost impossible for an XML modder to determine). Thus the subdual outcomes would have a relevance set to a high value (between 0 and 100, say, so near the 100 end) for UNITAI_HUNTER and a low value for most other AIs. This could generalize reasonably well (e.g. high relevance to UNITAI_ATTACK for the outcome that leads to captives), but is easy to implement, while still being accessible to the XML modder to set up and scale.
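As a rough illustration, the proposed tags might look something like this. The tag names here are hypothetical, purely to show the shape of the idea, not a finalized schema:

```xml
<OutcomeInfo>
    <Type>OUTCOME_ANIMAL_SUBDUED</Type>
    <!-- Hypothetical per-unitAI relevance values on a 0-100 scale -->
    <AIRelevances>
        <AIRelevance>
            <UnitAIType>UNITAI_HUNTER</UnitAIType>
            <iRelevance>90</iRelevance>
        </AIRelevance>
        <AIRelevance>
            <UnitAIType>UNITAI_ATTACK</UnitAIType>
            <iRelevance>10</iRelevance>
        </AIRelevance>
    </AIRelevances>
</OutcomeInfo>
```

The promotion evaluation could then weight each outcome's chance modifier by the relevance value for the unit's own AI type, sidestepping the full contextual evaluation entirely.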
Opinions and suggestions welcome...
WOW, now why didn't someone else come up with something like this YEARS ago? This is absolutely needed.