How weak is the AI?

The optimistic part of me says, "Yeah, let's keep showing them what's possible, and one day a game will ship with great AI!" And then another part of me says, "Or they'll just keep leaving it up to modders." :blush:
Part of the problem of having the game "ship with great AI" is that great AI requires great strategy...great strategy is only worked out after
1. the players have gotten a hold of it
2. the rules are stable

The game can definitely ship with MUCH better AI, but the AI that it ships with will always be worse than AI modded into it after people have worked out the strategy implications of the last patch.
 
Part of the problem of having the game "ship with great AI" is that great AI requires great strategy...great strategy is only worked out after
1. the players have gotten a hold of it
2. the rules are stable

The game can definitely ship with MUCH better AI, but the AI that it ships with will always be worse than AI modded into it after people have worked out the strategy implications of the last patch.

That's true, except for military tactics. No reason why the AI couldn't be strong at that right away.
 
Part of the problem of having the game "ship with great AI" is that great AI requires great strategy...great strategy is only worked out after
1. the players have gotten a hold of it
2. the rules are stable.

Great point! There's definitely a limit in that direction, and modders, given exactly the same tools, should in principle always be able to outdo the original developers.

But I do think the Civ 5 AI weighting misses a lot of very obvious strategy elements. It shouldn't take most people more than a couple of games to realize that ranged units are really powerful because, unlike melee units, they don't get hurt during their own attacks and because they can project their force over distance. But we see the AI build units pretty much at random. It's also pretty elementary strategy that if you're playing for a longer game, putting resources into resource generators is very important. This leads to the natural conclusion that investing in food, production, and science is a great decision. But instead of doing that, we see the AIs spam walls, happiness buildings when they're already at +20, and triremes in lakes.
It just seems the one(s) entering the weights either had no interest in tweaking them later on or were given no time to do so.
 
But the natural reaction to that fact from the developer should be to balance ranged units, not make the AI spam ranged units.

Making a turn-based game challenging has a lot to do with balancing it. The AI doesn't understand what is balanced and what isn't, so every time it takes a "noob trap" option, regardless of whether it makes sense, the player gains an advantage. For example, taking Piety with Byzantium makes sense, but it's not bad AI that is at fault for that tree being crappy.
Building useless walls and making other poor decisions is a weighting problem, yes.

Challenge is also affected by other issues, like bugs. For example, the white peace bug in Civ 5 trivializes a lot of defensive wars, decreasing the challenge even further; it's a bug that most people now use without even acknowledging it as one. Some of it is also AI logic, like how some of the AI's city strategy and economic triggers are sometimes badly implemented.
 
I'm just disappointed that the AI seems to rely more heavily on front-loaded bonuses rather than game-long ones, which makes the first few eras an exercise in tedium (and makes early wonders practically impossible) and the later eras boring. I'm also sad that the AI is reduced from Prince level to Chieftain.
They also still seem to struggle with upgrading military units and escorting settlers.

I'm not hoping for an amazing AI, just for it to have been improved, especially in some areas that have been complained about over and over again and that modders have managed to address.
 
The AI has always played on Chieftain, though I agree with you that front-loaded bonuses suck.

Prior to Civ V, the AI bonuses for everything were found in fields labeled AI_ rather than borrowing the same fields used for the human.

Vanilla Civ V introduced having some AI fields borrow the human field (with the level borrowed from set to Chieftain) while others still used the AI_ fields, and had happiness multiply across these values.
G&K worked exactly the same way. BNW introduced the AI Default Happiness Level field and switched the AI to use that instead of Chieftain, but kept multiplying the AI happiness values from the human difficulty level and the AI Default Happiness Level field.
(This was primarily so they could give the AI a science bonus relative to the human at all difficulty levels, to offset the new per-city science penalty for AIs whose low science flavors wouldn't even build Libraries, but also to remove the AI's extra happiness per luxury while starting it with a larger flat amount of happiness.)

Front-loaded AI bonuses have been in every single version of Civ, all the way back to Civ I. It has always been a case of starting out behind but outplaying the AI to catch up and pass them.

At a minimum, Civ VI (at least on CD release) is going to play at Chieftain level and won't be multiplying happiness bonuses across two fields. I strongly suspect, though, that either a balance patch or the first expansion will reinstate the "AI Default Handicap Level" to allow finer tuning of AI bonuses. But with Libraries giving flat bonuses instead of 50%, and with Civ VI continuing to give direct science yield (and now also direct culture yield) from city population, it will be much easier for the AI to turn its increasing food advantage at higher difficulty levels into progress through the tech & civic trees than before.
 
Front-loaded AI bonuses have been in every single version of Civ, all the way back to Civ I. It has always been a case of starting out behind but outplaying the AI to catch up and pass them.
And I've always hated that.

The rest of your post was very informative, thanks.
 
But I do think the Civ 5 AI weighting misses a lot of very obvious strategy elements. It shouldn't take most people more than a couple of games to realize that ranged units are really powerful because, unlike melee units, they don't get hurt during their own attacks and because they can project their force over distance.
Whether this is true depends on the relative strength of ranged attacks vs. melee attacks and other mechanisms. For example, the balance could be such that mass cavalry is the way to go.

On top of this, the power of ranged units relies on the ability of the user to keep them safe. This automatically makes them less useful in the hands of the tactically weak AI.

That being said, the weights could typically be a lot better even at launch. One approach would be to set the weights using an evolutionary algorithm and have the AI play itself, a lot. This would require some significant dev investment to set up (e.g. making it possible for different AI players to use different weights, possibly adding an option to run an all-AI game without the graphics engine, etc.). That effort may well pay off in the long run and save time otherwise wasted tinkering with weights manually.
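
To be concrete, the sketch below is roughly what I have in mind. The run_headless_game hook is made up, standing in for the engine support that would need to be built, and the flavor keys are just illustrative; none of this is a real Civ API:

```python
import random

def run_headless_game(weight_sets, max_turns=330):
    # Stand-in for an engine hook that plays one all-AI game with the
    # graphics engine disabled and returns a final score per player.
    # The real thing would be engine-specific; this stub just returns
    # random scores so the harness is runnable.
    return [random.uniform(0, 2000) for _ in weight_sets]

# Give each AI player its own weight set, so several candidate
# weightings can be compared within a single game.
candidates = [{"FLAVOR_OFFENSE": random.randint(0, 10),
               "FLAVOR_SCIENCE": random.randint(0, 10)}
              for _ in range(8)]

for weights, score in sorted(zip(candidates, run_headless_game(candidates)),
                             key=lambda pair: -pair[1]):
    print(round(score), weights)
```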
 
Whether this is true depends on the relative strength of ranged attacks vs. melee attacks and other mechanisms. For example, the balance could be such that mass cavalry is the way to go.

On top of this, the power of ranged units relies on the ability of the user to keep them safe. This automatically makes them less useful in the hands of the tactically weak AI.

That being said, the weights could typically be a lot better even at launch. One approach would be to set the weights using an evolutionary algorithm and have the AI play itself, a lot. This would require some significant dev investment to set up (e.g. making it possible for different AI players to use different weights, possibly adding an option to run an all-AI game without the graphics engine, etc.). That effort may well pay off in the long run and save time otherwise wasted tinkering with weights manually.

Trias, the issue with using some sort of learning algorithm to 'learn' what the weights should be is your training set. Specifically, you are suggesting that the AI learn how to play the game by playing against itself. If your algorithm works, what the AI will have 'learned' is how to play against other AIs. In essence you will have overtrained your AI, because it's not learning to play against the entity you want it to be playing against (humans).


I do appreciate Siesta's commentary a good deal. Assuming the flavors are publicly accessible, tuning the AI should be as simple as changing the values, no? I will point out that certain tactical issues with CiV AI (like move and shoot) have already been seen in CiVI, so hopefully AI tuning will be mainly at the strategic level with flavors.
 
Actually... I'm starting to think the game doesn't need strong AI at all. Strong means focused, and playing against a focused AI is not fun. So having a weak AI without a definite plan is not only easier to implement and easier to calculate, it's a better player experience. And there are always AI bonuses for challenge, though even then the AI needs to develop different areas of the game.
 
Trias, the issue with using some sort of learning algorithm to 'learn' what the weights should be is your training set. Specifically, you are suggesting that the AI learn how to play the game by playing against itself. If your algorithm works, what the AI will have 'learned' is how to play against other AIs. In essence you will have overtrained your AI, because it's not learning to play against the entity you want it to be playing against (humans).

It should still provide a significantly better baseline than the current "finger in the wind" initial weight values, particularly since "playing the game" involves a lot more than just interacting with other players. In essence, such a learning exercise would teach the AI to follow a path that efficiently gets it to a victory appropriate to its set flavours while not losing to other agents. In particular, it should produce good weights for techs that give no immediate benefit (associated with current flavours/agendas) but give access to techs that do, etc.

Obviously, learning against other AIs is not ideal (but it has the advantage that you can potentially run a lot of iterations in a short time span). For this reason you probably should not let it run too long, to prevent overspecialization.
 
Actually... I'm starting to think the game doesn't need strong AI at all. Strong means focused, and playing against a focused AI is not fun. So having a weak AI without a definite plan is not only easier to implement and easier to calculate, it's a better player experience. And there are always AI bonuses for challenge, though even then the AI needs to develop different areas of the game.

There is certainly an optimum there somewhere. You don't want the AI to be too focused. On the other hand, you do want the AI to play solidly enough that it can keep up in the late game. This means it needs to properly utilize some of the compound bonuses that are inherent to Civ. If it doesn't, it will invariably fall behind as the player snowballs his/her bonuses, no matter what bonuses you give it.

The ideal AI will play a solid game, as in not making stupid mistakes. However, it shouldn't be overly focused (doing deep beelines to certain techs to get an advantage, etc.).
 
It should still provide a significantly better baseline than the current "finger in the wind" initial weight values, particularly since "playing the game" involves a lot more than just interacting with other players. In essence, such a learning exercise would teach the AI to follow a path that efficiently gets it to a victory appropriate to its set flavours while not losing to other agents. In particular, it should produce good weights for techs that give no immediate benefit (associated with current flavours/agendas) but give access to techs that do, etc.

Obviously, learning against other AIs is not ideal (but it has the advantage that you can potentially run a lot of iterations in a short time span). For this reason you probably should not let it run too long, to prevent overspecialization.

I disagree. Generally speaking, you are not guaranteed to learn anything useful if your training data isn't correct.

Shall we talk in specifics?

First, what is the update mechanism of your algorithm? This is crucial for figuring out what you want the algorithm to 'learn'.
 
I don't think we will see much. You only get to see how bad the AI truly is when you let it play against a human player. If you pit weak AI against weak AI it will all look on par. Especially if the weakness is on the strategic side (slow teching, unit upgrading, diplomacy,...)

Not necessarily. AI can be dumb with no player involvement whatsoever (e.g. a ranged naval unit shooting at a city forever until the city sinks it).

Sure, two bad AIs will probably not result in one pulling off an amazing strategic victory over the other, but we'll be able to see them both make bad decisions.
 
I disagree. Generally speaking, you are not guaranteed to learn anything useful if your training data isn't correct.

Shall we talk in specifics?

First, what is the update mechanism of your algorithm? This is crucial for figuring out what you want the algorithm to 'learn'.

At a basic level (not too concerned with computational efficiency or convergence rate), I was thinking about the following:
1. Create X "versions" of the AI by setting values for the various AI weights/flavours.
2. Let these play a fairly large number of games together.
3. Evaluate the performance of the AIs against some metric. (You could take the game score, or something slightly different depending on the desired behaviour of the AI. For example, while extremely early victories produce a high score, that is not something we may want to encourage. On the other hand, we would want to reward AIs that are successful in all victory conditions.)
4. Keep the top-performing AIs. (This may need a secondary criterion to ensure a healthy "genetic diversity". For example, you want to keep around some AIs that are extremely effective at early victories, to ensure the result is robust against various play styles.) Create a new set of X "versions" by applying minor variations to the weights, and repeat.
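
A rough, runnable sketch of that loop. All the engine-facing parts are stubbed out, and the flavor names, population size, and mutation scheme are placeholders rather than actual game values:

```python
import random

POP_SIZE = 16          # X "versions" of the AI
GENERATIONS = 50
GAMES_PER_GEN = 100
KEEP = 4               # top performers kept each generation (step 4)
FLAVORS = ["FLAVOR_GROWTH", "FLAVOR_SCIENCE", "FLAVOR_OFFENSE", "FLAVOR_RANGED"]

def random_weights():
    return {f: random.randint(0, 10) for f in FLAVORS}

def mutate(weights):
    # Step 4: apply a minor variation to a surviving weight set.
    child = dict(weights)
    f = random.choice(FLAVORS)
    child[f] = max(0, min(10, child[f] + random.choice([-1, 1])))
    return child

def play_game(players):
    # Stand-in for step 2's headless game runner: a real version would
    # launch the engine without graphics and return step 3's metric
    # (e.g. game score) for each participating weight set.
    return [random.uniform(0, 2000) for _ in players]

population = [random_weights() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    totals = [0.0] * POP_SIZE
    for _ in range(GAMES_PER_GEN):
        seats = random.sample(range(POP_SIZE), 8)  # 8 candidates per game
        scores = play_game([population[i] for i in seats])
        for i, score in zip(seats, scores):
            totals[i] += score
    ranked = sorted(range(POP_SIZE), key=lambda i: -totals[i])
    survivors = [population[i] for i in ranked[:KEEP]]
    # Step 4's secondary diversity criterion would go here, e.g.
    # always retaining the best early-victory specialist.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - KEEP)]

print("best weights after training:", population[0])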
 
I don't think we will see much. You only get to see how bad the AI truly is when you let it play against a human player. If you pit weak AI against weak AI it will all look on par. Especially if the weakness is on the strategic side (slow teching, unit upgrading, diplomacy,...)

To some degree you are right; however, we were able to see some flaws with the AI, such as its refusal to upgrade units (presumably caused by the AI's strange problem with trading for otherwise unobtainable resources) and its indecisiveness when pursuing victory types.
 
At a basic level (not too concerned with computational efficiency or convergence rate), I was thinking about the following:
1. Create X "versions" of the AI by setting values for the various AI weights/flavours.
2. Let these play a fairly large number of games together.
3. Evaluate the performance of the AIs against some metric. (You could take the game score, or something slightly different depending on the desired behaviour of the AI. For example, while extremely early victories produce a high score, that is not something we may want to encourage. On the other hand, we would want to reward AIs that are successful in all victory conditions.)
4. Keep the top-performing AIs. (This may need a secondary criterion to ensure a healthy "genetic diversity". For example, you want to keep around some AIs that are extremely effective at early victories, to ensure the result is robust against various play styles.) Create a new set of X "versions" by applying minor variations to the weights, and repeat.

That, specifically, will only train the AI to play against the AI. It sounds very much like a swarm approach to optimization, which, like many supervised algorithms, will tend to overtrain.

A better metric would be a ranking of civs' output over time, e.g. 'a good civ should have this level of output in each category: avg. production per city of 8, faith production of 10, etc., at turn 50/100/150/etc.' But then you don't necessarily need to have AIs play each other to do this type of learning. You could, but it would be fairly counterproductive, because as soon as you get interaction, the AIs will 'learn' how to do the above against other AIs, and not human players.
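
Something like this is what I mean by a benchmark-based fitness function. The checkpoint turns and target yields are purely illustrative, and the game_log format is assumed to be whatever the game runner dumps; no interaction with other players is needed to score it:

```python
# Target average per-city outputs at checkpoint turns; the 'good civ'
# profile described above. All values here are made up.
BENCHMARKS = {
    50:  {"production": 8.0,  "science": 6.0,  "faith": 4.0},
    100: {"production": 14.0, "science": 20.0, "faith": 10.0},
    150: {"production": 22.0, "science": 45.0, "faith": 14.0},
}

def fitness(game_log):
    """Score one AI's game against the benchmark profile.

    game_log maps a turn number to that AI's average per-city outputs,
    e.g. {50: {"production": 7.2, ...}, ...}."""
    error = 0.0
    for turn, targets in BENCHMARKS.items():
        observed = game_log.get(turn, {})
        for yield_name, target in targets.items():
            # Relative shortfall/overshoot per yield; lower is better.
            error += abs(observed.get(yield_name, 0.0) - target) / target
    return -error  # higher fitness is better

# Example: a log from a single (possibly solo) AI game.
log = {50:  {"production": 7.2,  "science": 5.1,  "faith": 4.5},
       100: {"production": 12.0, "science": 22.0, "faith": 9.0},
       150: {"production": 25.0, "science": 40.0, "faith": 15.0}}
print(round(fitness(log), 3))
```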
 