Game AI & net based machine learning

But my point is that that isn't the slightest issue. It can go through each possibility pretty much exactly once and rule out 99% of them based on the quality of the outcome it achieves at the end of the game.

Go cannot do that; all moves are inherently more valuable and dependent on external factors.

So yes, a Civ AI would require time and space, but it's not complex in the same way.
But how can the AI rule out "99%" of the possibilities when all moves are less "valuable" and won't affect the outcome of the game independently?

It doesn't work on a fixed map; each start changes the relative importance of each move, so it can't simply "go through each possibility once". It has to go through a number of possibilities (one that I'll let someone better than me at mathematics evaluate) determined by all the possible map variations.

If I had to guess a number, I'd say it would rule out less than 1%: the really bad ones (like deleting the settler on turn 1).
 
I have an idea that would probably improve the combat AI significantly by, well, cheating. As such, this "cheating AI" would probably have to be something you can switch off so it doesn't frustrate players.

This AI would essentially buy better performance with more computation, and it would work as follows.

Define multiple (say, 10) different tactical decision rules for when the AI is in direct combat with the human player, that is, when units fight one another.

1) Trial phase
Move the units involved in the combat according to each of these tactics in turn, without rendering the combat visually.
2) Selection phase
Select whichever of the tactics resulted in the best outcome. This outcome could be measured, for example, as the amount of enemy production destroyed by killing units.

This algorithm could never be worse than a single AI combat tactic, because all of the tactics would be tested. Of course, there would be issues if, for example, the random seed differed between the trial phase and the selection phase.

This approach would essentially let the player grant the AI the ability to use the Save/Load "spell".

Although this is not related to deep learning per se, many posts in this thread suggest hard-coded rules being part of the learning AI, so in that respect I guess it is related.
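
(For illustration, here is a minimal Python sketch of the trial/selection idea above. The combat-state copy, the tactic callables and the evaluation function are all hypothetical stand-ins for whatever the engine would actually expose, not real game code.)

Code:
import copy
import random

def choose_best_tactic(combat_state, tactics, evaluate, seed):
    """Trial phase: silently simulate the combat once per tactic on a copy of
    the state. Selection phase: keep the tactic with the best evaluated outcome."""
    best_tactic, best_score = None, float("-inf")
    for tactic in tactics:
        trial = copy.deepcopy(combat_state)  # hidden simulation, nothing is rendered
        rng = random.Random(seed)            # reuse the live seed so the trial matches the real resolution
        tactic(trial, rng)                   # play the combat out under this rule set
        score = evaluate(trial)              # e.g. enemy production destroyed minus own losses
        if score > best_score:
            best_tactic, best_score = tactic, score
    return best_tactic                       # the winning tactic is then replayed "for real"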
 
It doesn't work on a fixed map; each start changes the relative importance of each move, so it can't simply "go through each possibility once". It has to go through a number of possibilities (one that I'll let someone better than me at mathematics evaluate) determined by all the possible map variations.

Now you have me curious how many different starting positions are possible. Basically the number of maps times the number of starting positions per map.
 
Hey, I've signed up on the site because of this thread. I find this stuff fascinating, but agree with the last 2 posts in that developing a general AI for games like Civ using machine learning isn't presently achievable. But, do you think it'd be possible to train an AI only for combat scenarios, and have the game use its deep learning derived algorithms only when it comes to warfare? I'm talking just moving units, attacking and defending, coming up with tactics to complete military objectives that a general, normal, preprogrammed AI sets.

For example, the general AI sets the objective "I don't want to lose X city" and hands control of its military (or a part of it) to the deep learning AI that's trained for warfare, in order to defend the city.

The military AI doesn't need to be perfect, just vastly more competent and adaptable in warfare scenarios than the current game AI is. In this split-AI system, grand-scale strategic errors that leave some parts of a civilization vulnerable would be the general AI's fault for pointing its military to a specific zone of the map. This general AI could also handle the allocation of resources for each combat scenario it faces simultaneously, like being attacked by civs on opposing sides of its territory. The deep learning military AI would just "make do" with what's available to it, including maybe withdrawing troops when it's overwhelmed and can't complete the task it's been given by the general AI. That would prompt the general AI to reassign the military AI to another task (like defending the closest city). A system like this might preserve the organic feeling of Civ leaders and the bias they show towards specific paths, despite those paths working against their chances of winning, but save us from the stupid behaviour of enemy units when it comes to fighting.

I'm not exactly an expert on AI or deep learning, but I'd guess the limited scope of moves and situations might reduce the complexity of the calculations enough for it to work in the near future. It would be a bit like playing chess with civ units, taking terrain modifiers and nearby reinforcements into account.
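
(To make the hand-off concrete, here is a rough sketch of what the interface between the two layers could look like. Every class and field name below is invented for illustration; nothing here comes from the actual game.)

Code:
from dataclasses import dataclass
from typing import List

@dataclass
class MilitaryObjective:
    """Hypothetical hand-off from the general AI to the combat AI."""
    kind: str                  # e.g. "defend_city", "take_city", "raid"
    target_tile: tuple         # (x, y) of the objective
    assigned_units: List[int]  # unit ids the combat AI is allowed to command

class CombatAI:
    """Stub for the trained (or rule-based) tactical layer."""
    def execute(self, objective: MilitaryObjective, game_state) -> str:
        # Move/attack with objective.assigned_units only, and report
        # "done", "in_progress" or "withdrawn" so the general AI can
        # reassign the force when it is overwhelmed.
        raise NotImplementedError

# The general AI keeps the strategy; the combat AI only ever gets a bounded task:
# CombatAI().execute(MilitaryObjective("defend_city", (14, 7), [101, 102, 103]), state)
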
This is more feasible, though you still have to decide how much importance to place on recruitment and constructing defensive structures before the "AI general" comes into play.
 
Hey, I've signed up on the site because of this thread. I find this stuff fascinating, but agree with the last 2 posts in that developing a general AI for games like Civ using machine learning isn't presently achievable. But, do you think it'd be possible to train an AI only for combat scenarios, and have the game use its deep learning derived algorithms only when it comes to warfare? I'm talking just moving units, attacking and defending, coming up with tactics to complete military objectives that a general, normal, preprogrammed AI sets.

I think for combat scenarios it may be viable and a very interesting thing to test. Generating combat scenarios and evaluating them should be fairly easy, so training should be much faster and easier than for complete games.
 
You're stuck working with incomplete information, and "possible moves" and "viable moves" have a large gulf between them in both games.
If you didn't, you wouldn't need ML in the first place.
 
As someone working with it, I'd say collect the in-game data in a centralized place. Then we can start debating amongst practitioners on the viability of the approach. I actually have no doubt that some approaches are viable. Someone mentioned reinforcement learning before diluting their point into pointless considerations of what the world wants. My intuition is that it is the way to go: use models pretrained on data from hardcore Civ gamers and somehow tailor them for different difficulty levels. Again, that's intuition; it does not beat experimenting with the real data. As far as I remember, Civ does keep a log of everything happening in game. So we just need an API to send the logs, store them... and access them, and we can start talking.
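
(Just to illustrate the kind of plumbing I mean: the endpoint URL and payload fields below are made up, but shipping each turn's log to a central store could be as thin as this.)

Code:
import json
import urllib.request

def send_turn_log(turn_number, events, endpoint="https://example.org/civ-logs"):
    """Post one turn's worth of game events (a list of dicts) to a central store."""
    payload = json.dumps({"turn": turn_number, "events": events}).encode("utf-8")
    req = urllib.request.Request(endpoint, data=payload,
                                 headers={"Content-Type": "application/json"},
                                 method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 200/201 on success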
 
but I can tell you that in Civ I rarely come across single-turn decisions that make the difference across the entire board, spelling out victory or defeat

The state of Civ 6 at present is that you can beat deity despite making a large number of mistakes. However, each of these decisions is still potentially decisive against better competition. I've watched many players make those "every-turn, mundane" decisions and lose as a cumulative result of making them, often while refusing to acknowledge that decisions pointed out as mistakes were in fact mistakes.

Players screw up in this game constantly. It's just not enough to lose; the margin for error is big. Go is older, better understood, has better feedback available and easier measures of when a mistake is made (at minimum because there are longstanding bona fide professional players who allow for more thorough analysis of games). Its design is more conducive to competitive scenarios, but I'm not sure it's actually more complex, just as I'm not sure Civ 6's added complexity is consistently meaningful or even positive.
 
Hey, I've signed up on the site because of this thread. I find this stuff fascinating, but agree with the last 2 posts in that developing a general AI for games like Civ using machine learning isn't presently achievable. But, do you think it'd be possible to train an AI only for combat scenarios, and have the game use its deep learning derived algorithms only when it comes to warfare? I'm talking just moving units, attacking and defending, coming up with tactics to complete military objectives that a general, normal, preprogrammed AI sets.

I think that's definitely the main application for machine learning/AI with Civ. It seems like it could be used to create better rule sets for how to arrange units and move them in groups.

Anything at the empire-building logic level seems like it would have issues in a game as heavily modded as Civ, i.e. with any mods changing the tech tree, game pace, etc.
 
Exactly. The AI wouldn't know not to delete its warrior on turn one or not to trade its cities away for luxuries; it would have to learn all of this. Perhaps it could learn this from studying data from human players, but the complexity of the game and the random maps would be crippling. Playing exclusively on a TSL Earth map with the same civs each time would probably help, but not much.

That's basically what the HIRO AI - the one learning from FreeCiv - is doing, from their website (https://arago.co/ai/freeciv/):

The common approach of many AI systems is to take into account all possible combinations of a game and evaluate their "desirability". However, this is not possible if the number of possible states is immensely large and if a game state does not provide all necessary information for the next decisions. That's why HIRO™ is building on a knowledge base that incorporates the knowledge of the best Freeciv players in the world. By using the knowledge about best practices, HIRO™ can ignore many possible actions that are not relevant to win the game and pick only the best actions necessary to win.
They also have a (very high-level) chart comparing the decisions needed in Freeciv vs Go or Chess, and of course Freeciv (based on Civ 2) would be less complex than Civ 6.
 
I am not a programmer, although I have brothers, sons, and grand-nieces and nephews who are. My youngest son is a computer game designer who, about 15 years ago while in game-developer school, won a competition for a game he developed and was invited to compete in the international competition. He tells a story about developing the AI by providing it a set of 'rewards' to encourage certain behavior and then letting it run to see how it developed. The game, which he called Fatal Traction, was a game of robot vehicles that tried to take you out while you did the same to them - obviously nothing close to the complexity of Civ. In the AI development, since he wanted the cars to run fast (obviously to make them harder to hit) and not to turn over (since that would affect both their ability to shoot the opponent and to dodge shots from the opponent), he provided a set of rewards to encourage behavior toward these goals. The map was the Hawaiian Islands, except the sea was only one foot deep between the islands so the cars could move from one island to another. Well, some of the AI, after playing for a while, figured out that if they ran around in the ocean all the time, they would both achieve maximum speed and not turn over, which gave them the rewards they were programmed to look for. Needless to say, he had to change the priorities to ultimately get the AI to function as desired. But it just demonstrates, in a minor way, the difficulties that can be encountered in getting an AI to really achieve the results intended, not to mention how hard it is for us to 'think' in AI terms.

In Civ, with all the literally infinite possibilities and the various desires of the fan base in what they are looking for from these games, achieving an AI that meets all these requirements/desires can be almost impossible. For example, I, like what I think are most players, do not play for the fastest win and don't want the AI to do so either. In fact, I've lost a game by one turn when I was going for an SV while another civ won a CV, and I really didn't care. I want the game to be fun and, as stated earlier, to build my empire to stand the test of time. There are times, especially in the war side of the game, where the AI could obviously use better tactics. Possibly something like what was talked about earlier, with a separate sub-AI handling tactical battles while a general AI decides on the overall focus consistent with its agenda/nature, would improve that aspect of the game. But if Civ were a game where the AI simply went for the domination victory every time and played with perfect knowledge and strategy, I for one would not be playing it.
 
There's no goal to create the strongest game AI. The goal is to provide the best user experience. Just to say:
1. Imagine Firaxis made an AI matching top players. This means the best players would be able to win only 20% of their games with 5 civs on the map. Weaker players would be unable to win at all. The game would fail.
This is of course nonsense, because the standard difficulties would then give a positive modifier (handicap) to the human player, as higher difficulties do now for the AI.

2. Self-taught AI works well if the rules don't change. With more or less regular patches (and during development, new versions come out weekly or biweekly), the AI will not have time to learn. The AI's decisions after each patch will be much weirder than they are now, and it will be impossible to predict how any game change will affect AI behavior.


Far fewer patches would be needed if an intelligent AI showed during testing which strategies are dominant, making the game significantly easier to balance from the outset.
3. One of the roles of AI in games like Civ is to demonstrate various mechanics to the player and push players to use as much of the game as possible. An "effective" AI will use what's effective instead, probably ignoring game elements which don't fit its strategy.
When we see which elements the AI ignores, they can be buffed; this actually helps to make sure all game elements are effective.


4. AI needs to provide some kind of immersion. Remember the rants about vanilla Civ 5 AI backstabbing? Believe me, with an optimally playing AI it would be much worse. This would cause outrage.
If it's an intelligent decision then no one will have any qualms, unless the backstabbing screws them over so much that they lose their chance at winning (in which case they need to use a lower difficulty setting, i.e. a higher bonus handicap).
 
Machine learning / computer vision researcher here. I view Civ 6 as overwhelmingly more complicated than Go for the following reasons, in no particular order:
- delayed reward: the winning conditions aren't met for hundreds of turns, and small decisions like city placement can have a tremendous impact way down the line.
- complex game state: the maps are way bigger than chess or Go boards, and each tile/hex has a number of attributes (terrain, resources) impacting city placement, movement and combat. Choke points and mountains make attacking certain cities a tall order.
- imperfect information: the fog of war covers most of the map, and one must be able to infer the other players' intentions from very little.
- stochasticity: random maps and randomized combat outcomes make everything more complicated.
- diplomacy: reading the players' intentions, bluffing and backstabbing are hard to learn through self-play.
- combinatorial explosion: learning synergies between civics, beliefs and governments would require tremendous play time. Along the same lines, just the order in which troops are moved can have a huge impact, which makes the decisions so much more difficult.

Basically, Go would be about as complex as learning a 19x19 all-grassland map with no cities, just a few warriors and perfect vision. To take into account that the first turns in Go have more possible starting moves, let's make the map 25x25. Way less complicated than an actual game of Civ.

These difficulties could be tackled through a few strategies:
- imitation learning: use SP and MP replays to teach good and bad approaches to the AI so it doesn't start from scratch
- bespoke reward function: to reward the AI for having made good choices before the victory screen (see the sketch after this list)
- limited scope (my favored choice): make the AI only tackle the tactical aspect of combat to start with. Rely on bespoke (stochastic) decision trees for the rest.
This would make the AI competitive without pigeonholing it into cheesy strats. It would make the game enjoyable and the AI's invasions so much scarier.
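
(To give an idea of what a bespoke reward function could look like: the per-turn signals and weights below are invented placeholders, not tuned values, but something this dense would give the agent feedback long before the victory screen.)

Code:
def shaped_reward(prev_state, state):
    """Dense per-turn reward. The state attributes are hypothetical;
    all weights are placeholders, not tuned values."""
    r = 0.0
    r += 0.10 * (state.science_per_turn - prev_state.science_per_turn)
    r += 0.10 * (state.culture_per_turn - prev_state.culture_per_turn)
    r += 0.50 * (state.num_cities - prev_state.num_cities)
    r -= 0.30 * (prev_state.num_units - state.num_units)  # penalize losing units
    r += 5.00 if state.victory else 0.0                   # sparse terminal bonus
    return r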
 
programming a good AI is expensive and doesn't necessarily sell more copies of the game.

Gamification is the dominant paradigm for selling in-game and DLC content. A dynamic, challenging AI doesn't generate more money.

AI is for automation and reducing business costs by putting employees out of work and replacing them with cheaper machines.

There's no incentive to use that kind of technology in PC games, right now IMHO.
 
I have to jump in here, since there are a lot of misconceptions about machine learning and deep learning.

This is pretty much what I do for a living, and sure, deep learning is a nice tool that can do a lot. But it is far from the best tool to use for a game like this.

Deep learning is mostly a special kind of neural network, with sets of convolutional filters that extract features from the data: patterns that lead to a good description of the data, which then feeds a NN that basically works by minimizing an error metric or maximizing a goodness or similarity metric. This is very good for regression, classification, interpolation, prediction, clustering and many other problems. However, not for this game, where playing is in general less about exploring a solution space and more about providing a believable world and characters that have a set of goals.

Approaching Civ as an optimization problem is, however, I think possible, and probably around 1% of players play this way. It would need just a couple of years of full-time AI development, and the result would most likely be unplayable for anyone, including that 1% of players.

First, you need to understand that a Civ game is not best understood as a numerical problem that needs a solution. It is a combination of a lot of small problems that need to be solved in a combination of different ways, very few of them with an optimal approach.

Let's use an example: when going for a Science Victory, there is arguably an optimal approach to building and developing, to maximize science output and to prevent the player from stopping said production. Every decision across many game systems can be made to this end. And honestly, this is the last thing the player wants; among other things, it would require a redesign of all civs to negate all the major advantages in certain play styles.

We don't want an AI where in every game there is a Religious Maximizer, a Science Maximizer, a War Maximizer and so on, where the end goal is just a race against numbers with no way of recovering from a bad position.

On the other hand, when approaching pathfinding and similar problems, there is an optimal way to get from point A to point B as fast as possible. This is a very old problem that can be solved optimally, and it does not require any sophisticated technique at all. Sure, Civ's 1UPT restriction makes this problem more difficult, but an improvement here would not be hard and would benefit the game.
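
(For reference, the textbook solution is just Dijkstra or A* over the tile graph. Here is a bare-bones Dijkstra where neighbours() and move_cost() stand in for whatever the engine actually exposes.)

Code:
import heapq
import itertools

def shortest_path(start, goal, neighbours, move_cost):
    """Dijkstra over an arbitrary tile graph. neighbours(tile) yields adjacent
    tiles, move_cost(a, b) returns the cost of stepping a -> b. Returns the
    cheapest path as a list of tiles, or None if the goal is unreachable."""
    tie = itertools.count()  # tiebreaker so tiles never get compared directly
    frontier = [(0, next(tie), start)]
    came_from, cost_so_far = {start: None}, {start: 0}
    while frontier:
        cost, _, tile = heapq.heappop(frontier)
        if tile == goal:
            path = []
            while tile is not None:
                path.append(tile)
                tile = came_from[tile]
            return path[::-1]
        if cost > cost_so_far[tile]:  # stale queue entry, skip it
            continue
        for nxt in neighbours(tile):
            new_cost = cost + move_cost(tile, nxt)
            if nxt not in cost_so_far or new_cost < cost_so_far[nxt]:
                cost_so_far[nxt] = new_cost
                came_from[nxt] = tile
                heapq.heappush(frontier, (new_cost, next(tie), nxt))
    return None

Swapping in A* just means adding a distance-to-goal heuristic to the pushed priority.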

Another example of a problem that should not be optimized is city building. The optimal approach is to build as many cities as possible, and to optimize each city's layout for the current goal of the civilization. This is another thing we don't want. We don't want the game to turn into a race for geometric expansion in the number of cities, where spending a production turn on a wonder that we fancy is a mistake. This approach can work for StarCraft, where expansion and conquest are the only goals, but not here, where the player (at least I do) stops expanding when feeling comfortable with their empire layout and builds some stuff because it's cool.

There are many subproblems in a game like Civ: trade, city location, pathfinding, tech progression. None of them are truly complicated, and none of them require any fancy deep learning approach.

We also want, however, that no two civs play the same, that they don't always go for the same wonders, that they don't try to conquer city states, and that they use trade and diplomacy in a coherent way but not in an optimal way, since we don't want the AI to backstab or conquer its neighbours in the first turns. The best way to code this is probably behaviour trees, which I think is the system the game already uses.

The actual problem in the Civ AI is to play suboptimally in a believable and challenging way, rather than to use the best strategy. To solve this, the developers worked very hard to make the civs make a lot of decisions that players do not take. And yes, the AI has a lot of bugs and underplays too much in many ways, for example by caring too much about grievances. But overall, this is the right approach for the AI; it just needs minor adjustments.

When I say minor adjustments, I mean it. The players here tend to think the AI is stupid because it makes some big mistakes. It does, but 90% of the decisions the AI makes are not mistakes, and are not stupid.

The AI needs to be more aggressive in wars, to plan cities smarter, to solve some issues in war, to improve pathfinding, to take into account more factors in diplomatic exchanges, to add more sophisticated combat strategies, and just some nuance... Believe it or not, this can be done with a couple of man-months of dev time, as the foundations of the AI are currently solid. (Yes, I mean this too.)

The AI needed more resources, and the designers should have been less worried about playing it safe and more aggressive about focusing on challenge. Civ VI's design philosophy has this problem all over the place, and it lacked the proper care and attention to detail, not only in the AI. But this is honestly quite easy to do without any convoluted pattern recognition approach.

In summary: there is a reason why almost no game uses deep learning for AI. It just almost never works, and there is almost always a better way to solve the problem.
 
The game wouldn't work if played optimally.

For instance, winning by religion would be almost impossible because the mechanics are really bad. The same goes for culture and diplomacy, because it's so difficult to get ahead in those areas in comparison to just defending yourself against them.

The only reliable option would probably be war, and the game would turn into a watered-down HOI or something like that. Not fun at all.
 
I have to jump in here, since there are a lot of misconceptions about machine learning and deep learning. [...] In summary: there is a reason why almost no game uses deep learning for AI. It just almost never works, and there is almost always a better way to solve the problem.

CNNs on their own would indeed not be very useful, but the general idea here would be to use some variation of reinforcement learning (e.g. Q-learning) to learn policies/strategies. Since the map is a regular grid, CNNs would be the right fit to learn an embedding (representation) of the game state to feed to some kind of huge recurrent neural network (most likely an LSTM).

Assisting the AI with closed-form solutions for easy problems like pathfinding is a good idea, but decision trees on their own are very gameable.

Now, I agree that this endeavor would not improve the enjoyment of the game for most players, and if anything it is more of an interesting research challenge. As opposed to StarCraft/Dota, the AI here can't hide its macro limitations behind superhuman micro. Civ is definitely a step closer in the direction of AGI, in my opinion.
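
(A toy version of that architecture in PyTorch; all channel counts, layer sizes and the action count are invented for illustration.)

Code:
import torch
import torch.nn as nn

class CivStateEncoder(nn.Module):
    """Sketch: a CNN embeds the per-turn map tensor, an LSTM carries memory
    across turns, and a linear head scores actions (e.g. as Q-values)."""
    def __init__(self, map_channels=16, embed_dim=256, num_actions=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(map_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # -> (batch, 64, 1, 1)
        )
        self.lstm = nn.LSTM(64, embed_dim, batch_first=True)
        self.q_head = nn.Linear(embed_dim, num_actions)

    def forward(self, map_tensor, hidden=None):
        # map_tensor: (batch, channels, height, width) for a single turn
        feats = self.cnn(map_tensor).flatten(1)              # (batch, 64)
        out, hidden = self.lstm(feats.unsqueeze(1), hidden)  # one-step sequence
        return self.q_head(out.squeeze(1)), hidden           # action scores + carried memory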
 
CNNs on their own would indeed not be very useful, but the general idea here would be to use some variation of reinforcement learning (e.g. Q-learning) to learn policies/strategies. Since the map is a regular grid, CNNs would be the right fit to learn an embedding (representation) of the game state to feed to some kind of huge recurrent neural network (most likely an LSTM).

Assisting the AI with closed-form solutions for easy problems like pathfinding is a good idea, but decision trees on their own are very gameable.

Now, I agree that this endeavor would not improve the enjoyment of the game for most players, and if anything it is more of an interesting research challenge. As opposed to StarCraft/Dota, the AI here can't hide its macro limitations behind superhuman micro. Civ is definitely a step closer in the direction of AGI, in my opinion.

You are right, but as you say, this would be nice for research and probably poor for the game.

I think, however, that decision trees, while they may look gamey, are very good for implementing macro strategies that work differently for different leaders. In other words, gamey is a good thing here, because it is a very good way to code personality.

The main idea is to solve each subproblem in the most efficient way possible. Global strategies should be done with decision trees, tech progression can be solved with basic graph exploration/search, trade should use some very basic heuristic approach, pathfinding can go through any pathfinding algorithm, and diplomacy and the World Congress are also very simple to solve. City layout is also probably not that hard to do well, as cities can be managed mostly one at a time, with some macro objective selection added to the mix.
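
(As a toy example of how personality can live in plain hand-written rules rather than learned weights: the leader archetypes, strategy names and numbers below are all invented.)

Code:
# Same rule engine, different personalities via per-leader weights.
LEADER_WEIGHTS = {
    "warmonger": {"conquer": 0.8, "wonder": 0.1, "religion": 0.1},
    "builder":   {"conquer": 0.1, "wonder": 0.6, "religion": 0.3},
}

def pick_macro_strategy(leader, situation_scores):
    """Score each candidate strategy by (situation fit x personality weight)
    and pick the best. situation_scores would come from simple heuristics
    (relative military strength, free land nearby, faith output, ...)."""
    weights = LEADER_WEIGHTS[leader]
    return max(weights, key=lambda s: weights[s] * situation_scores.get(s, 0.0))

# e.g. pick_macro_strategy("builder", {"conquer": 0.9, "wonder": 0.7}) -> "wonder"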

The only complex thing to do is combat. Admittedly, this is where a machine learning technique would shine. But honestly, combat in Civ is not so deep that it can't be approached in other ways. And most importantly, this would require a dedicated person just to code combat behaviour. We all know this is not likely to happen.
 
To be honest, it doesn't really make sense before we first get a decent AI using conventional means.

It's not like we are at the stage where we need the AI to beat world champions, like we did for Go, Chess and StarCraft. We just need it to look competent enough, with less reliance on crazy bonuses like starting with 3 cities.
 