I read that article today.
Such a thing has already been created for some video games: Michael Robbins used the concept of a genetic algorithm for Supreme Commander 2.
Ternary vs Binary
What you have to realise is that a human runs not in binary, but in ternary.
The sky is blue.
One day you'll wake up and see that the sky is green.
The fact that the sky is blue is irrelevant. You can see for yourself that the sky is green.
In ternary logic, there are three values: true, false, and neither true nor false.
The human brain is constantly able to reassess its known facts. It is also able to synthesise information from many different sources to create new information.
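As a minimal sketch of that idea, here's what a three-valued belief store might look like, where anything unobserved is "neither true nor false" and any known fact can be reassessed when new evidence arrives. The class and method names here are illustrative, not from the article:

```python
from enum import Enum

class Tri(Enum):
    TRUE = "true"
    FALSE = "false"
    UNKNOWN = "neither true nor false"

class Beliefs:
    """A tiny knowledge base that can reassess its known facts."""

    def __init__(self):
        self.facts = {}

    def assess(self, claim):
        # Anything we have no evidence about is neither true nor false.
        return self.facts.get(claim, Tri.UNKNOWN)

    def observe(self, claim, value):
        # New observations overwrite old "facts": the sky can turn green.
        self.facts[claim] = value

beliefs = Beliefs()
print(beliefs.assess("sky is blue"))        # Tri.UNKNOWN
beliefs.observe("sky is blue", Tri.TRUE)
print(beliefs.assess("sky is blue"))        # Tri.TRUE
beliefs.observe("sky is blue", Tri.FALSE)   # you wake up, the sky is green
print(beliefs.assess("sky is blue"))        # Tri.FALSE
```

The point is the third value: a binary system has to commit every claim to true or false up front, while this one can hold "unknown" and revise later.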
IF X THEN Y
Let's take an example from the article I linked - let's say you're attacking my base in Age of Empires.
You attack my base. I have a weak point in my wall - specifically, the wall is not complete.
So you attack through the hole in my wall.
My entire army is defending the hole in my wall. Nothing else is defended.
You lose. In the future, you'll know not to attack the weak spots in my wall. You'll make your own hole.
The computer in this situation however, is hardcoded with "If X, then Y".
So once you've figured out that the computer will always attack the weak spot in your wall, you can always defend it. This is an exploit.
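The "IF X THEN Y" exploit can be shown in a few lines. This is a hypothetical wall model (a list of segment strengths), not anything from a real game's code - the point is only that a hardcoded rule is deterministic, so the human can predict it:

```python
# A hardcoded "IF X THEN Y" attacker: IF there is a weakest wall
# segment THEN attack it, every single time.
def pick_attack_point(wall_strengths):
    return wall_strengths.index(min(wall_strengths))

wall = [10, 10, 0, 10]          # segment 2 is the hole in the wall
print(pick_attack_point(wall))  # 2, every time, so the defender
print(pick_attack_point(wall))  # just garrisons segment 2 and wins
```

Because the rule never changes, the defender only ever has to cover one spot.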
Genetic Algorithm/Historical Information
A genetic algorithm works by natural selection. It creates a gene pool. Those who are successful succeed. Those that aren't, don't. The successful genes are passed to the next generation.
So in the above scenario, natural selection would favour AIs that didn't attack the hole in the wall, and eliminate the rest.
You see the problem with that approach? It's the same problem that is present in real-life biology.
Walking through the weak spot of the wall is actually a really good decision, if I don't realise the weak spot is there.
The other problem is that you have to run training sessions for the AI. This would be excellent if it were done by every player worldwide, but often the AI is just pitted against other versions of itself.
Training against real players, by contrast, gives the AI the ability to learn from the player's style. Imagine an AI that could learn how to play the game from the best players.
Emergent AI
Emergent AI is unpredictable. It doesn't always do what is best, or what is optimal. It's creative and does its own thing.
He means an emergent AI in the sense of an unpredictable AI. Essentially, each individual has its own intelligence, and does what is best for itself while taking into account what the larger group is doing. It has a subcommander so that it doesn't act completely randomly.
With this approach, the decisions the AI could make are ranked, then it applies fuzzy logic to make its choice (if a decision is predictable, it's not the best decision - think targeting in CiV, which always targets damaged units).
The game's difficulty settings come from the fuzzy logic: the easy AIs are more likely to make the mistakes that novice players make.
There's an argument that unpredictable, sub-optimal decisions are better in the long run than predictable, optimal decisions. It prevents the AI from being baited or trapped.
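A sketch of that rank-then-fuzz idea: score the candidate decisions, then pick with weighted randomness instead of always taking the top one. The `sloppiness` knob here is a hypothetical stand-in for difficulty - nothing below comes from an actual game:

```python
import random

def fuzzy_pick(scored_decisions, sloppiness=1.0):
    """Pick a decision with probability weighted by its score.

    Higher sloppiness flattens the weights, so an "easy" AI picks
    more randomly and makes novice-looking mistakes; low sloppiness
    almost always takes the top-ranked decision.
    """
    decisions, scores = zip(*scored_decisions)
    weights = [max(s, 0.01) ** (1.0 / sloppiness) for s in scores]
    return random.choices(decisions, weights=weights, k=1)[0]

options = [("attack damaged unit", 0.9),
           ("attack healthy unit", 0.4),
           ("retreat", 0.2)]

hard_ai = fuzzy_pick(options, sloppiness=0.2)   # usually the top choice
easy_ai = fuzzy_pick(options, sloppiness=5.0)   # much more random
```

Even the "hard" AI occasionally ignores the damaged unit, which is exactly what stops a human from baiting it with a sacrificial target.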
Basically, we're playing a strategy game, right?
Strategy is exploration, not exploitation. There is no such thing as an "optimal" strategy; any strategy that is genuinely optimal demonstrates a balance discrepancy.
Of course, in strategy games there tends to be a favoured strategy, which creates favoured ways to counter it. But there is a very real difference between favoured strategy and optimal strategy.
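That favoured-but-not-optimal balance maps onto the classic exploration/exploitation trade-off, which can be sketched with an epsilon-greedy rule. The strategy names and win rates below are invented for illustration:

```python
import random

def choose_strategy(win_rates, epsilon=0.2):
    # With probability epsilon, explore a random strategy; otherwise
    # exploit the favoured one. A pure exploiter (epsilon = 0) locks
    # onto one strategy and becomes predictable; some exploration
    # keeps probing the alternatives.
    strategies = list(win_rates)
    if random.random() < epsilon:
        return random.choice(strategies)            # explore
    return max(strategies, key=win_rates.get)       # exploit the favourite

win_rates = {"rush": 0.6, "boom": 0.5, "turtle": 0.3}
picks = [choose_strategy(win_rates) for _ in range(1000)]
# The favoured strategy dominates without ever being the only answer.
print(picks.count("rush") / len(picks))
```

That's the difference in code form: "rush" is favoured (picked most of the time), but never optimal in the sense of being the only move worth making.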
There is no best technique. Decision trees can be used to fix problems that a learning AI is having; emergence has an unpredictability that could be fatal, and genetic algorithms can form a predictability that could be fatal.
https://www.youtube.com/watch?v=WXd6CQRTNek - this kind of illustrates the point of exploration and exploitation.
The biggest issue with the AI is the tactical AI, although some ability to learn how the player manages to keep up with the AI on Deity could be useful.