MADDJINN GAMES, AI problems

It depends on whether the pre-release build is being patched every time there's a code modification. If not, then MadDjinn's build is also outdated, and Friday's release could contain changes we haven't seen yet because they never made it into any of the pre-release builds.

Also, Pete Murray apparently stated on Twitter that the AI CAN move and shoot now, so either he's blatantly lying to reassure us or they really have improved it and we just haven't seen the improvements yet because, as I already mentioned, they haven't been added to the version MadDjinn is playing.

Either way I'm sure the developers will get a lot of questions regarding the AI in tomorrow's AMA so we'll see what they say then.

What worries me most is whether the AI will know when a player is pursuing an Affinity victory and is in the middle of completing their warp gate. Will the AI know to attack the player at that point, or is it just going to sit back while the Supremacy/Purity/Harmony player fulfills their victory conditions?
 
The boat thing also shows the devs don't know how to play to their AI's own meager strengths.

If the devs did not have a plan in place to improve ranged-unit decision-making (the human ranged-unit rule is fairly simple - always move and shoot something unless in full retreat - but let's move on), then why did they fill the ocean with one-shottable ranged units? The answer is that they aren't familiar with their own AI's weaknesses and strengths in the first place. Making every boat essentially a submarine, the least threatening AI unit in CiV... they haven't played enough CiV.
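
Spelled out, that rule is a handful of lines of pseudocode at most - something like this (all names and numbers here are invented, obviously, just to make the point):

[CODE]
# Toy sketch of the "human" ranged-unit rule above: unless you're in a full
# retreat, always move into range and shoot something. Everything here
# (thresholds, dicts, the distance function) is invented for illustration.

def ranged_unit_action(unit, visible_enemies):
    if unit['hp'] < 25:                                   # arbitrary "full retreat" cutoff
        return ('retreat', None)
    reachable = [e for e in visible_enemies
                 if dist(unit['pos'], e['pos']) <= unit['range'] + unit['moves']]
    if reachable:
        target = min(reachable, key=lambda e: e['hp'])    # finish the weakest target
        return ('move_and_shoot', target)
    return ('advance', None)                              # nothing in reach: close the gap

def dist(a, b):
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))        # crude stand-in for hex distance

unit = {'hp': 80, 'pos': (0, 0), 'range': 2, 'moves': 2}
enemies = [{'hp': 30, 'pos': (3, 1)}, {'hp': 90, 'pos': (9, 9)}]
print(ranged_unit_action(unit, enemies))                  # ('move_and_shoot', {'hp': 30, ...})
[/CODE]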

In CiV the biggest naval threat always came from the AI's city-state allies, because they had 100% privateer navies, and because they were where you didn't expect them to be and could kill your trade routes fastest. (It's actually almost tactically impossible to sink a 5-privateer navy without losing a boat of your own unless you already have double-shot frigates - now imagine 10-privateer AI navies.)

In fact, a city-state navy on your side was also really good at taking AI cities if you wanted to let it. Imagine that! Yet here in BE we have full-on AI players struggling to capture defenseless stations. If you don't give the AI the tools of its own destruction (ranged units) it won't self-destruct.

In BE the devs had a choice - start with naval units the AI knows how to use, or start with the ones it doesn't - and they made the wrong choice.

If the gunboat line were melee-only the AI would be dangerous on the water with no new programming needed.
 
@Strategist83 that is a good post. The AI exhibits the same problems as in Civ V.

I've had the suspicion that this is a symptom of 1UPT.

It is a symptom of the fact that programming an AI is a monumentally difficult task... and yes, a 1UPT system is obviously more demanding than allowing stacking. However, if anybody should be up to a monumentally difficult task, it is Firaxis, with their considerable monetary backing. What we've seen so far suggests little in the way of improvement, and a carelessness and negligence when it comes to facing the AI's immediate problems. That seems very sloppy for a 50€ flagship product, considering they didn't have to reinvent the wheel this time and had the Civ V base code to build on. The OP's concern that these issues will still be present in a new game is merited; demands to have such criticism quelled are not.
 
As much as I am sceptical of the whole 1UPT system, I still think it can work well. But you have to give the AI enough space. I played the ACW scenario in BNW and noticed that the AI was actually *much* better than in a regular game. I even saw proper use of CAV units to flank one of my armies, and a rear attack that forced me to alter my battle line.

But why is that? Well, my bet is on the fact that the scenario uses a different scope - cities are 5 to 10 hexes apart and the terrain in between them is much more homogeneous, with fewer obstacles the AI has to maneuver around. And indeed, in the Appalachian mountain region with its narrow corridors, the AI showed the same confused "shuffle around" behaviour you see in a normal game.
 
Basically, what I see is them trying out new things. Sometimes it works, sometimes it doesn't. Dumbing down 80% of naval units just because you know for sure the AI can handle melee combat is taking the easy route and calling it quits.

It seems some of you guys are reading too much into a single pre-release build, when in truth development is always in flux: it's always iterating and has all sorts of complexities. Sometimes you fix one thing and break two. The boat issue might not have been there in the previous day's build and might already be fixed in the next one, after that particular review build was churned out.

That's not to say the release version will be perfect, but it's plain foolish to assume Firaxians either don't care to work on their game or are simply inept. Despite all sorts of claims about "obvious" solutions, it's all outsider knowledge and speculation.
 
Except a lot of those issues were present in Civ V

And they just copied some of them over - either in the AI (poor ranged combat, not using troops) or in the mechanics (ships one-shotting each other).
 

Ships didn't one-shot each other in Civ 5 like this. You had the occasional submarine with wolfpack promotions able to one-shot stuff, but you didn't have a frigate shooting a frigate and sinking it in a single attack.

The mechanics of naval combat are new - they clearly didn't expect ships to be using their melee strength to resist ranged attacks or they'd have made it higher. Should be an easy fix.
 
I read that article TODAY :)

Such a thing has already been created for some video games. Michael Robbins used a genetic algorithm for SupCom 2.

Ternary vs Binary

What you have to realise is that a human runs not in binary, but in ternary.

The sky is blue.

One day, you'll wake up and see that the sky is green.

The fact that the sky is blue is irrelevant. You can see for yourself that the sky is green.

In ternary, there are 3 values - true, false, and neither true nor false.

The human brain is constantly able to reassess its known facts. It is also able to synthesise information from many different sources to create new information.


IF X THEN Y


Let's take an example from the article I linked - let's say you're attacking my base in Age of Empires.

You attack my base. I have a weak point in my wall - specifically, the wall is not complete.

So you attack through the hole in my wall.

My entire army is defending the hole in my wall. Nothing else is defended.

You lose. In the future, you'll know not to attack the weak spots in my wall. You'll make your own hole.

The computer in this situation, however, is hardcoded with "If X, then Y".

So once you've figured out that the computer will always attack the weak spot in your wall, you can always defend it. This is an exploit.
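
Concretely, the hardcoded version of this is just a couple of lines (a made-up sketch, not code from any actual game):

[CODE]
# Hardcoded "If X, then Y" targeting -- the scripted behaviour described
# above. Purely illustrative, not taken from any real game.

def choose_attack_point(wall_segments):
    gaps = [s for s in wall_segments if not s['intact']]
    if gaps:                                   # X: the wall has a hole...
        return gaps[0]['pos']                  # Y: ...always attack through it
    return min(wall_segments, key=lambda s: s['hp'])['pos']

# The exploit: once you know the rule, you decide where the AI attacks.
wall = [{'pos': (0, 0), 'intact': True,  'hp': 200},
        {'pos': (1, 0), 'intact': False, 'hp': 0},    # bait: deliberately left unfinished
        {'pos': (2, 0), 'intact': True,  'hp': 200}]
print(choose_attack_point(wall))               # always (1, 0) -- so park your army there
[/CODE]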

Genetic Algorithm/Historical Information
A genetic algorithm works by natural selection. It creates a gene pool; those that are successful reproduce, and those that aren't, don't. The successful genes are passed on to the next generation.

So in the above scenario, natural selection would favour AIs that didn't attack the hole in the wall, and eliminate the rest.
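
The whole loop fits in a few lines - a toy sketch, not claiming this is what SupCom 2 actually shipped:

[CODE]
import random

# Toy genetic algorithm. A "genome" is just a list of weights some AI would
# use to score its decisions; fitness says how well a genome's AI did.
# Everything here is illustrative.

def evolve(fitness, genome_len=8, pop_size=30, generations=50):
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)            # selection: best performers first
        survivors = pop[:pop_size // 2]                # the rest are eliminated
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]           # crossover
            child[random.randrange(genome_len)] += random.gauss(0, 0.1)   # mutation
            children.append(child)
        pop = survivors + children                     # successful genes carry over
    return max(pop, key=fitness)

# Made-up fitness: genomes whose first weight sits near 0.7 "win" their games.
best = evolve(lambda g: -abs(g[0] - 0.7))
print(round(best[0], 2))                               # roughly 0.7 after 50 generations
[/CODE]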

You see the problem with that approach? It's the same problem that is present in real-life biology.

Walking through the weak spot of the wall is actually a really good decision if I don't realise the weak spot is there.

The other problem is that you have to run training sessions for the AI. This would be excellent if it were done by every player worldwide, but often the AI is just pitted against other versions of itself.

That would give the AI the ability to learn from players' styles. Imagine an AI that could learn how to play the game from the best players.

Emergent AI

Emergent AI is unpredictable. It doesn't always do what is best, or what is optimal. It's creative and does its own thing.

He means an emergent AI in the sense of an unpredictable AI. Essentially, each individual has its own intelligence and does what is best for itself, but takes into account what the larger group is doing. It has a subcommander so that it doesn't act completely randomly.

With this approach, the decisions the AI could make are ranked, and then fuzzy logic is applied to make the choice (if a decision is predictable, it's not the best decision - think of targeting in CiV, which always goes for damaged units).

The game's difficulty comes from the fuzzy logic: the easier AIs are more likely to make the kinds of mistakes that novice players might make.
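
In sketch form it looks something like this (invented names and numbers, and no claim that any shipped AI does exactly this):

[CODE]
import random

# Rank the options, then fuzz the pick: the lower the difficulty, the more
# noise, so easier AIs drift further from the "best" choice. Illustrative only.

def pick_action(scored_options, difficulty):
    """scored_options: list of (name, score); difficulty: 0.0 (easiest) to 1.0 (hardest)."""
    noise = (1.0 - difficulty) * 5.0
    fuzzed = [(name, score + random.uniform(-noise, noise)) for name, score in scored_options]
    return max(fuzzed, key=lambda pair: pair[1])[0]

options = [('shoot the damaged unit', 9.0),
           ('shoot the full-health archer', 7.5),
           ('fall back and heal', 6.0)]
print(pick_action(options, difficulty=0.2))   # easy AI: could be any of the three
print(pick_action(options, difficulty=0.95))  # hard AI: almost always the top-ranked pick
[/CODE]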

There's an argument that unpredictable, sub-optimal decisions are better in the long run than predictable, optimal decisions. It prevents the AI from being baited or trapped.


Basically, we're playing a strategy game, right?

Strategy is exploration, not exploitation. There is no such thing as "optimal" strategy. Any kind of "optimal" strategy demonstrates a balance discrepancy.

Of course, in strategy games there tends to be a favoured strategy, which creates favoured ways to counter it. But there is a very real difference between favoured strategy and optimal strategy.

There is no best technique. Decision trees can be used to fix problems the learning AI is having. Emergence has an unpredictability that could be fatal. Genetic algorithms can produce a predictability that could be fatal.

https://www.youtube.com/watch?v=WXd6CQRTNek - this kind of illustrates the point of exploration and exploitation.

The biggest issue with the AI is the tactical AI, although some ability to learn how the player manages to keep up with the AI on Deity could be useful.

No, humans do not run in ternary. In fact, ternary logic is not that much more complicated than binary; we had the technology to make ternary computers (and not just ternary) when we had the technology to make binary computers. It's just that binary computers are easier to implement and cheaper to build. Human logic, however, is not based on any discrete number of "truth" values. For a given statement, humans can assign any degree of "truthness" to it, since human logic is exclusively modal and is realised by neural networks. A neural network is basically a bunch of independent nodes (neurons), each performing its own (often rather simple) task (functions, somas), that pass the results (axons), via a non-linear transformation function, to other nodes as arguments (dendrites) for their functions. This information is then aggregated and a final conclusion is drawn from it. And yes, we can also implement (though on a much simpler scale) neural networks (artificial neural networks, or ANNs) using computers. In fact, any kind of facial recognition software is likely to have an implementation of a neural network running in the background.
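
For what it's worth, a single such node really is just a weighted sum pushed through a non-linear function, and a network is those chained together. A bare-bones sketch (weights hand-picked here purely for illustration; a real network learns them):

[CODE]
import math

# Bare-bones artificial neuron and a tiny two-layer network, to show the
# mechanics described above: weighted inputs -> non-linear transformation ->
# output passed on to the next nodes. Weights are hand-picked purely for
# illustration; a real network learns them from data.

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))        # sigmoid squashes to (0, 1)

def tiny_network(inputs):
    h1 = neuron(inputs, [0.8, -0.4], bias=0.1)   # two hidden nodes...
    h2 = neuron(inputs, [-0.3, 0.9], bias=-0.2)
    return neuron([h1, h2], [1.5, 1.5], bias=-1.5)   # ...aggregated by an output node

print(tiny_network([0.9, 0.1]))   # some value between 0 and 1
print(tiny_network([0.1, 0.9]))   # a different value between 0 and 1
[/CODE]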

The main question is, if we can do all that, why don't we use it in games? Mainly because people who know how to do it are often people with PhDs in computer science, and they have better job opportunities than working for a medium-sized game company (i.e. working for companies such as Facebook, Apple, Google, government agencies, research labs and so on). The other reason we'll rarely see a neural network implemented as an AI core is that it takes a long time to design and teach such a system to do what we want it to do. Furthermore, it is a consensus in the game industry that it is not "fun" for the player to play against a good AI, but against one that makes it appear to the player that they must make choices to counter the AI, i.e. an AI that interacts with the player and represents more of an obstacle than an actual competitor. In some cases, the AI is not even designed to win the game, but to increase the player's sense of achievement when the player eventually wins.
 

Taking off my mod hat and putting on my professional hat just to note that in the US, a PhD in computer science doesn't actually buy much unless you want to work as a combined professor/researcher at a major university. Those programming for a living here mostly have Bachelor of Science degrees, with a few holding Master's degrees. The first paragraph above is entirely correct, though, and it's also the case that the more skilled programmers are likely to be at other companies; it's just that those not interested in becoming professors stop their education earlier.

In addition, things like AI neural networks are still mostly at the research level, so we can't expect gaming companies to do this.
 

On one hand, I want to protest that neural networks are, in fact, incredibly simple to code. I made a neural network for a computer science project in an introductory AI course back when I was at university, to do image recognition. Write a node object, hook them up to a bunch of inputs, get the output. Easy peasy! It was only four nodes, as I recall, although the project was more "which direction is the person looking" rather than actual image identification.

Which is the flip side of the issue -- how the heck would you code a neural network to play a strategy game? A neural network is basically a black box that you hook a bunch of inputs up to, and get an output. Given the incredible range of choices that have to be made in any particular Civ game, how would a single neural network be able to accomplish it all?

It reminds me of one of the sayings I read in the textbook for my AI course -- "neural networks are the second best solution to any problem". With the corollary "Genetic algorithms are the third." Basically, sure, neural networks are incredibly adaptive -- but they're also terrible at a wide range of problems, and you can nearly always find a more specific solution that works better. For example, neural networks tend to work best with inputs that range over a continuous range of real numbers (for example, temperatures) as opposed to discrete values like integers. If you want to make a decision based on something that uses discrete values, you're better off using a decision tree or something like that.
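
For instance, with a few discrete, made-up game-state features, a stock decision tree takes almost no code at all (toy data, purely illustrative):

[CODE]
from sklearn.tree import DecisionTreeClassifier

# Toy illustration of the point above: discrete features -> decision tree,
# no neural net needed. The data and features are completely made up.
# Features: [enemy units nearby (0-3), own army strength (0-3), at war (0/1)]
X = [[0, 3, 0], [3, 1, 1], [2, 3, 1], [0, 1, 0], [3, 3, 1], [1, 0, 1]]
y = ['expand', 'defend', 'attack', 'expand', 'attack', 'defend']

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(tree.predict([[2, 2, 1]]))   # picks one of the toy labels from the learned rules
[/CODE]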

Then you've got the issue that any learning algorithm typically needs to be trained over an enormous number of cycles. I think my old image recognition net needed something like 10,000 cycles to learn a basic thing like "which direction is the person looking?" TD Gammon, one of the more remarkable game-playing learning algorithms, took something like 1,500,000 cycles before it began to level off in its playing skill. Which is why I suspect the idea of getting a bunch of skilled players to play against an AI to train it is a rather silly proposition -- how much time would it take to get skilled players to play millions of games against the AI?

Artificial intelligence is an incredibly complex subject. I don't pretend to be an expert at it, but even the little I do know from back in my university days points towards making the AI for a strategy game being an enormous task. Consider that an AI doesn't actually know anything about how to play the game -- it's a blank slate. Writing an AI is akin to going to an expert player of the game and asking them to write a detailed set of instructions, covering every possible circumstance, that would allow a complete novice who's never touched a Civ game before in their life to play at the level of an expert like themselves. Even a basic task, like "which direction do I move my Warrior on the first turn?", involves a heck of a lot of variables. Determining the build order for new units, determining when and where to build new cities, determining when to go to war, and so on, would take a lot of factors to consider.

I'm amazed that the Civ AI is as good as it is, given the challenges involved in programming algorithms to make all of these decisions. Human beings have the advantages of massively parallel processors that have been learning to make decisions for decades. It's not terribly surprising that computers are still trying to catch up to us.
 
Basically, what I see is them trying out new things. Sometimes it works, sometimes it doesn't. Dumbing down 80% of naval units just because you know for sure the AI can handle melee combat is taking the easy route and calling it quits.

So in other words, we should surround the problem with all our ships, not take any easy shots before the turn ends, and hope for the best?

Ranged ships aren't new. Nor are one-shotting ones (extensively demonstrated with submarines). They're vanilla CiV. Melee ships were new with G&K and they, just as you advocate, did work.
 
So in other words, we should surround the problem with all our ships, not take any easy shots before the turn ends, and hope for the best?
Nope. What you saw was the result of something behind the scenes going wrong. Not the intended effect. Come on.

Ranged naval combat is being tweaked. Things break during tweaking, especially during development. Isn't that a more sensible deduction?

And only thoroughly promoted submarines could reliably one-shot strong targets in Civ5.
 
I think it is a bit of a stretch to ask a game developer to develop a PvE AI which plays at the same level as another human being in PvP. Of course, we all want a challenge... but we might want to make more reasonable/achievable AI requests too.

Well, I know a couple of people who can't beat Prince in Civ 4 and 5, so there is that.
 
Then you've got the issue that any learning algorithm typically needs to be trained over an enormous number of cycles. I think my old image recognition net needed something like 10,000 cycles to learn a basic thing like "which direction is the person looking?" TD Gammon, one of the more remarkable game-playing learning algorithms, took something like 1,500,000 cycles before it began to level off in its playing skill. Which is why I suspect the idea of getting a bunch of skilled players to play against an AI to train it is a rather silly proposition -- how much time would it take to get skilled players to play millions of games against the AI?

I think we should just settle on the fact that using genetic algorithms to improve a Civ AI is neither feasible nor advisable.

The best you could do is make AIs play among themselves, but the result is not guaranteed to actually be good against human players.

BTW, I think that using a genetic algorithm to create an optimal AI for Civ V would require a lot more work than would be needed to simply fix the main blatant flaws we all dislike in the current AI's behavior.
 
I think that is very harsh and unfair. "Junk" implies no value whatsoever. I think thousands of gamers will find plenty of value in CivBE even if they never play MP. There is certainly a lot that gives CivBE value: the new affinities, the quests, the building choices, the tech web, the covert ops, and so on.

Yes, I agree. Saying "junk" is a bit too harsh, but I was very disappointed to see that after four years they did not fix some basic bugs in the AI. I do not know how some people are not bothered by a lousy AI, watching it commit suicide every time it declares war.

That is why the developers avoided saying anything about the AI of the new CivBE. They just want to sell the game. They make every new Civilization game more complicated, and it seems the AI is not even close to being able to handle it.

The point is: if the community does not say anything about it, the next Civilization VI will have the same or even bigger problems with its AI.
 
The sad truth is that Firaxis is years behind when it comes to AI. The AI they use for CivBE is the same as in Civ 5, and that code was old (primitive) even when Civ 5 was released. They have tried to tweak it by changing its parameters, when in reality they should have built a new AI system from scratch to catch up to the competition.

Firaxis is getting dangerously behind by rehashing old code over and over again and calling it "new".
 
@Strategist83 that is a good post. The AI exhibits the same problems as in Civ V.

I've had the suspicion that this is a symptom of 1UPT.

And you'd be right in that suspicion. Here's Jon Shafer talking about the combat in Civ V:

This was a model very much inspired by the old wargame Panzer General. On the whole, I would say that the combat mechanics are indeed better in Civ 5 than in any other entry in the series. But as is the theme of this article, there's a downside to consider as well.

One of the biggest challenges unearthed by 1UPT was writing a competent combat AI. I wasn't the one who developed this particular AI subsystem, and the member of the team who was tasked with this did a great job of making lemonade out of the design lemons I'd given him. Needless to say, programming an AI which can effectively maneuver dozens of units around in extremely tactically-confined spaces is incredibly difficult.

The previous games didn't exactly have stellar AI either (although I have to say, having played SMAC recently, that the AI seems better there than in Civ V. I was playing as the Gaians with an army composed solely of psi units, and the Hive, whom I was fighting, started mass-producing units with psi defences in response.) But as Shafer points out, using the example of Panzer General, on which he based the combat in Civ V:
The reason why this wasn't an issue in Panzer General was that their AI didn't actually need to do anything. It was always on the defensive, and a large part of that game was simply solving the "puzzle" of how to best crack open enemy strongholds. It was plenty sufficient if your opponents simply ordered a single tank to stir up some trouble every so often.

Sound familiar? Taking cities in Civ IV was basically the same; bring enough units to crack those giant stacks in cities. The AI didn't need to be good to do that.

But when you have 1UPT, on a giant map that changes every single time someone plays the game, the tactical decisions become so much more complicated. The AI never had to worry about, say, unit placement before: stack units, move towards the city, attack. Maybe encode something to make them avoid negative-defence tiles, like swamps.

Shafer didn't even think it was possible:

So is there a way to make 1UPT really work in a Civ game? Perhaps. The key is the map. Is there enough room to stash units freely and slide them around each other? If so, then yes, you can do it. For this to be possible, I'd think you would have to increase the maximum map size by at least four times. You'd probably also want to alter the map generation logic to make bottlenecks larger and less common. Of course, making the world that much bigger would introduce a whole new set of challenges!

In fact, there were technical reasons this wasn't really feasible - our engine was already pushing up against the capabilities of modern computer hardware. Drawing that many small doo-dads on a screen is really expensive, trust me.

BE certainly doesn't have maps that big (in fact, according to Shafer, that wouldn't be possible). So the same problems will exist.

https://www.kickstarter.com/projects/jonshafer/jon-shafers-at-the-gates/posts/404789 for the full essay.
 
Just like with Civ V, I plan to play this game primarily against humans. Why, you might ask? Because the AI is very stupid on every difficulty setting. Doing this should eliminate your AI concerns.

Assuming that the game launches with multiplayer intact I will not be playing single player for quite some time.

It's also really fun to play against humans when everybody is on an equal footing and has no idea what to do. Everyone is going to be a newbie, including myself - it should be great.
 
No game can ever be perfect on release. But that is OK, because thanks to the internet, games can be patched very quickly with a simple automatic download. I am sure BE will have a couple of patches that fix balance issues and improve the AI. Don't panic!
 
Since I have already established my lack of knowledge, pardon me if this question turns out to be silly.

How much work/effort would it be to add an algorithm that checks whether the AI can actually take the city or station? This behaviour is the one that bugs me the most - ranged units sitting around and shelling a target they can't take, even after reducing its HP to 0.

I can only imagine that it is relatively simple, at least compared to the other elements that go into a Civ game's AI, to have the AI check whether there is a melee unit it controls within a certain range and, if there isn't one, to move the ranged units outside the city's bombardment range (or, with stations, to move away the units "overkilling" the regen and keep only enough units to hold the HP at 0 while a melee unit gets moving).
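
Naively I picture something along these lines (every name and number here is invented - I obviously have no idea what the real code looks like):

[CODE]
# Naive sketch of the check described above. All names and numbers are
# invented -- this is not based on the actual game code.

def siege_orders(city_pos, my_units, bombard_range=2):
    has_melee_nearby = any(u['type'] == 'melee' and dist(u['pos'], city_pos) <= 3
                           for u in my_units)
    if has_melee_nearby:
        return 'keep shelling'            # a capture is actually possible
    # No melee unit around: shelling to 0 HP achieves nothing, so pull the
    # ranged units back out of the city's bombardment range.
    for u in my_units:
        if u['type'] == 'ranged' and dist(u['pos'], city_pos) <= bombard_range:
            u['order'] = 'withdraw out of bombardment range'
    return 'withdraw and wait for melee'

def dist(a, b):
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))   # stand-in for hex distance

units = [{'type': 'ranged', 'pos': (1, 1)}, {'type': 'ranged', 'pos': (2, 0)}]
print(siege_orders((0, 0), units))        # 'withdraw and wait for melee'
[/CODE]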

Since MadDjinn's LPs show that this is not the case, can someone maybe explain what kind of work would actually go into this?
 