Humble opinion about the mod

The oft-disproven theory of human superiority in games.

I realize we don't have a general AI in this game that is learning from iteration; however, there is no reason to throw up our hands and concede. Experimentation could lead us to a more rewarding experience in the combat portion of the game, and I don't see any reason to accept the status quo.
It's not like we don't want to improve the AI. The AI has already been improved to a great degree; it's just very hard to improve it further. It's like any skill: the better you are, the harder it is to get even better. Maybe reinforcement learning could help here; it has worked very well for Chess, Go, StarCraft 2, and Dota 2.
Also, it is not necessary for the AI to play as well as the human, even at this stage. We can focus on removing exploitative tactics and promotions that are advantageous primarily to human play. For instance, city turtling and combinations like (multiple attacks per turn + heal on kill + extra movement) and (multiple attacks per turn + attacking over mountains + extra range) that the AI rarely achieves.

Just to clarify: I'm against giving away free promotions to the AI, magical healing that mostly benefits humans, and silly combinations like crossbows firing over mountains. I'm for elite human units that are specialized, situationally effective, but still vulnerable. I like that it's possible to have a unit that picks up promotions over the course of the game and becomes extremely valuable; such units should just be harder to deploy continuously.
I agree
 
Also, when people say that the AI has trouble taking cities... I'm just not seeing that problem anymore:

Spoiler: Screenshot.png
The AI is quite adept at blockading cities, using naval and land units together and bringing in a strong, overwhelming force to hit the city hard. It may have to work through the defending army first, but once that army is weakened, I see AIs blitz cities like this and take them out pretty solidly.
As a note, this city would fall one turn after the AI started its blockade. Once it brought in the full surround, it was over in a heartbeat.
 
The oft-disproven theory of human superiority in games.

I realize we don't have a general AI in this game that is learning from iteration; however, there is no reason to throw up our hands and concede. Experimentation could lead us to a more rewarding experience in the combat portion of the game, and I don't see any reason to accept the status quo.

Also, it is not necessary for the AI to play as well as the human, even at this stage. We can focus on removing exploitative tactics and promotions that are advantageous primarily to human play. For instance, city turtling and combinations like (multiple attacks per turn + heal on kill + extra movement) and (multiple attacks per turn + attacking over mountains + extra range) that the AI rarely achieves.

Just to clarify: I'm against giving away free promotions to the AI, magical healing that mostly benefits humans, and silly combinations like crossbows firing over mountains. I'm for elite human units that are specialized, situationally effective, but still vulnerable. I like that it's possible to have a unit that picks up promotions over the course of the game and becomes extremely valuable; such units should just be harder to deploy continuously.

Often disproven how? I can't think of any complicated game where the AI is even close to a human.

This is all pretty vague; you just want the AI to be better, but how?

Humans stack promotions because they are much better at keeping their units alive; the extra promotions are almost a symptom of this rather than the issue itself.
 
Often disproven how? I can't think of any complicated game where the AI is even close to a human.
Really?
Both of these articles are about 3 years old and the techniques are still in the early stages. However, it's not hard to imagine a future where general AIs are taught how to evaluate individual games without much human interaction.
This is all pretty vague; you just want the AI to be better, but how?
Vague? I provided 8 ideas to explore. I didn't say it would make the AI better, rather that it would limit human tactics that the AI doesn't seem to understand how to take advantage of.
Humans stack promotions because they are much better at keeping their units alive; the extra promotions are almost a symptom of this rather than the issue itself.
My theory is that humans are much better at understanding how combinations of specific promotions can create overpowered units that unbalance the game.
 
My theory is that humans are much better at understanding how combinations of specific promotions can create overpowered units that unbalance the game.
I actually don't agree for your standard range- and logistics-type promotions; the AI is perfectly capable of using them. It just doesn't keep them like a human does, because its units die.

And it's important to note, we WANT the AI units to die. The developers are quite capable of making the AI as "timid" as a human, focused on defense and only taking the offensive when it's highly unlikely the unit will die... but my god, what a boring game that would be.
 
I actually don't agree for your standard range- and logistics-type promotions; the AI is perfectly capable of using them. It just doesn't keep them like a human does, because its units die.
I agree that for ranged units, indirect fire seems to be the lynchpin in human advantage. I see the AI pick +1 range and use it.
And it's important to note, we WANT the AI units to die. The developers are quite capable of making the AI as "timid" as a human, focused on defense and only taking the offensive when it's highly unlikely the unit will die... but my god, what a boring game that would be.
Ahh, now we're talking. There is a difference between good AI and fun AI, and we are all looking for our own flavor of balance between the two.
 
Really?
Both of these articles are about 3 years old and the techniques are still in the early stages. However, it's not hard to imagine a future where general AIs are taught how to evaluate individual games without much human interaction.

Vague? I provided 8 ideas to explore. I didn't say it would make the AI better, rather that it would limit human tactics that the AI doesn't seem to understand how to take advantage of.

My theory is that humans are much better at understanding how combinations of specific promotions can create overpowered units that unbalance the game.

Yeah, no, those games are nothing like Civ (at least in terms of why the AI wins). The AI wins because it has far better reflexes. It is not even slightly outplaying a human in terms of thinking; it is just much, much faster. The StarCraft AI, AlphaStar, really struggled once they prevented it from doing superhuman things, and in some cases things that were literally impossible for humans.

And yes, vague. You threw out a bunch of ideas with little reasoning behind them. My guess would be that removing the best high-level promotions (+1 range / +1 attack / +1 move) and combat heals in exchange for the AI not getting extra exp would make the game overall easier. I think you are wildly overstating how much these promotions matter. Humans have a huge number of little and big advantages over the AI due to better thought patterns, and I'd put grinding out high-level units and getting +1 range in the small-advantages group rather than the big-advantages group.
 
Regarding military, the two things that I think are most over-powered are:
  • pillaging;
  • upgrading units with gold.
 
Regarding military, the two things that I think are most over-powered are:
  • pillaging;
  • upgrading units with gold.

Yeah, these are both pretty huge. And the AI is pretty bad at upgrading (although it does seem to be getting better?).
 
Yeah, no, those games are nothing like Civ (at least in terms of why the AI wins). The AI wins because it has far better reflexes. It is not even slightly outplaying a human in terms of thinking; it is just much, much faster. The StarCraft AI, AlphaStar, really struggled once they prevented it from doing superhuman things, and in some cases things that were literally impossible for humans.
False. Neither Go nor Chess requires playing fast, and the AI beats the best players.

In StarCraft 2 they limited the AI on purpose, so it's a bit slower than professional players and has slower reactions; the AI wins because of its decision-making instead of just speed. See: https://www.deepmind.com/blog/alphastar-mastering-the-real-time-strategy-game-starcraft-ii, section: How AlphaStar plays and observes the game. The AI won 5-0 against professional players.

AlphaGo didn't do any "superhuman thing". Every move it made was legal, so a human could also have made it. It just made a weird, unexpected move that surprised both the opponent and the authors, and it won the game thanks to that. It did that because it thought of it and the human didn't.

Where did you get all the false information?

So the conclusion is that if AI can be better than humans in strategy games because of its intelligence instead of its speed, then it should also be possible to make an AI for Civ. I'm not claiming that it's simple or easy, because the entire reinforcement learning field is not.

Also, the AI doesn't need to be better than professionals (does Civ 5 even have professional players?). It just has to be good enough to provide a challenge for players. Of course, the better it is, the more players it will challenge, but it doesn't have to be 100%.
 
So the conclusion is that if AI can be better than humans in strategy games because of its intelligence instead of its speed, then it should also be possible to make an AI for Civ. I'm not claiming that it's simple or easy, because the entire reinforcement learning field is not.
I await your and BaldSamson's contribution to @ilteroi's work with mild anticipation, since you seem to understand how exactly to improve the CIV AI in all the areas that it's failing in!
 
I await your and BaldSamson's contribution to @ilteroi's work with mild anticipation, since you seem to understand how exactly to improve the CIV AI in all the areas that it's failing in!
If I were to do that, I'd probably build a separate AI model for different aspects of the game, like tactics, what to produce, etc. Tactics are an interesting place to start. I could try to train an agent that tries to beat ilteroi's algorithms.
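Very roughly, something like this toy sketch: a tabular Q-learning agent playing a tiny made-up skirmish against a scripted opponent that stands in for the existing rule-based AI. Everything here (rules, numbers, actions) is invented for illustration; it is not mod code, and a real tactical agent would need a proper interface to the game state.

```python
# Toy sketch only: Q-learning against a scripted "rule-based AI" opponent.
import random
from collections import defaultdict

ACTIONS = ["advance", "retreat", "attack"]

def scripted_opponent(dist):
    # Stand-in for the algorithmic AI: close the distance, then attack.
    return "attack" if dist <= 1 else "advance"

def step(state, a_agent, a_enemy):
    dist, hp_a, hp_e = state
    if a_agent == "advance": dist = max(0, dist - 1)
    if a_agent == "retreat": dist = min(5, dist + 1)
    if a_enemy == "advance": dist = max(0, dist - 1)
    if a_agent == "attack" and dist <= 1: hp_e -= 1
    if a_enemy == "attack" and dist <= 1: hp_a -= 1
    done = hp_a <= 0 or hp_e <= 0
    reward = 1.0 if hp_e <= 0 else (-1.0 if hp_a <= 0 else 0.0)
    return (dist, hp_a, hp_e), reward, done

Q = defaultdict(float)                      # (state, action) -> value
eps, alpha, gamma = 0.2, 0.1, 0.95

for episode in range(5000):
    state = (3, 3, 3)                       # distance, agent HP, enemy HP
    for _ in range(40):                     # cap episode length
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda x: Q[(state, x)])
        nxt, r, done = step(state, a, scripted_opponent(state[0]))
        best_next = max(Q[(nxt, x)] for x in ACTIONS)
        Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
        state = nxt
        if done:
            break

print("learned opening move:", max(ACTIONS, key=lambda x: Q[((3, 3, 3), x)]))
```

The real thing would of course use a neural network and the actual game state instead of a toy tuple, but the training loop has the same shape.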

I do ML for a living and RL as a hobby. I've played with some toy tasks and Atari games: https://github.com/CppMaster/openai-gym-playground and I've managed to beat the hardest difficulty of StarCraft 2: https://github.com/CppMaster/SC2-AI
Of course, my AI model is way simpler than AlphaStar and would struggle against good players, because I'm just one person doing it after hours on my PC, as opposed to an entire experienced team working for several months on a cluster of very powerful machines :p
 
Please use VC++2008 so it can be integrated if useful.
I haven't worked in C++ for a looong time. Python is way better and easier for RL and ML in general. There are ways to communicate between languages (like REST requests, via files, etc.), though, and I think that's the way to go.
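A minimal sketch of the Python side of such a bridge, using only the standard library. The port, the JSON fields, and the placeholder "model" are all hypothetical; the idea is just that the C++ game DLL POSTs the current tactical state and reads back a move.

```python
# Hypothetical sketch: Python serves tactical decisions over local HTTP, so
# the C++ side only needs a simple HTTP client. Not real mod code.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def choose_action(state):
    # Placeholder for a trained model; just returns a dummy order.
    return {"unit_id": state["units"][0]["id"], "action": "fortify"}

class BridgeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        state = json.loads(self.rfile.read(length))
        reply = json.dumps(choose_action(state)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

if __name__ == "__main__":
    # The game would POST e.g. {"units": [{"id": 42, "x": 10, "y": 7}]} here.
    HTTPServer(("127.0.0.1", 8765), BridgeHandler).serve_forever()
```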
 
I haven't worked in C++ for a looong time. Python is way better and easier for RL and ML in general. There are ways to communicate between languages (like REST requests, via files, etc.), though, and I think that's the way to go.
Yeah, but the game logic uses C++ so...
 
False. Neither Go nor Chess requires playing fast, and the AI beats the best players.

In StarCraft 2 they limited the AI on purpose, so it's a bit slower than professional players and has slower reactions; the AI wins because of its decision-making instead of just speed. See: https://www.deepmind.com/blog/alphastar-mastering-the-real-time-strategy-game-starcraft-ii, section: How AlphaStar plays and observes the game. The AI won 5-0 against professional players.

AlphaGo didn't do any "superhuman thing". Every move it made was legal, so a human could also have made it. It just made a weird, unexpected move that surprised both the opponent and the authors, and it won the game thanks to that. It did that because it thought of it and the human didn't.

Where did you get all the false information?

So the conclusion is that if AI can be better than humans in strategy games because of its intelligence instead of its speed, then it should also be possible to make an AI for Civ. I'm not claiming that it's simple or easy, because the entire reinforcement learning field is not.

Also, the AI doesn't need to be better than professionals (does Civ 5 even have professional players?). It just has to be good enough to provide a challenge for players. Of course, the better it is, the more players it will challenge, but it doesn't have to be 100%.

False? Like what part of the thing I said was false? I was talking about one group of games and then you applied it to another set of games. I didn't mention those games because he didn't mention them. I'd have an entirely different rebuttal, but I don't make rebuttals to points the other person doesn't make...


I found this "false" information by watching the videos put out about AlphaStar. This one, for example.

It is better than at the start, with limited APM and no longer microing stalkers across multiple screens, but it still spikes to 1500 APM in battles, something no human can do (5:50 timestamp).

The article you link is also really weird, as it says AlphaStar beat MaNa 5-0 and then links to a video where it doesn't. So I collected this "false" info by having already seen these videos and remembering what happened.
 
The oft-disproven theory of human superiority in games.

I realize we don't have a general AI in this game that is learning from iteration; however, there is no reason to throw up our hands and concede. Experimentation could lead us to a more rewarding experience in the combat portion of the game, and I don't see any reason to accept the status quo.
I'm all for it. Let me know when you have your improved AI code and I'll merge it into master. :)

Until then, though, the AI is demonstrably not as intelligent as humans are, and needs additional help to be challenging. That the AI could hypothetically be way better in the future is no reason for us not to improve the game balance now.
 
One of the biggest problems with the tactical AI (unless I'm greatly mistaken) is that it doesn't remember the positions of units. If a unit vanishes into tiles that aren't visible, the AI forgets about it, while a human can recognize where that unit likely went and adapt accordingly. This is why it gets extra sight on Immortal and Deity.
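For illustration only (toy Python with made-up names, nothing like the mod's actual C++), the kind of "last seen" memory the tactical AI is missing might look roughly like this: remember where an enemy unit was last spotted instead of forgetting it the moment it leaves visible tiles, and treat very stale sightings as unknown again.

```python
# Toy illustration of unit memory; all names and numbers are hypothetical.
class EnemyMemory:
    def __init__(self, forget_after=5):
        self.last_seen = {}              # unit_id -> (plot, turn last seen)
        self.forget_after = forget_after

    def observe(self, unit_id, plot, turn):
        self.last_seen[unit_id] = (plot, turn)

    def likely_position(self, unit_id, current_turn):
        entry = self.last_seen.get(unit_id)
        if entry is None:
            return None
        plot, seen_turn = entry
        if current_turn - seen_turn > self.forget_after:
            return None                  # too stale; it could be anywhere now
        return plot

memory = EnemyMemory()
memory.observe(unit_id=7, plot=(17, 9), turn=120)
print(memory.likely_position(7, current_turn=122))   # (17, 9)
print(memory.likely_position(7, current_turn=130))   # None
```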

The tactical AI works similarly to a chess AI, in that the "combat sim" runs through different possibilities and selects the one it thinks will result in the best outcome. However, it has a maximum search depth for the sake of performance, so turns don't take forever.
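Very roughly, a depth-limited search has this shape (toy Python with made-up scoring, not the real C++ combat sim): enumerate possible move sequences, score the resulting positions, and stop expanding once MAX_DEPTH is reached so a single turn's planning stays cheap.

```python
# Toy sketch of a depth-limited search; positions and moves are just numbers.
MAX_DEPTH = 3

def simulate(position, move):
    # Placeholder: "apply" a move to a position.
    return position + move

def score(position):
    # Placeholder evaluation: higher is better for the AI.
    return position

def best_outcome(position, moves, depth=0):
    if depth == MAX_DEPTH:
        return score(position), []
    best_value, best_line = float("-inf"), []
    for m in moves:
        value, line = best_outcome(simulate(position, m), moves, depth + 1)
        if value > best_value:
            best_value, best_line = value, [m] + line
    return best_value, best_line

value, line = best_outcome(position=0, moves=[-1, 0, 2])
print(value, line)   # best score reachable within MAX_DEPTH moves, and how
```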
 