BaldSamson
Chieftain
- Joined: Mar 8, 2021
- Messages: 73
> IIRC it's warlord in VP that is the closest to a player.
Yeah, I thought I saw that in the patch notes, but couldn't remember which difficulty that was. Thanks!
> The oft-disproven theory of human superiority in games.
> I realize we don't have a general AI in this game that is learning from iteration; however, there is no reason to throw up our hands and concede. Experimentation could lead us to a more rewarding experience in the combat portion of the game, and I don't see any reason to accept the status quo.
It's not like we don't want to improve the AI. The AI has already been improved to a great degree; it's just very hard to improve it further. It's like with every skill: the better you are, the harder it is to improve even more. Maybe reinforcement learning could help here. It has worked very well for Chess, Go, StarCraft 2 and Dota 2.
> Also, it is not necessary for the AI to play as well as the human - even at this stage. We can focus on removing exploitative tactics and promotions that are advantageous primarily to human play. For instance, city turtling, and combinations like (multiple attacks per turn + heal on kill + extra movement) and (multiple attacks per turn + attacking over mountains + extra range) that the AI rarely achieves.
I agree.
> The oft-disproven theory of human superiority in games.
> I realize we don't have a general AI in this game that is learning from iteration; however, there is no reason to throw up our hands and concede. Experimentation could lead us to a more rewarding experience in the combat portion of the game, and I don't see any reason to accept the status quo.
> Also, it is not necessary for the AI to play as well as the human - even at this stage. We can focus on removing exploitative tactics and promotions that are advantageous primarily to human play. For instance, city turtling, and combinations like (multiple attacks per turn + heal on kill + extra movement) and (multiple attacks per turn + attacking over mountains + extra range) that the AI rarely achieves.
> Just to clarify: I'm against giving away free promotions to the AI, magical healing that mostly benefits humans, and silly combinations like crossbows firing over mountains. I'm for elite human units that are specialized, situationally effective, but still vulnerable. I like that it's possible to have a unit that picks up promotions over the course of the game and becomes extremely valuable; they just should be harder to deploy continuously.
Really? Often disproven how? I can't think of any complicated game where the AI is even close to a human.
> This is all pretty vague, you just want the AI to be better - like how?
Vague? I provided 8 ideas to explore. I didn't say it would make the AI better; rather, it would limit human tactics that the AI doesn't seem to understand how to take advantage of.
> Humans stack promotions because they are much better at keeping their units alive; the extra promotions are almost a symptom of this rather than the issue itself.
My theory is that humans are much better at understanding how combinations of specific promotions can create overpowered units that unbalance the game.
> My theory is that humans are much better at understanding how combinations of specific promotions can create overpowered units that unbalance the game.
I actually don't agree. For your standard range and logistics type promotions, the AI is perfectly capable of using them. It just doesn't keep them like a human does, because its units die.
> I actually don't agree. For your standard range and logistics type promotions, the AI is perfectly capable of using them. It just doesn't keep them like a human does, because its units die.
I agree that for ranged units, indirect fire seems to be the lynchpin of the human advantage. I see the AI pick +1 range and use it.
> And it's important to note, we WANT the AI units to die. The developers are quite capable of making the AI as "timid" as a human, focused on defense and only taking offense when it's highly unlikely the unit will die... but my god, what a boring game that would be.
Ahh, now we're talking. There is a difference between good AI and fun AI, and we are all looking for our own flavor of balance between the two.
> Really?
Both of these articles are about 3 years old and the techniques are still in the early stages. However, it's not hard to imagine a future where general AIs are taught how to evaluate individual games without much human interaction.
- https://www.vox.com/2019/4/13/18309418/open-ai-dota-triumph-og
- https://www.vox.com/future-perfect/...l-intelligence-google-deepmind-starcraft-game
Regarding military, the two things that I think are most over-powered are:
- pillaging;
- upgrading units with gold.
> Yeah, no, those games are nothing like Civ (at least in terms of why the AI wins). The AI wins because it has far better reflexes. It is not even slightly outplaying a human in terms of thinking; it is just much, much faster. The StarCraft AlphaGo really struggled once they prevented it from doing superhuman things, and in some cases things that were literally impossible for humans.
False. Both Go and Chess don't require playing fast, and the AI beats the best players.
> So the conclusion is that if an AI can be better than humans at strategy games because of its intelligence instead of its speed, then it should also be possible to make an AI for Civ. I'm not claiming that it's simple and easy, because the entire reinforcement learning field is not.
I await your and BaldSamson's contribution to @ilteroi's work with mild anticipation, since you seem to understand exactly how to improve the Civ AI in all the areas where it's failing!
> I await your and BaldSamson's contribution to @ilteroi's work with mild anticipation, since you seem to understand exactly how to improve the Civ AI in all the areas where it's failing!
If I were to do that, I'd probably build a separate AI model for each aspect of the game: tactics, what to produce, etc. Tactics are interesting to start with. I could try to train an agent that tries to beat ilteroi's algorithms.
> I could try to train an agent that tries to beat ilteroi's algorithms.
Please use VC++2008 so it can be integrated if useful.
> If I were to do that, I'd probably build a separate AI model for each aspect of the game: tactics, what to produce, etc.
Yes, there are many different AIs that handle different parts of the game.
> Please use VC++2008 so it can be integrated if useful.
I haven't worked in C++ for a looong time. Python is way better and easier for RL and ML in general. There are ways of communicating between languages, though (like REST requests, by file, etc.), and I think that is the way to go.
> I haven't worked in C++ for a looong time. Python is way better and easier for RL and ML in general.
Yeah, but the game logic uses C++, so...
> Yeah, but the game logic uses C++, so...
So... ?
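On the "by file" option: here is a minimal sketch of what a turn-based handoff could look like, with the C++ side dumping the tactical state to JSON each turn and a Python agent writing its chosen action back. The file names, the state schema, and the toy policy are all hypothetical, not part of VP or ilteroi's code:

```python
import json, os, tempfile

def choose_action(state):
    """Placeholder policy: attack the weakest visible enemy, else fortify."""
    enemies = state.get("enemies", [])
    if enemies:
        target = min(enemies, key=lambda e: e["hp"])
        return {"type": "attack", "target_id": target["id"]}
    return {"type": "fortify"}

def agent_step(state_path, action_path):
    """Read the state JSON the game wrote, write the action JSON back."""
    with open(state_path) as f:
        state = json.load(f)
    with open(action_path, "w") as f:
        json.dump(choose_action(state), f)

# Round-trip demo; a temp directory stands in for the shared game folder.
tmp = tempfile.mkdtemp()
state_file = os.path.join(tmp, "state.json")
action_file = os.path.join(tmp, "action.json")
with open(state_file, "w") as f:
    json.dump({"enemies": [{"id": 1, "hp": 40}, {"id": 2, "hp": 15}]}, f)
agent_step(state_file, action_file)
with open(action_file) as f:
    print(json.load(f))  # {'type': 'attack', 'target_id': 2}
```

The same exchange works over REST: the C++ side would POST the state JSON and read the action from the response instead of going through files.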
False. Both Go and Chess don't require playing fast, and the AI beats the best players.
In StarCraft 2 they limited the AI on purpose, so it's a bit slower than professional players and has slower reactions; the AI wins because of its decision making instead of just speed. See https://www.deepmind.com/blog/alphastar-mastering-the-real-time-strategy-game-starcraft-ii, section "How AlphaStar plays and observes the game". The AI won 5-0 against professional players.
AlphaGo didn't do any "superhuman thing". Every move it made was legal, so a human could have made it too. It just made a weird, unexpected move that surprised both the opponent and the authors, and it won the game thanks to that. It did that because it thought of the move and the human didn't.
Where did you get all this false information?
So the conclusion is that if an AI can be better than humans at strategy games because of its intelligence instead of its speed, then it should also be possible to make an AI for Civ. I'm not claiming that it's simple and easy, because the entire reinforcement learning field is not.
Also, the AI doesn't need to be better than professionals (does Civ 5 even have professional players?). It just has to be good enough to provide a challenge for players. Of course, the better it is, the more players it will challenge, but it doesn't have to be 100%.
> The oft-disproven theory of human superiority in games.
> I realize we don't have a general AI in this game that is learning from iteration; however, there is no reason to throw up our hands and concede. Experimentation could lead us to a more rewarding experience in the combat portion of the game, and I don't see any reason to accept the status quo.
I'm all for it. Let me know when you have your improved AI code and I'll merge it into master.