[GS] Online Speed Games

Lily_Lancer

I just realized that in GS, Online speed on Deity somehow seems balanced now.

After a series of games I always find myself around Feudalism/Stirrup at T50, whether against Deity AIs or against other players in MP. Deity AIs, however, may already have Musketmen or even Pike and Shot by then.

Even chopping does not provide much of an advantage: the time for builders to move and for Magnus to be reassigned and established is the same as on Standard speed, while everything else (tech speed, production rate, etc.) is doubled.

When you start to attack an enemy, they produce units very quickly, like in Civ4, so every turn your army spends marching hurts.


Maybe it's just because I'm not very familiar with Online speed games, but from these experiences I'm starting to think the designers are balancing around Online speed instead of Standard.
 
I might be reading too much into this / over-projecting, but if I were in charge of Civ from a product position, the worst feedback I'd get would be about the AI, and the harshest discussion would be about 1UPT. Both of those are solved in antagonistic MP, and 1UPT is much more fun against humans. So I'd put a lot of effort into promoting the MP experience.

This is purely from a cold business perspective. My own personal attachment to Civ6 still bends toward sandbox games or cooperative MP with my wife (my seven-year-old is starting to play too, so my personal use of Civ is not going to change anytime soon).
 
The problem is that modern AI development is very different from Civ game AI, and applying it to Civ games would take very hard work. In other words, good AI needs a lot of work.

Modern AI development has also made AI developers much more expensive: Google pays on the order of $1,000,000 per AI scientist per year, and DeepMind spent something like $10 billion on the development of AlphaGo. Such high investments cannot be covered by game sales.

Personally, I'd pay $1,000 for an AI expansion that provides AIs as good as AlphaGo, or maybe $200 for an AI that can beat me in a fair Civ6 game. But how many players would pay is the problem. Most Civ6 players never even finish a single game after their purchase! So expansions with new features are encouraged, since those players (or payers) pay for features, while the development of advanced AI is neglected because nobody will pay for it.
 
Yep. It has simply become cheaper to use other players as the source of challenge.
Without any boasting, I can say I'm an expert in machine learning. Twenty years in the field gives me the right to say it straight.
While I do think that using ML for Civ AI would be a nice solution, the cost plus the required expertise is quite high.
You need to evangelize getting data from game logs sent to you by the community, and given the scandals that followed when they set up a data-gathering tool last year, that's not a simple task. Then you'd need the infrastructure to support the collection and preparation of the data (a Dataiku-based infra, say), then you need someone good enough to work out where models could actually help and to build a realistic roadmap around it. And only then can you start. By that point, before you've written a single line of code, a few hundred thousand dollars have already been spent.
 
Actually you don't need data; you only need a fast enough gameplay simulator (one nowhere near as slow as the in-game simulator) so that you can do self-play and reinforcement learning, roughly along the lines of the sketch at the end of this post.

The problem is that you also have to re-train the whole AI with every patch, and that's also costly.
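
To illustrate (not an actual implementation): the toy loop below does self-play against a made-up stand-in simulator. `FakeCivSim`, its actions, and its rewards are all invented for this sketch; the only point is that the training loop consumes no human game logs, just simulator rollouts.

```python
# Toy sketch of self-play reinforcement learning against a fast simulator.
# FakeCivSim is a made-up stand-in for a headless game simulator; the actions
# and rewards are invented. The point is that the loop consumes no human data.

import random
from collections import defaultdict

class FakeCivSim:
    """Stand-in for a fast headless simulator (NOT the real Civ6 engine)."""
    ACTIONS = ["build_campus", "build_unit", "chop"]

    def reset(self):
        self.turn = 0

    def step(self, action):
        self.turn += 1
        # Made-up reward: pretend campuses are what wins on this toy map.
        reward = 1.0 if action == "build_campus" else 0.0
        done = self.turn >= 10
        return reward, done

def self_play(episodes=2000, lr=0.1, eps=0.1):
    sim = FakeCivSim()
    value = defaultdict(float)              # action -> running value estimate
    for _ in range(episodes):
        sim.reset()
        done = False
        while not done:
            if random.random() < eps:       # explore
                action = random.choice(FakeCivSim.ACTIONS)
            else:                           # exploit the current estimates
                action = max(FakeCivSim.ACTIONS, key=lambda a: value[a])
            reward, done = sim.step(action)
            value[action] += lr * (reward - value[action])  # moving-average update
    return dict(value)

print(self_play())   # "build_campus" ends up with the highest estimated value
```

A real version would need a proper state representation, a policy/value network and a far richer reward, but the bottleneck stays the same: the speed of the simulator, not the availability of logged games.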
 
Yeah well, I would not start without actual game history. Reinforcement learning from self-generated data is tricky, and in the case of Civ the hardest part is the generator itself. We are slipping into an expert debate, but you are mimicking the AlphaGo learning method, learning the game "as a whole", and I don't feel this is the right approach at all. Really not. Even AlphaGo's training started from human game history before generating its own examples.
In the case of Civ I'd break out the various cognitive tasks and train that many modules; maybe a big factorisation will emerge later on. It's definitely not like Go, where the complexity can be broken down into a holistic myriad of the same atomic action.
But splitting tactics, strategy, and economy into separate tasks is intuitively where I'd go, along the lines of the sketch below.
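
Purely as an illustration of what I mean (every class, method, and threshold below is invented for this sketch): each cognitive task gets its own module, starting as a hand-written heuristic, and any single module can later be swapped for a model trained on that narrower task.

```python
# Hypothetical decomposition of a Civ AI into separate cognitive modules.
# All names and thresholds here are invented; each module could be a heuristic
# today and be replaced by a learned model trained on that single task later.

from dataclasses import dataclass
from typing import List

@dataclass
class GameState:
    science_per_turn: float
    at_war: bool

class EconomyModule:
    def plan(self, state: GameState) -> List[str]:
        return ["queue_campus"] if state.science_per_turn < 20 else ["queue_commercial_hub"]

class StrategyModule:
    def plan(self, state: GameState) -> List[str]:
        return ["raise_army"] if state.at_war else ["settle_new_city"]

class TacticsModule:
    def plan(self, state: GameState) -> List[str]:
        return ["fortify_frontline_units"] if state.at_war else []

class CivAI:
    """Top-level arbiter that merges each module's plan into one turn's orders."""
    def __init__(self):
        self.modules = [EconomyModule(), StrategyModule(), TacticsModule()]

    def take_turn(self, state: GameState) -> List[str]:
        orders: List[str] = []
        for module in self.modules:
            orders.extend(module.plan(state))
        return orders

print(CivAI().take_turn(GameState(science_per_turn=12.0, at_war=False)))
# -> ['queue_campus', 'settle_new_city']
```

The top-level arbiter stays dumb on purpose; a bigger factorisation, if one exists, can come later.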
 
This guy machine learns.
People are often misinformed about just how powerful simple behavior trees can be in a game like Civ. One core reason to use ML in the first place is that you want to discover how inputs map to outputs, but for many of Civ's systems we already know the answer. As an example, we know Rationalism gives you weakly more science (weakly more = a number that is zero or positive, never negative); in fact, we can compute precisely what that value is at any point in the game. We also know that if you want more science, you build more Campuses and slot Rationalism/Natural Philosophy. So the question of "how do we get more science" can be answered well enough by simply performing those actions. You can skip a lot of work by not over-abstracting.

And that extremely "simple" method covers a lot of areas of the game, as in the sketch below. You can obviously land somewhere between a behavior tree and "machine learning", but never use something more complicated than you need.
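
To make that concrete, here is a minimal sketch of such a branch, with made-up thresholds and action names; the point is how little machinery the "more science" case actually needs.

```python
# A trivial behavior-tree-style rule for the "how do we get more science" case.
# Thresholds and action names are made up; the point is that a fixed rule already
# maps the input (low science) to a known-good output (Campuses + Rationalism).

def science_branch(science_per_turn, num_cities, num_campuses, has_rationalism):
    """Return an ordered list of actions; empty means science is already fine."""
    if science_per_turn >= 10 * num_cities:
        return []                                   # science is fine, do nothing
    actions = []
    if num_campuses < num_cities:
        actions.append("queue_campus_in_lowest_science_city")
    if not has_rationalism:
        actions.append("slot_rationalism_or_natural_philosophy")
    return actions

print(science_branch(science_per_turn=18, num_cities=4, num_campuses=2, has_rationalism=False))
# -> ['queue_campus_in_lowest_science_city', 'slot_rationalism_or_natural_philosophy']
```

One branch like this per yield, plus a few for emergencies, already covers a surprising amount of the macro game with no training data in sight.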
I think ML-style improvement might be most useful for combat, though; that's perhaps the system that maps best onto classic board games. If someone had a method by which the AI could take an army and reliably capture a city given a reasonable offensive advantage, without too many degenerate cases, then we could probably finally put the 1UPT threads to rest. The rest of how the AI plays can be fixed by editing what's already there, plus possibly a few hidden tricks (like bonus district adjacency). The goal is for the AI to be fun to play against; it doesn't need to be complex for complexity's sake, nor does it need to be optimal/perfect.

I also really don't think most Deity players would actually enjoy facing decently competent AIs. They'd have to bring back the Civ4 flavor text: "Good luck, sucker!"
 