Advanced Civ

The AI for naval landings doesn't have this kind of lightweight mode. It won't set sail without a proper Stack of Doom in cargo.
So the AI will snipe an undefended city if transport isn't involved, and it will transport SoDs, but it won't use transports to snipe.

The Galleys that keep passing your island have Unit AI type "explore" or "settler-sea" and they're stuck in those roles unless they entirely run out of things to do.
How much of a hassle would it be to add some flexibility? I can think of a few approaches, but "doable" is not the same as "going to get done".

The fundamental problem is that, beyond looking at the game era, there is no AI evaluation of the adversity that might be encountered after making landfall, and this can't easily be amended because the decision of which city to target is made at a later point.
Over land, doesn't the AI size the attacking force based on the defending force? Or are SoD always constructed without reference to a target, and then set loose?
 
So the AI will snipe an undefended city if transport isn't involved, and it will transport SoDs, but it won't use transports to snipe. [...] Over land, doesn't the AI size the attacking force based on the defending force? Or are SoD always constructed without reference to a target, and then set loose?
Land stacks use AI_attackCityMove, which also checks for
iGroupSz >= AI_stackOfDoomExtra()
upfront. But AI_assaultSeaMove requires twice stackOfDoomExtra. ("stackOfDoomExtra" is a pretty bad function name; it's not any "extra" in this context, and half a dozen units shouldn't qualify as a "stack of doom.") In your savegame, Mansa Musa's stackOfDoomExtra is 4 and he has a city attack stack with exactly 4 units ready. He had probably been preparing for war before you declared war on him.
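
For concreteness, a minimal illustration of the two size gates (not the actual DLL code, just the relationship between the thresholds):

// Illustrative sketch only; the real checks live in CvUnitAI with much more context.
#include <iostream>

bool landAttackReady(int iGroupSz, int iStackOfDoomExtra) {
    return iGroupSz >= iStackOfDoomExtra;         // AI_attackCityMove-style gate
}

bool seaAssaultReady(int iGroupSz, int iStackOfDoomExtra) {
    return iGroupSz >= 2 * iStackOfDoomExtra;     // AI_assaultSeaMove-style gate
}

int main() {
    int iStackOfDoomExtra = 4; // Mansa Musa's value in the savegame discussed above
    std::cout << landAttackReady(4, iStackOfDoomExtra) << "\n";  // 1: land attack stack is ready
    std::cout << seaAssaultReady(4, iStackOfDoomExtra) << "\n";  // 0: not enough for a naval assault
    return 0;
}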

I've made a small change to AI_assaultSeaMove now: In a non-"total" war, when AI_assaultSeaReinforce has failed, i.e. when there are stackOfDoomExtra units for a "reinforce" mission but no place to send reinforcements to and fewer than 2*stackOfDoomExtra units, then consider an attack against a landmass with at most 2 enemy cities through AI_assaultSeaTransport. That function estimates the strength of the local defenders and takes no action if all potential target cities appear too well defended. With this change, Rome invades your city with two fully loaded Galleys after some 20-odd turns.
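
A rough sketch of that narrow fallback condition (parameter names and structure simplified from the description above, not copied from the DLL):

// Illustration only; the actual change sits inside CvUnitAI::AI_assaultSeaMove.
bool considerLimitedSeaAssault(bool bTotalWar, bool bReinforceFailed,
                               int iAssaultUnits, int iStackOfDoomExtra,
                               int iEnemyCitiesOnTargetLandmass) {
    if (bTotalWar || !bReinforceFailed)
        return false;
    if (iAssaultUnits < iStackOfDoomExtra || iAssaultUnits >= 2 * iStackOfDoomExtra)
        return false;
    // Only go after small targets; AI_assaultSeaTransport still vetoes the attack
    // if every candidate city looks too well defended.
    return iEnemyCitiesOnTargetLandmass <= 2;
}
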
How much of a hassle would it be to add some flexibility? I can think of a few approaches, but "doable" is not the same as "going to get done".
There's already code for converting from EXPLORE_SEA to ASSAULT_SEA and from SETTLER_SEA to ASSAULT_SEA, and I've messed with that before and got into a bit of trouble. In one case, a ship kept oscillating between two AI types, in another, the City AI kept producing new ships for the one AI type and the Unit AI kept converting them to the other AI type. The way I'd tend to approach it would be to check if the ship happens to be near an assault stack (or a group of potential city attackers that could form an assault stack) and whether that stack needs just one more transport – i.e. a very narrow condition for conversion. And I wouldn't bother to convert "assault" ships back to civilian roles and would allow cities to replace the converted explorers and settler transports at the discretion of the City AI. Still a hassle; probably not worth it.
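
If I were to attempt it, the conversion condition might look roughly like this (a sketch with placeholder names, not actual UnitAI code):

// Very narrow conversion condition, as described above (illustration only).
bool shouldConvertToAssaultSea(bool bNearAssaultStack, int iTransportsNeeded,
                               int iTransportsAvailable) {
    // Only flip a civilian ship (UNITAI_EXPLORE_SEA / UNITAI_SETTLER_SEA) to
    // assault duty when a nearby stack is exactly one transport short.
    return bNearAssaultStack && (iTransportsNeeded - iTransportsAvailable == 1);
}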
 
Have you ever looked into the improvement AI, or do you know if K-Mod changed anything there? Didn't do a full dig through K-Mod's changelog but its overall feature list doesn't mention it.

I feel like improvement placement is a major weakness of the AI. In particular, I often notice the AI oscillating between tile improvements, sometimes even deciding to replace fully matured towns and villages with a farm. I am mostly familiar with the vanilla worker AI, and it seems to be highly locally optimised, i.e. picking the best improvement for a tile based on a complex yield value heuristic, always based on the current state of the city. The AI does not seem to have a long term plan for what a fully developed city grid should look like at all, which explains the perceived indecisiveness as the AI changes its mind on the best tile improvement on a dime because some new building or technologies pushed the heuristic in another direction by a few points.

Even rudimentary player strategies operate completely differently from that. Usually, players assign each of their cities a role out of (roughly): commerce city, military production city, wonder production city, great people city. Having cities that strike an optimised balance between all three yields, like the AI code often encourages, is considered bad play because it makes suboptimal use of buildings and national wonders. If a role has been decided for a city, the choice of improvement is generally obvious: commerce cities prefer cottages over all on flat tiles and windmills over mines, production cities prefer workshop/watermill on flat tiles and mines over windmills, great people cities prefer food above all. Then the major question becomes where those preferred improvements have to be replaced with extra food from farms/mines to produce enough food to work all those tiles, if necessary.

Of course there are a few more wrinkles to this, such as that resources should always be developed with their corresponding improvements, and that you may need to set aside some production tiles to allow developing commerce/GP cities and maybe keep some forests in reserve for health or rush-production purposes. And the trade-off between the "final" end-game development of the city and the currently preferred improvements can be difficult because of happiness limits on the maximum number of worked tiles and changing improvement yields with tech/civics, but this still seems like a much more effective strategy than what the current AI heuristic allows, even though it does not delve nearly as deeply into the possibility space.
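
For illustration, the role-to-improvement preference I'm describing could be written down as something like the toy lookup below (the enum and function names are made up, not AdvCiv identifiers):

#include <string>

enum CityRole { ROLE_COMMERCE, ROLE_PRODUCTION, ROLE_GREAT_PEOPLE };

std::string preferredFlatImprovement(CityRole eRole, bool bNeedMoreFood) {
    if (bNeedMoreFood)
        return "Farm";          // a food deficit overrides the role preference
    switch (eRole) {
    case ROLE_COMMERCE:     return "Cottage";
    case ROLE_PRODUCTION:   return "Workshop"; // or Watermill along rivers
    case ROLE_GREAT_PEOPLE: return "Farm";     // food above all
    }
    return "Cottage";
}

std::string preferredHillImprovement(CityRole eRole) {
    return eRole == ROLE_COMMERCE ? "Windmill" : "Mine";
}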

I guess the point of this essay is me making the case for improvements on the improvement AI. And it's relevant because it affects how much you could automate your workers, how annoying conquered AI land can be, and how well the AI is doing economically overall. Do you think you're ever going to improve something in that direction?
 
Have you ever looked into the improvement AI, or do you know if K-Mod changed anything there? Didn't do a full dig through K-Mod's changelog but its overall feature list doesn't mention it.
A lot of tweaks, especially to the evaluation of chopping, food and upgradable improvements (Cottage, ...). I've gone through the commit history of CvCityAI and spotted some 30 relevant commits. These seem to be the most significant ones (in chronological order): [...]

I don't generally pay close attention to AI terrain improvements. (I do keep an eye on the overall distribution via the Statistics screen.) So, fwiw, my impression is that it's quite well polished and I wouldn't consider going back to the drawing board. I've just taken a look at some AI cities in an ongoing game of mine; I don't know. The AI certainly takes advantage of national wonders, and some cities without national wonders also seem reasonably specialized; others not at all. The first thing I'd tend to look into for more specialization is city placement, which I know doesn't take city roles into account at all.
[...] The AI does not seem to have a long term plan for what a fully developed city grid should look like at all, which explains the perceived indecisiveness as the AI changes its mind on the best tile improvement on a dime because some new building or technologies pushed the heuristic in another direction by a few points. [...]
E.g. when the AI discovers Chemistry and then spams Workshops. Why would that be a mistake? I guess – because worker turns have an economic value; because overbuilt Towns and Villages can't be easily restored; because the total yield rate might shift too far from food and commerce to production. I would hope that all these issues are already somehow taken into account. One other thing that comes to mind and probably isn't addressed is that the CityAI might construct e.g. Dry Docks in cities that will later revert to a non-production role.

Two more notes (you may already be aware):
• There are some unused AI_CITY_ROLE... defines in BtS' AI_Defines.h, probably added and then abandoned by Blake.
• The views of veteran players on city specialization seem to have shifted a bit in the last few years:
City specialization - this is a bit outdated concept as well..not completely irrelevant, but again, something I would not overthink at this stage. [...] What the city specialization concept ended up doing for players, including me for a long time, is hamstringing them into a certain way of play such as farming everything in one city, cottaging everything in another, building mines in another without really giving thought to better things those workers and those cities could be doing...especially early on.
 
Thanks! Good to know you improved this AI quite a bit. Will peruse the diffs when the time of day is more conducive to that kind of thing.

And that quote about player strategies makes sense; I guess a lot of it is how best practices get generalised so they can be widely shared, and then are taken as sacred even though things are more nuanced.
 
About that: what you, as a human player, think is the best strategy or placement is not necessarily the best move! Human players usually tend to apply a fixed strategy and to roleplay to some extent (i.e. you have a "mission" in your mind, and you try hard to fulfill it, ignoring other problems). I am not saying the AI logic currently is the best one, but we can be sure we would not recognize the best logic at all. The best players may come close to it, but the truly best logic would make strange moves based on micro-calculations every turn and seem totally random.

Whether an improvement is the best one to place on a tile requires calculations not only for the tile and the nearest city but for the entire player's empire. A cottage may seem like the best improvement in a commerce city, but a mine may allow the city to produce a unit every turn instead of every 2 turns. That simple logic (I don't know if the AI currently checks for things like that) could be seen as "random" by us, unless you see what's happening on the AI side.

Machine learning with genetic algorithms would greatly help with this aspect. But we can already run some basic tests by assigning different weights and/or AI logic[*] to different AI players and playing multiple games with AI autoplay. After many games, statistics would show us which strategies work best under the current game rules. Then you could apply those values or AI logics as a baseline and further tweak them to find better ones. Finally, those routines or values could be assigned to difficulty levels, so that the better-optimized ones would correspond to the hardest levels.

*It would not be unreasonable to create a parallel new worker AI logic that follows city roles and apply that routine to only a few AI players, instead of changing the current one! But that's just an example; surely there are other AI routines that are more relevant or less optimized.
 
I work in Data Engineering, so I am familiar with the magical "using machine learning and genetic algorithms" wand that people who took an introductory course on the subject like to wave around. The Civ4 AI is not only far from using ML but also far from being able to benefit from ML. It's a simple expert system.

I agree with your point that human strategies are broad and elide small optimisations an AI is capable of making. The strength of a computer is that it has easy and immediate access to all the variables a human does not, and it can calculate and extrapolate things that a human cannot.

However, your naive assumption that the AI currently uses its information in such an effective way is not necessarily, and most likely not, true. Being an expert system, the AI largely uses a human strategy, i.e. the strategy of the human(s) who programmed it. This is why I raised my question: it seems like the AI is mostly using the framework and assumptions of the original Firaxis programmer who implemented it (who btw I am 90% sure just was Soren Johnson), ignoring most learnings the vastly larger and more experienced player community has made since then. Unless of course, the lineage of AI mods AdvCiv is based on has incorporated those learnings, hence my question.

Part of the appeal and challenge of 4X strategy games comes from their multiple layers and how those layers interact with each other. This makes the problem of determining a strategy extremely complex, because all of those layers have their own strategies that need to align as effectively as possible. It is essentially infeasible to represent this as just one huge parameterised optimisation problem, and there is no simple success heuristic to optimise against. Basically, Civ4 is not a spam filter or an autocomplete bar.
 
I don't agree with some of your statements... ML can be applied with relative ease to specific AI routines with the current rules. Not sure what that has to do with programming the entire civ4 AI via ML, which obviously makes no sense right now. I consider being at the opposite extreme just as much of an error as waving the magical wand without sense.

I don't buy the "it's not one huge parameterised optimisation problem" argument as a reason to deny that there are specific areas in which the method can be applied.

And I did not make that naive assumption about what the AI does. I even said "I am not saying the AI logic currently is the best one"... lol.
I think you read into my answer what you wanted to read, instead of my words... everybody already agrees the worker AI can be improved. But how? And why?

My concerns were about your proposed AI routine based on player strategy being assumed to be the best. Why? Players doing that doesn't mean it's the best strategy, nor that it's better than the current one! Then in the next post you change your mind about it because of f1rpo's answer, but we have no data to prove any of those statements (even if expert players usually follow the best strategies): we have no data for either the city-role approach or the current one (Soren's AI)! So, back to the start: applying statistical methods to measure how the AI is doing would at least provide a better answer than what we are doing now. And that was my suggestion: backing statements and AI improvements with measurable facts. And Civ4 being a game with built-in AI autoplay capabilities and such a range of tools for analysis, it's perfect for that approach.

Civ4 not being a simple system does not mean that it's impossible to optimize any part of it. That's again a statement without any proven basis, just because it sounds great. Given 10 AI routines, some will do better than others under the game rules. That's why K-Mod has improved the AI; otherwise there would be no point to this mod. The performance of AI routines and parameters (both) can be measured and optimized within some range, and applying statistical methods (similar to ML and genetic algorithms) would be easier than finding the right code that magically works. With the added benefit that we would know how great the improvements to the AI are whenever we change an AI routine.

The worker AI would be a great candidate for that approach. AI city placement would be a problem solvable by statistical methods too (especially the first city). Examples of the "impossible" kind would be the AI routines related to stacks, movement and attack.
 
So, back to the start: applying statistical methods to measure how the AI is doing would at least provide a better answer than what we are doing now. And that was my suggestion: backing statements and AI improvements with measurable facts. And Civ4 being a game with built-in AI autoplay capabilities and such a range of tools for analysis, it's perfect for that approach.

Being a (sort of) data scientist myself, I am very much in favor of letting statistical analyses inform and evaluate strategies. What sucks about Civ is that a single AI autoplay game takes quite some time and is subject to so much random noise, so one would need hundreds of games to get at least a rough idea of what is going on. Indeed, I created the savemap script f1rpo recently integrated into AdvCiv for exactly this purpose (reducing noise), but I still think that a lot of games would be necessary to get some idea of which strategies work and why. That said, the idea of having some parameterized strategic function sounds quite neat.
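
As a back-of-the-envelope check on the "hundreds of games" point (nothing AdvCiv-specific, just the standard two-sample proportion formula with assumed win rates):

#include <cmath>
#include <cstdio>

int main() {
    // How many games per variant to distinguish a 50% from a 60% win rate
    // at 95% confidence (two-sided) with 80% power.
    double p1 = 0.50, p2 = 0.60;
    double zAlpha = 1.96, zBeta = 0.84;
    double n = std::pow(zAlpha + zBeta, 2)
             * (p1 * (1 - p1) + p2 * (1 - p2))
             / std::pow(p2 - p1, 2);
    std::printf("Games needed per variant: about %.0f\n", n); // ~385
    return 0;
}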
 
Something odd I just noticed. Normally when you hover over a resource that another civ is willing to export in the (BUG?) Foreign Advisor, it reports how many of that resource the civ has. However if a trade discussion window is open in the background with any civ, it instead says how much my civ has available to trade, and what my civ is willing to pay for it.
 
@PieceOfMind: Thanks. Will be fixed in the proper v0.97 release that I'm inching toward. I'll play some more turns in my ongoing game, then I'll run a bunch of AI Auto Play tests for various settings.

One more item added to the release notes:
AI civs start with only one free Scout on Immortal. (Basically, I don't think additional exploration units are a good way to ramp up the difficulty. Earlier versions of AdvCiv have already taken away the second free Scout on Emperor, one of the free Archers on Immortal and Deity – and the free Settler on Deity.) [advc.250e]
[...] it seems like the AI is mostly using the framework and assumptions of the original Firaxis programmer who implemented it (who btw I am 90% sure just was Soren Johnson), ignoring most learnings the vastly larger and more experienced player community has made since then. Unless of course, the lineage of AI mods AdvCiv is based on has incorporated those learnings, hence my question.
I also seem to recall (90% sure sounds right) reading in an interview that Soren wrote the entire AI. Perhaps not to be taken quite literally, because the pathfinder is credited to one Casey O'Toole "based off of A* Explorer from 'AI Game Programming Wisdom'" in FAStarNode.h. An answer from Soren in his recent Reddit AMA also kind of confirms that he was the main AI programmer:
Spoiler :
S. Johnson said:
[...] I had a system in place where the AI would play itself and pause after 400 turns or so. Back then, computers were slower, so I had to run it each night right before leaving, so the first thing I always saw in the morning is how the AI was currently doing. It was usually pretty obvious if I screwed it up somehow! We have a similar system with Old World where AI test runs are done automatically, now as part of a unit test process.
source
In some cases, the whole approach of his AI code is flawed. Not sure if that's true for terrain improvements. But even if explicit city roles are needed, one could hopefully add them in some minimally invasive way that avoids replacing mature code.
[...] everybody already agrees the worker AI can be improved. But how? And why?
When it comes to automated workers, my impression is that the poor scheduling is the worst offender. It takes automated workers too long to drop what they're doing and go where they're needed most. Can't really be helped I think. For the AI civs, I feel that I've alleviated the issue by letting AI cities produce more workers than a human player would use.

As for statistical analyses: There are plenty of magic constants in the code that could be optimized through experimentation, but this doesn't seem like an efficient use of anyone's time (see xyx's post), especially so long as there are still glaring omissions in some parts of the AI. In the context of some hypothetical new Civ game with a new AI, I've been wondering (idly) whether the game rules could be designed in a way that facilitates the use of deep learning. Can't really think of anything that wouldn't benefit other AI methods as well. Punishing, complex gameplay generally seems like bad news for an AI. But, in any case, (as someone not well familiar with neural networks) I'd worry that an (under-)trained network would frequently make needlessly puzzling decisions or near-optimal decisions when the optimal decision should be obvious.
AI city placement would be a problem solvable by statistical methods too (specially the first city).
For finding fair starting positions, some kind of intelligent algorithm could indeed be useful. Computing a measure of fairness for a given set of starting locations doesn't seem too challenging, but it's infeasible to do that for every possible assignment of players to potential starting locations. That said, I don't think I'd want to deal with a library for genetic programming or reinforcement learning for this; probably some ad-hoc heuristic search would do.
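
To sketch the kind of ad-hoc heuristic search I have in mind (the fairness measure and all names below are made up for illustration, not taken from the mod): start from some assignment of players to candidate sites and keep applying pairwise swaps as long as they narrow the gap between the worst-off and best-off player.

#include <algorithm>
#include <cstddef>
#include <vector>

// siteValue[p][s]: value of candidate site s from player p's perspective
// (in AdvCiv terms this might come from an AI found-value computation).
double fairness(const std::vector<std::vector<double> >& siteValue,
                const std::vector<int>& assignment) {
    double worst = 1e18, best = 0;
    for (std::size_t p = 0; p < assignment.size(); ++p) {
        double v = siteValue[p][assignment[p]];
        worst = std::min(worst, v);
        best = std::max(best, v);
    }
    return best > 0 ? worst / best : 1.0; // 1 = perfectly even, 0 = very unfair
}

void improveBySwaps(const std::vector<std::vector<double> >& siteValue,
                    std::vector<int>& assignment) {
    bool improved = true;
    while (improved) {
        improved = false;
        double current = fairness(siteValue, assignment);
        for (std::size_t i = 0; i < assignment.size(); ++i) {
            for (std::size_t j = i + 1; j < assignment.size(); ++j) {
                std::swap(assignment[i], assignment[j]);
                double swapped = fairness(siteValue, assignment);
                if (swapped > current) {
                    current = swapped;                       // keep the better assignment
                    improved = true;
                } else {
                    std::swap(assignment[i], assignment[j]); // undo
                }
            }
        }
    }
}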
 
For some reason I can't build the Apollo Program which makes me unable to win a space race victory. I have researched Rocketry but Apollo Program doesn't appear in the building list:

Spoiler : [screenshots]

I can provide the save file if needed but I can reproduce the bug by just starting a new game and giving myself all techs with WorldBuilder.

Interestingly, the Space Elevator is also missing, so I thought I might have accidentally disabled Space Race victories, but that's not the case (it appears on the victory screen). Other buildings that require a certain victory condition to be enabled, such as the United Nations, work normally. I also checked that I can build the Apollo Program in unmodded BtS as well as in K-Mod, so it's not an issue with the base game files either.
 
Mazal tov dear f1rpo,
Just got the update that you uploaded 097.
I salute you for the number of hours you put into it.
You're fast! Thanks. :D

For some reason I can't build the Apollo Program which makes me unable to win a space race victory. [...]
Thank you also. This bug "only" affects games started with a recent version (0.97-pre, hopefully not 0.96e(?)), which is one reason I didn't notice it. I've uploaded v0.97a now, which should fix the problem in old/new/any saves and in new games.

A couple of other last-minute changes:

Fixed this issue:
In Rise and Fall, the dotmap leaks info between chapters: [...] It would be nice to remember the dotmap in case I return to a given civ [...]
(But not any of the other open issues with the city dotmap.) Turns out that the dotmap can already handle player changes, so the DLL just needs to report a HotSeatPlayerSwitch event to Python (Git commit).

City trades: The stricter AI attitude threshold now applies only when the AI owner of a city has at least 20% nationality (was 10% in the pre-release versions). The recipient still needs to have at least 10% nationality.
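
(For clarity, the thresholds as a sketch; function and variable names are made up for illustration, not the actual trade code.)

bool cityTradeAllowed(int iOwnerNationalityPercent, int iRecipientNationalityPercent,
                      bool bMeetsStricterAttitudeThreshold, bool bMeetsNormalAttitudeThreshold) {
    if (iRecipientNationalityPercent < 10)
        return false; // the recipient still needs at least 10% nationality
    if (iOwnerNationalityPercent >= 20) // was 10% in the pre-release versions
        return bMeetsStricterAttitudeThreshold;
    return bMeetsNormalAttitudeThreshold;
}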
 
Thanks for the great update. I'm loving the new options.

Did you notice any issues with the AI refusing to settle new cities? I've run a few tests and each AI will build a settler but then refuses to settle a new city. This has been observed for over 750 turns in a game. They are sitting on huge stockpiles of gold. It may be something my mod-mod has done ... but that wasn't an issue with the prior version.
 
@Cruiser76: They never found a second city? I think I would notice that. So it's like this earlier issue
[...] The other major issue is that some AI refuse to expand. They will build a settler, but just let it sit inside the initial city. They will wait way too long before settling the second city. This doesn’t make any sense and sets them back.
but worse? As I wrote then, to narrow the problem down, it would be helpful to know if the AI has any planned city sites:
AI city sites are shown in Debug mode when yield icons are enabled. (One may have to enter a city screen after enabling yield icons to make the game draw the circles.)
Also, holding down Alt while hovering over a tile in debug mode will show the AI found value of that tile if any AI civ has a positive found value.

While I'm digging up old posts: Seems like the Space victory bug was already present in v0.96e. :( (Though it would've affected the human player too I think ... :undecide:)
Is there a way to edit the Apollo Program or other projects in the files? I had a game where none of the AI's ever built it (and thus no competition for space), [...].
 
Yes, no second city. I somehow resolved the issue before, but don't remember what I did. It was sporadic before, but this time it is every AI player.

In debug mode, they all have multiple city sites and found values appear to be very high (as they should be, important strategic resources available).

P.S. Early AI improvements and research are better. They are consistently researching the right techs and improving resources with the right improvement.
 
ML can be applied with relative ease to specific AI routines with the current rules. Not sure what that has to do with programming the entire civ4 AI via ML, which obviously makes no sense right now.
My point was that ML cannot work unless you are able to close the feedback loop between the parameters under ML optimisation and the output the resulting parameterised algorithms produce, and to do so in a way that tracks with how we want the AI to behave in the game. We cannot just pick and choose a subroutine like you suggest and then judge its performance on some overall metric like AI success. You would have to find a heuristic that measures the actual success of, e.g., city placement, which is an arbitrary choice that also relies on expert knowledge. And we do not have such a heuristic right now; at least I do not see an obvious one. Setting up that framing is what ML is all about, and it's not something you can delegate to an algorithm. You have to set the constraints of ML yourself.

I think you read into my answer what you wanted to read, instead of my words...
I think you're taking this much too personally. I do not even disagree with your post; my point was that it was too vague, superficial and non-specific to be a useful addition to the discussion. "Have you tried ML?" is just one notch above "have you tried algorithms?" in its abstractness.

My concerns were about your proposed AI routine based on player strategy being assumed to be the best. Why? Players doing that doesn't mean it's the best strategy, nor that it's better than the current one! Then in the next post you change your mind about it because of f1rpo's answer, but we have no data to prove any of those statements (even if expert players usually follow the best strategies)
Obviously not, but the "best strategy" is your goalpost, and imo not a very useful one. It's not relevant whether player strategies are the "best strategy"; we know enough to say that player strategies are superior to the current AI's strategies, so it's useful to discuss what we can learn from player strategies to improve the existing AI. That's why I think lofty appeals to having ML solve all our problems are unhelpful distractions. Realistically, the AI is and will remain largely an expert system, and the discussion should be whether the "expert" knowledge should be confined to Soren Johnson's personal experience, probably from MP sessions inside Firaxis, or to the collective experience of the much larger community over a much longer time period. Neither is inherently flawless or even better, but I don't think it's a reach at all to say that this amount of communication and iteration has resulted in some improvement.

The paragraph you are focusing on was intended as an illustration of that fact, and as a contrast between the first principles of how common player strategies approach the problem and how the AI approaches it; it even mentions that those strategies elide optimisations the AI could make and that they are tailored to a more macro-level human perspective.

Lastly, I do not "change my mind" in the next post. My whole goal here was to have a discussion, and obviously part of that is learning new information and taking it into account when forming my opinion. That's especially true when it comes from f1rpo's perspective, because I trust his expertise on the subject (as clearly exceeding mine) and his point was actually substantiated. I am here as much to learn as I am to argue for my point of view.

I find it telling that this eludes you while you are also so defensive of your own post.

That's again a statement without any proven basis.
How about you lead with proof, preferably a proof of concept, or at least substance? Everything you've said so far has been superficial theoretical concepts and abstract ideas. You do not get to ask others for proof when you're not willing to put in any effort of your own. Have you even seen the Civ4 AI code as it is?

But, in any case, (as someone not well familiar with neural networks) I'd worry that an (under-)trained network would frequently make needlessly puzzling decisions or near-optimal decisions when the optimal decision should be obvious.
Yes, I would also be extremely worried about this behaviour of ML trained AI opponents even if their training was feasible.

I've been following the development of AlphaStar (Google DeepMind's StarCraft 2 AI) quite closely over the last few years, and it's really interesting to watch both as a fan of StarCraft and of artificial intelligence. It was hilarious but also illuminating to see the AI choose strategies that no human ever would, and whose efficacy is still actively debated in the community. Most notable, though, is that the AI would constantly exploit its advantages as an AI (perfect control, global awareness, massive multitasking) in ways that made some people question whether it is even desirable to play against an opponent that "feels" like that. It's an entirely different experience.

I think that's important because, more than in StarCraft, roleplaying is an important element of the Civ4 AI. Weird but effective may be acceptable (to some) in a competitive RTS, but here we also expect the AI to follow patterns of decision that are parseable to the human mind.

For finding fair starting positions, some kind of intelligent algorithm could indeed be useful. Computing a measure of fairness for a given set of starting locations doesn't seem too challenging, but it's infeasible to do that for every possible assignment of players to potential starting locations. That said, I don't think I'd want to deal with a library for genetic programming or reinforcement learning for this; probably some ad-hoc heuristic search would do.
At least that's a sufficiently self contained algorithm to attempt ML on, yeah.
 
In debug mode, they all have multiple city sites and found values appear to be very high (as they should be, important strategic resources available).
Probably a lack of escort units then. (Come to think of it, if there were no city sites the AI would eventually delete its settlers.) I'm attaching a DLL with some logging code added to the settler AI routines. Will create and write to BBAI.log if MessageLog is set to 1 in CivilizationIV.ini. Also has assertions enabled.

By reviewing those logs, I've noticed that there is a bit of a problem with the settler for the 2nd city getting produced when the AI has just 1 or 2 units in its capital. It likes to keep 2 as defenders, so the settler may remain idle until 1 or 2 more units have been produced. In the attached DLL, I've already addressed this problem by allowing the AI to leave just 1 defender in the capital if it hasn't met any human player yet :mischief: and by ensuring that 2 defenders are available before starting to produce a settler. It's conceivable that this somehow solves your problem; I doubt it, but at least the log should help diagnose the problem. For example, here's a log (excerpt) where that new AI code gets used ("city defender escorts settler"); specifically, Sitting Bull lets a Warrior accompany the settler while an Archer stays in the capital.
Spoiler :
Player 4 (Amerindian Empire) setTurnActive for turn 46 (2160 BC)
Player 4 (Amerindian Empire) has 1 cities, 3 pop, 35 power, 90 tech percent
Team 4 has met: 0,1,2,3,
AI_settleMove: New - city defender at (27,23) escorts settler
AI_settleMove: Moving settler at (27,23)
AI_settleMove: Going through 4 city sites
AI_settleMove: Best found value in area: 2268
AI_settleMove: Considering to move to site (31,22)
AI_settleMove: Value of site: 2268
AI_settleMove: Value adjusted to distance: 324000
AI_settleMove: New best site
AI_settleMove: Considering to move to site (24,18)
AI_settleMove: Value of site: 2159
AI_settleMove: Value adjusted to distance: 239888
AI_settleMove: Considering to move to site (23,24)
AI_settleMove: Value of site: 1810
AI_settleMove: Value adjusted to distance: 258571
AI_settleMove: Considering to move to site (29,18)
AI_settleMove: Value of site: 1742
AI_settleMove: Value adjusted to distance: 193555
Settler heading for site 31, 22
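
For reference, roughly the shape of the defender/escort tweak described above (an illustrative sketch, not the actual AI_settleMove code):

#include <iostream>

bool canEscortSettler(int iDefendersInCapital, bool bMetHuman) {
    int iRequired = bMetHuman ? 2 : 1;        // relaxed requirement before first human contact
    return iDefendersInCapital > iRequired;   // a spare unit is available as escort
}

bool canStartSettler(int iDefendersInCapital) {
    return iDefendersInCapital >= 2;          // don't start a settler before this
}

int main() {
    std::cout << canEscortSettler(2, false) << "\n"; // 1: the Warrior can escort (as in the log)
    std::cout << canEscortSettler(2, true) << "\n";  // 0: both defenders stay home
    std::cout << canStartSettler(1) << "\n";         // 0: train another defender first
    return 0;
}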

@Leoreth: I've been meaning to keep an eye out for AI research applied to FreeCiv; you've reminded me of that. Here's a paper published in March (Springer Link):
Playing a Strategy Game with Knowledge-Based Reinforcement Learning
Very much based on expert knowledge, with RL (not involving neural networks) just for conflict resolution. (Can't read all that right now, but I hope I'll get around to it.) Perhaps another indication that Deep Learning for Civ isn't around the corner. The same company (Arago) had previously, 2016 through 2018 at least, called on FreeCiv players to help train its "HIRO" AI; I guess they've abandoned that.
 

Attachments
  • CvGameCoreDLL.zip (4.3 MB)
Hey, I just updated to your attached game core and ran a test game. I started getting several assertion errors, which I hope provide some insight. The last two at least look relevant to this issue. The game was acting very buggy and crashing when I would leave the game window. I thought this was enough to at least give you an idea of what is going on.

#1: CvTeam.cpp, 3983, CvTeam::getBestKnownTechScorePercent, iBestKnownTechScorePercent >= iOurTechScore

#2: InvasionGraph.cpp, 1127, InvasionGraph::Node::step, defCities <= 0, No typical garrison unit found

#3: InvasionGraph.cpp, 1166, InvasionGraph::Node::step, !noGuardUnit || defCities <= 0

Edit: I couldn’t find a BBAI log.
 