The classic board-game AI algorithms, as far as I know, use a tree-search method to decide the next move: they evaluate pretty much every possible future position of the board.
If that's what the Civ games did, they'd be way more powerful than they are now, but also a lot slower. From what I've seen in the Civ 5 code, it basically selects a bunch of potential moves from a set of options (bombard a unit, go heal, take a city), each with a target. Then it assigns a simple priority to each of these based on what kind of tactical strategy it wants to pursue. Then it just loops through the moves, executing the ones that are still possible.
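To make that concrete, here's a minimal sketch (not Firaxis code; all names, priorities, and move kinds are made up) of that selection loop: candidate moves get a priority from the current tactical posture, then get executed in priority order, skipping any that have become impossible along the way.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TacticalMove:
    kind: str          # e.g. "bombard", "heal", "capture_city" (hypothetical names)
    target: str        # target id; a plain string for this sketch
    priority: int = 0

# Hypothetical priority tables per tactical posture; the real game
# reads this kind of data from XML.
POSTURE_PRIORITIES = {
    "aggressive": {"bombard": 50, "capture_city": 80, "heal": 10},
    "defensive":  {"bombard": 30, "capture_city": 20, "heal": 60},
}

def plan_moves(candidates: list[TacticalMove], posture: str) -> list[TacticalMove]:
    # Assign each candidate a priority based on the chosen posture.
    table = POSTURE_PRIORITIES[posture]
    for m in candidates:
        m.priority = table.get(m.kind, 0)
    return sorted(candidates, key=lambda m: m.priority, reverse=True)

def execute(moves: list[TacticalMove],
            still_possible: Callable[[TacticalMove], bool]) -> list[str]:
    # Loop through the planned moves; earlier moves may invalidate later ones.
    done = []
    for m in moves:
        if still_possible(m):
            done.append(f"{m.kind}->{m.target}")
    return done
```

The point is that there is no lookahead at all: the "intelligence" lives entirely in the priority tables.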
From what I've seen in a lot of AI test games, that system could still mostly be there in Civ 6 (it definitely still includes the tactical move types, as seen in the XML). The only big change I've noticed is that some of the possible tactical moves have been split out and made part of operational behavior trees, which ends up forbidding some of the move options; I suppose the goal is making the units more 'coordinated'.
It doesn't really do any higher-level thinking, such as dealing with uncertainty or calculating probabilities. As far as I can tell it doesn't even directly consider that the enemy gets a turn next, or how to influence that. Civ 5 does calculate some 'danger areas', but not in the sense of 'if I don't kill unit x, it can shoot at position y'. It's also noteworthy that the Civ 6 AI appears to cheat, in the sense that at least some of its functions (such as deciding whether to attack cities) take in information about units outside of its vision.
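The actual implementation of those danger areas is guesswork on my part, but the idea is cheap to sketch: mark every tile any enemy unit could hit this turn, with no reasoning about which enemies you could remove first. The coordinates and range logic below are illustrative only (a square grid with Chebyshev distance standing in for hex distance).

```python
def danger_map(enemy_units, width, height):
    """Return the set of (x, y) tiles any enemy can attack this turn.

    enemy_units: list of (x, y, attack_range) tuples -- a made-up format
    for this sketch.
    """
    danger = set()
    for ex, ey, rng in enemy_units:
        for x in range(width):
            for y in range(height):
                # Chebyshev distance approximates hex range on a square grid.
                if max(abs(x - ex), abs(y - ey)) <= rng:
                    danger.add((x, y))
    return danger
```

A static map like this tells a unit where it can get shot, but not what would change if it killed a particular attacker first, which is exactly the gap described above.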
Something fuzzier, with weights, could do better, and there are plenty of ways to accomplish fuzziness without making the AI indecisive.
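One cheap example of 'fuzziness without indecisiveness' (my own illustration, not anything from the games): score the options with weights, then sample from a softmax. A low temperature stays close to the argmax, so the AI remains decisive, while higher temperatures add variety.

```python
import math
import random

def soft_pick(options, scores, temperature=0.5, rng=random):
    """Sample an option with probability proportional to exp(score/T)."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    r = rng.random() * total
    for opt, e in zip(options, exps):
        r -= e
        if r <= 0:
            return opt
    return options[-1]  # guard against floating-point rounding
```

With `temperature=0.1` and a clear score gap, the best option wins essentially every time; raising the temperature gradually flattens the distribution.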
One of the most effective methods would probably be something in the style of the AlphaGo system: walk through a bunch of potentially decent moves and their follow-ups (AlphaGo uses a Monte Carlo tree search for this, guided by its networks), and evaluate the strength of the resulting board positions with a well-trained neural net. That could do really well on the Civ tactical game.
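As a toy illustration of the shape of that idea (the real AlphaGo uses Monte Carlo tree search plus policy/value networks; here a plain depth-limited negamax with a stand-in evaluation function plays the same role): expand candidate moves a few plies deep, then score the leaf positions. `moves_fn`, `apply_fn`, and `value_fn` are assumptions of this sketch, not a real API.

```python
def best_move(state, moves_fn, apply_fn, value_fn, depth=2):
    """Pick the move leading to the best position after `depth` plies.

    value_fn(s) must score a state from the perspective of the side to
    move at s -- in a trained system this would be the neural net.
    """
    def negamax(s, d):
        moves = moves_fn(s)
        if d == 0 or not moves:
            return value_fn(s)
        # Opponent's best reply is our worst case, hence the negation.
        return max(-negamax(apply_fn(s, m), d - 1) for m in moves)

    return max(moves_fn(state),
               key=lambda m: -negamax(apply_fn(state, m), depth - 1))
```

The search stays cheap as long as the branching factor is pruned down to 'potentially decent' moves, which is exactly what the policy side of an AlphaGo-style system is for.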
Personally, for the Civ tactical game, I'd play it safe and stick to something cheap: an initial algorithm determines the order in which the units will move, then for each unit we loop through all its possible moves, run a handcrafted board-score evaluation on each outcome, and pick the best one. It can be supplemented by, for example, pre-selecting a few targets against which damage dealt is valued extra. Also include things like distance to a target city in the evaluation, so that pathfinding can be avoided entirely (simple bug pathing works well enough around mountains and the like). That should work well and stay cheap (milliseconds) without compromising extensibility.
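The skeleton of that greedy scheme is only a few lines. Everything here (the callback names, the notion of a 'board') is illustrative; the real work would be in the handcrafted `score_board` terms (damage to priority targets, distance to the target city, danger, and so on).

```python
def play_turn(units, legal_moves, apply_move, score_board, board):
    """Greedy one-ply tactical turn: per unit, pick the best-scoring move.

    units:        units in the pre-decided move order
    legal_moves:  (board, unit) -> list of candidate moves
    apply_move:   (board, unit, move) -> new board
    score_board:  board -> float, the handcrafted evaluation
    """
    chosen = []
    for unit in units:
        best = max(legal_moves(board, unit),
                   key=lambda mv: score_board(apply_move(board, unit, mv)))
        board = apply_move(board, unit, best)  # later units see the updated board
        chosen.append((unit, best))
    return board, chosen
```

Because each unit commits before the next one evaluates, the cost is linear in units times moves per unit, which is what keeps it in the milliseconds range.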
Sadly, machine learning doesn't seem advanced enough yet to figure out long-term plans on its own, especially not with the massive number of inputs Civ has. So I think we'll probably be stuck either preconfiguring certain game plans, or sticking to weight-based systems like the ones the Civ games have been using. These can still be surprisingly powerful if tweaked correctly and assisted by state-machine-like systems (to, for example, force the AI into a war state). Top players may beat them handily, but most players, who don't know the exact ideal balances, could end up being outpaced by at least one or two civs every game.
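A tiny sketch of what 'a state machine assisting the weights' could look like (entirely hypothetical names and numbers): a grand-strategy state gates which weight table the rest of the AI uses, and an explicit trigger can force the war state regardless of the weights.

```python
# Hypothetical weight tables, gated by the strategic state.
WEIGHTS = {
    "peace": {"expand": 1.0, "build_army": 0.3, "attack": 0.0},
    "war":   {"expand": 0.2, "build_army": 1.0, "attack": 1.0},
}

class StrategyAI:
    def __init__(self):
        self.state = "peace"

    def update(self, was_declared_on: bool, army_ratio: float):
        # Forced transition: being declared on always flips us to war,
        # no matter what the weighted evaluation would have preferred.
        if was_declared_on or (self.state == "peace" and army_ratio > 2.0):
            self.state = "war"
        elif self.state == "war" and army_ratio < 0.5:
            self.state = "peace"

    def weight(self, action: str) -> float:
        return WEIGHTS[self.state][action]
```

The weighted evaluations still do the fine-grained work; the state machine just stops them from drifting into nonsense like evaluating expansion plans mid-invasion.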