MartinHarper
Warlord
I think AIs should value gifts of military units more highly if they are at war. Maybe 2x?
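To make the proposal concrete, here is a rough sketch (illustrative Python, not actual SDK or mod code; names like `unit_gift_value`, `is_at_war`, and the flat 2x constant are hypothetical) of how a wartime multiplier on gifted military units might be applied:

```python
# Illustrative sketch only: these names are hypothetical, not the real Civ4 API.
WARTIME_GIFT_MULTIPLIER = 2.0  # the proposed "maybe 2x" bonus

def unit_gift_value(recipient, unit, base_value):
    """Return how much the AI recipient values a gifted unit."""
    value = base_value
    if recipient.is_at_war() and unit.is_military():
        # An AI that is fighting a war should weight reinforcements more heavily.
        value *= WARTIME_GIFT_MULTIPLIER
    return int(value)
```

A refinement could scale the multiplier with how badly the war is going for the recipient, rather than using a flat 2x.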
@CivFanCCs:
Why would you only use your nukes when the AI would use theirs? That's real-world logic applied to this game, and it doesn't actually hold up. In this game, no one cares about the global effects of a nuclear war, since it hurts everyone equally. And once a war has already started, nuclear deterrence shouldn't have any effect any more. It's like saying that a weak nation shouldn't use its weapons against a strong nation that declared war on it because the strong nation might use its own weapons. Since the strong nation declared war, it's a pretty reasonable assumption that it will use its weapons, so the weak nation is better off using its own as effectively as possible. The carrier fleet sounds like a very good target, as it is a serious threat and contains a large number of units. Of course the weak nation will lose, but reading your description, that was a foregone conclusion.
I think you might be missing the point, though. You're talking about battlefield etiquette; I'm talking about risk vs. reward. You might have a point if these two countries had similar numbers of nukes, or at the very least if the AI player had enough to nuke all of the other player's cities, where a first-strike policy could potentially cripple the opposing enemy and destroy many or all of its ICBMs before they ever launch. But we're not talking about that. We're talking about an AI player that was hopelessly outgunned in nuclear arms, with no possibility of building more (since it had agreed to the ban).

The issue is: why would the AI be coded to treat the war as a gentleman's quarrel, with unwritten rules about which weapons may be used, rather than a gloves-off street fight where everything is allowed? A lot of humans will play the former, and telling the AI not to use its nukes ASAP is pretty much inviting the human enemy to destroy the AI's nukes before the AI finally decides it is a good idea to use them. And how would the AI tell apart a human who will not use nukes unless nuked first from one who will use them at t0 of the war (or worse, one who will nuke them into submission without using land troops, like I did once)? Lacking a good way of making the AI aware of the human's real intentions (and roleplay inclination, or lack of it), IMHO it is better to tell the AI that every fight is a rabid kale borroka, where it might overuse nukes sometimes, than to tell the AI to try to guess whether it is a good idea to hold back just because the human has not used nukes yet. That would be pretty much the same as telling the AI to wait for its demise, some of the time, with its nukes stored in the backyard.
Fair enough. I missed that part.

P.S. DP II, if you read it right, the poster you are referring to says that the Inca had already lost 2 cities to a 4-city Shaka acting as a proxy for the human in that game. That clearly shows the Inca were not up to the challenge in conventional terms, IMHO.
@DP II:
I guess I haven't expressed myself as clearly as I should have (a common occurrence when discussing how to program a machine, I guess). In fact I was talking not only about battlefield etiquette, but also about what you called risk vs. reward. You can't separate the two in this area, because the risk is directly proportional to what the enemy is willing to put in the field. My point was that, because in the current game the AI has no way of knowing the enemy's intentions regarding nuke usage (in mods where that is possible/mandatory, things would be different, of course), when the AI is at a disadvantage as in the example above, which is better: a) the AI assuming that the enemy will use their vast stockpile of nukes, or b) the AI assuming the enemy will not use nukes unless nuked? The issue is, if the human decides to use the nukes, certain destruction is assured no matter what; only if the human decides not to use nukes unless provoked (a thing the AI has no way to know, remember) does not nuking first even look like a competing option, in spite of being as bad an option as all the others when the nuclear balance is skewed against the AI. In both cases (nukes regardless, and nukes only if nuked), nuking first definitely brings short-term advantages, which might be enough to win the game (see this for a good example ....) or to stave off certain death for a while.
To sum up my point: the possible-vs-certain distinction you brought up can be raised, but it comes after the AI's decision (or lack of one) about what the enemy (human or AI) will do regarding nukes. Because the AI is ignorant of how the enemy will act in that regard, in at least half of the possibilities nuking first is always the better option (when the enemy will nuke no matter what), and in the other half (when the enemy will not nuke unless nuked, the classical prisoner's dilemma solution) it is not clear that not nuking first is better than nuking first in all cases. Seen from this perspective, IMHO, it is objectively better for the AI to assume that striking first is the better option than to cling to the hope that its enemy will be a nice guy and refuse, just because, to use a weapon it has.
(loosely based on the work of the paranoid strategists that planned MAD in RL)
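One way to read that argument is as a small decision table under uncertainty. The sketch below (illustrative Python; the payoff numbers are invented placeholders, not anything from the game or the mod) just encodes the structure of the claim: if the AI assigns any significant probability to the "enemy nukes regardless" case, striking first comes out ahead on expected value:

```python
# Illustrative decision sketch; payoff values are arbitrary placeholders.
# Enemy types the AI cannot distinguish:
#   "nukes_regardless" - will launch no matter what the AI does
#   "nukes_if_nuked"   - only retaliates
PAYOFFS = {
    ("strike_first", "nukes_regardless"): -5,   # bad, but you degrade their arsenal first
    ("hold_back",    "nukes_regardless"): -10,  # worst case: absorb a first strike
    ("strike_first", "nukes_if_nuked"):   -6,   # you trigger retaliation
    ("hold_back",    "nukes_if_nuked"):   -4,   # mutual restraint
}

def best_action(p_nukes_regardless):
    """Pick the action with the higher expected payoff, given the AI's guess
    at the probability that the enemy will nuke no matter what."""
    p = p_nukes_regardless
    expected = {
        action: p * PAYOFFS[(action, "nukes_regardless")]
                + (1 - p) * PAYOFFS[(action, "nukes_if_nuked")]
        for action in ("strike_first", "hold_back")
    }
    return max(expected, key=expected.get), expected
```

With these placeholder numbers, "strike_first" wins once the AI believes there is a bit under a 30% chance the enemy will nuke regardless; the thread's point is precisely that the AI has no reliable way to estimate that probability for a human opponent.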
While I don't think he really wants to do that (I think he's actually a much slicker customer than people in the West generally give him credit for), it's not really relevant to the current topic. Even if we assume he's a completely irrational character willing to engage in self-destructive behavior, it doesn't follow that this should determine AI behavior. This mod is all about making the AI smarter. Humans do stupid things, but we don't need to code stupid behavior into the AI. The AI does enough stupid things already.

On country (few nukes) vs. country (lots of nukes): consider Iran. They want nuclear weapons (obviously) because their leader/dictator is crazy: he would build one nuke, send it at Israel, and wait for the US and Russia to turn his country into radioactive slop. Maybe civs with fascist governments could do that.
I disagree. The most powerful nuke is the one not used. If Israel declares war on Iran and Iran responds by nuking Israel, Iran is then out of bargaining chips. That is not the same as Iran saying to Israel, "Stop this war or we will nuke you," and then carrying it out if Israel refuses. Again, I'm arguing this from the perspective of AI behavior. Iran might very well nuke as soon as Israel declares war, but in terms of what a rational actor would do, delivering an ultimatum would be more effective than simply nuking the enemy.

I think it's more likely that Iran wants a nuke for its deterrent effect against Israel. A pre-emptive nuclear strike on Israel would achieve none of Iran's objectives, stated or unstated, and would invite destruction from Israel's superior conventional and nuclear arsenal. However, if Israel declared war on a hypothetical nuclear-armed Iran, I would expect Iran to use those nukes.
If the AI's response to a declaration of war is to nuke the person who declared war on it, then this means that players who don't want to invite that kind of devastation won't declare war on it. This maximises the deterrence effect of its limited nuclear arsenal. Now, some players will declare war on a nuclear power and then complain that they got nuked. However, this is a player failure rather than an AI failure.
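As an illustration of that deterrence rule (again, hypothetical Python with made-up names, not the mod's actual code), the policy amounts to a simple trigger on the declaration-of-war event:

```python
# Hypothetical sketch of a "retaliate on declaration of war" rule.
def on_war_declared(ai_player, aggressor):
    """If someone declares war on a nuke-armed AI, answer with its arsenal.

    The deterrent value comes from players knowing this rule exists:
    anyone unwilling to absorb a nuclear response simply won't declare war.
    """
    if ai_player.nuke_count() > 0:
        for target in aggressor.high_value_targets():  # e.g. big stacks, core cities
            if ai_player.nuke_count() == 0:
                break
            ai_player.launch_nuke(target)
```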
The question of whether or not the human player will initiate a nuclear attack is largely irrelevant...
I have used nukes exactly once...