Overkill - why the AI is so bad

Originally posted by Cartouche Bee
So where does it say that a dynamic system can't be improved with a single line of code?

I never said, nor implied, that it couldn't be improved by a "single line of code." Indeed, because complex systems are sensitive to minor changes, the entire game can be completely changed by such a "single line of code," for better or for worse. The problem is which line of code, what change, and what the unintended consequences are going to be. Sometimes you may not even know what the unintended consequences are, as they may only become apparent in rare situations that never turn up even in extensive testing.
 
Originally posted by Cartouche Bee
It is really a question of whether you even want to start the journey at all. If you decide to go forward, you take a step; if you decide it will just wear out your shoes, you go for an ice cream. :)

That's a very good point. The problem with even robust complex systems is that it is hard to institute radical changes without the system becoming unstable. Consider the U.S. Constitutional system. With minor tweaking, the system has evolved considerably over two hundred years, but it still shows many signs of age; the Electoral College is one such anachronism. Even so, I support the continuation of the Electoral College, simply because radical changes may lead to a chaotic situation in which fundamental liberties I cherish could be lost. It is the same reason that many freedom-loving Brits support the monarchy: social stability guarantees a continuation of liberty. On the other hand, at some point either system will become decrepit and unable to institute the radical changes that may be required at some future date.

It is really a question of whether you even want to start the journey at all.

To evolve or to revolt, that is the question. Or whether it is better to just go for an ice cream. ;)
 
Zachriel,

I write and modify single lines of code to enhance and improve program/algorithm functionality. Many times I have been told that such and such is impossible, or "We were told that was not possible"; if you approach a problem from that position, it can indeed seem insurmountable. I think this discussion comes down to point of view, and mine is that single lines of code do contribute to the overall performance and capabilities of any system.

We commonly refer to programs that come apart at the seams as "Lego projects"; however, when the need is great enough, you just have to get your hands dirty. Touching the octopus is scary, but bravery under fire wins the hill!

:)

CB
 
Originally posted by Cartouche Bee
Zachriel,

I write and modify single lines of code to enhance and improve program/algorithm functionality.

There are millions of such changes that I dream up all the time. Most are not "practical." Some "should" be easy. Impractical solutions include those that require constant calls to a recursive routine, such as pathfinding. But even simple changes often require extensive testing. (Testing = Manpower = Time = Money.)
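To give a feel for why: here is a minimal sketch (Python, with a plain grid map as an assumed representation; none of this is actual Civ3 code) of the sort of pathfinding routine in question.

[code]
from collections import deque

def bfs_path(grid, start, goal):
    """Breadth-first pathfinding on a grid of 0 (open) / 1 (blocked) tiles.
    Returns a list of (x, y) tiles from start to goal, or None."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            path = []
            while current is not None:          # walk back to the start
                path.append(current)
                current = came_from[current]
            return path[::-1]
        x, y = current
        for step in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = step
            if (0 <= nx < len(grid) and 0 <= ny < len(grid[0])
                    and grid[nx][ny] == 0 and step not in came_from):
                came_from[step] = current
                frontier.append(step)
    return None

# Each call can visit every tile on the map. Invoke it for hundreds of
# units, every single turn, and the cost explodes.
[/code]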

The flip side is that they are working on the project. There is an expansion due later this year. Then Civ4, then the world!
 
Originally posted by Zachriel
I just visited the AI forum. They've convinced themselves that they could win most games, if only they could just circumvent the human's use of the Way-Back Machine. Fat chance, I say!


What if the AI could save and restore games?

I don't think it would do the AI any good to restore a saved game unless there were a random element in its decision process. For example, if it declared war and got its a** kicked by human and/or AI civs, it could restore a saved game from just before it declared war, but unless "preserve random seed" was set to OFF, it would simply declare war again.

Some AI programs have a (somewhat) random decision process, something like this:

A position evaluation of 0.000 means that nobody is winning (according to the AI's evaluation function, which may be right or wrong). If the AI is winning, the score will be above zero; if it is losing, the score will be below zero. Let's say that, in a particular position, there is a crucial decision. The AI's score at this point is -0.087, which is a slight disadvantage for the AI, but nothing serious. There are many, many possible things the AI could do at this point, but let's limit it to the decision to declare war or not. Let's say there are 3 other civs at this point: 2 AI civs and one human. The AI applies its evaluation function (which takes into consideration military strength, war weariness, how much the cities to be taken are worth, how much corruption they would have, culture flipping, how much it costs to take the cities, etc.), and it determines that if it declares war against:

  • the human, its score will become -0.255 (worse than before)
  • AI civ#2, its score will become +0.136 (better than before)
  • AI civ#3, its score will become +0.139 (better than before)
  • nobody, its score remains the same.

There could be an option for the AI to apply a random number to choose among the moves that would produce the best scores, when those scores are nearly equal. In this case, it could "flip a coin" to decide whether to declare war against civ#2 or civ#3, since those are the best scores and they are nearly equal.
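A minimal sketch of that coin flip, in Python. The epsilon threshold and the pick_war_target name are my own illustrations, not anything from the actual game:

[code]
import random

def pick_war_target(scores, epsilon=0.01):
    """Choose randomly among options whose evaluation is within epsilon
    of the best score; deterministic when one option clearly wins."""
    best = max(scores.values())
    near_best = [option for option, s in scores.items() if best - s <= epsilon]
    return random.choice(near_best)

# The scores from the example above (higher is better for the AI):
scores = {"human": -0.255, "civ2": +0.136, "civ3": +0.139, "nobody": -0.087}
target = pick_war_target(scores)   # flips a coin between civ2 and civ3
[/code]

Widening epsilon makes the AI more adventurous; setting it to zero restores full predictability.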

If the AI introduced more randomness into its decision making, it might be more dangerous and produce more interesting games.

Even if the human restored a saved game after losing a war or city, there would be no certainty that the AI would make the same decision. After the human restored the game, the human might win the war, or the AI might stomp the human worse than before!

Of course, this random decision making would be a configuration option, because some players like predictability better than uncertainty! :)
 
In response to some points made in the last few posts:

Playing Civ3 creates a dynamic system (without being played, it isn't even a linear system; it's static). If you don't understand this, then you don't understand what dynamic means in this context. The easiest (i.e., non-rigorous) way I know of to define a dynamic system is one where the input changes the output, which then changes the input. A practical way to spot such systems is that tiny changes result in wildly varying and unpredictable consequences. Civ3 has both of these features (from now on, when I say 'Civ3' it's implied that I'm talking about a game of Civ3, not the code sitting there on the drive).

I can give some specific examples of games where this shows, such as a game I had a few days back. I had the worst starting position I'd ever had: a -huge- jungle to the south, southwest, and west; vast plains with no river for irrigation to the northwest, north, and northeast. The only viable spot to expand into was to the southeast, and I didn't find that till I was on my 5th city or so. My first two cities couldn't even produce settlers; they couldn't reach 3 population. This game should have been my worst ever, but it turned out to be my highest-scoring game to date, because it forced me into actions (early military attacks) that snowballed into an overwhelming victory. A counter-example is a game where my start was so strong that I overreached, got too many AIs mad at me at once, and ended up crushed. Inability to predict future states is a hallmark of complex dynamic systems.

I want to insert a quote from Zachriel here because he said it perfectly:

"I never said, nor implied, that it couldn't be improved by a "single line of code." Indeed, because complex systems are sensitive to minor changes, the entire game can be completely changed by such a "single line of code," for the better or for the worse. Problem is what line of code, what change, and what are the unintended consequences going to be? Sometimes, you may not even know what the unintended consequences might be, as they may only be apparent in very limited situations that are not determined even after extensive testing."

This is probably the most critical thing to understand about dynamic systems: you cannot predict the results of the changes you make, no matter how simple the changes. I've seen this in practical ways when designing rules and game code that have to deal with humans. People are complex and dynamic in and of themselves; they are a moving target. If you design your AI around what they are doing today, then they'll do something different tomorrow in response to it. Firaxis could release a brand-new AI every 30 days, and 30 days later that AI would be known to be full of loopholes and aberrant behaviour, and exploitable to the bone. At some point they have to say "This is good enough" and move on.

You don't see this often in retail games because most companies stop after a couple of patches, but in Muds (multi-user dungeons, text-based online games) you often see a Mud that has lasted for years suddenly go into a death spiral because a well-intentioned and supposedly 'better' rule change or piece of code is introduced that causes the system to spin out of control. A game -can- be patched to death.

Also note that even the simplest system becomes complex in a complex environment. Take a simple and silly example: to everyone you meet today, say only one word, one time. Pick any word, say "Blue". One word, one time; say nothing to that person after that, and when a new person comes along, repeat. Now, this is a -very- simple and linear thing; you could make a computer do it with just a few lines of code. But try it in real life and you'll see how complex and unpredictable the results are. I bring this up because you mentioned making coding changes, and I think you are seeing the linear properties of the code (which is exactly right, it is linear) but not the fact that it executes in a dynamic environment.
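To make the point concrete, here is the few-lines-of-code version (Python; people_you_meet_today and say are stand-ins for the real world, which is precisely the part you can't code):

[code]
def people_you_meet_today():
    # Stand-in for the real world -- the one part nobody can code.
    return ["Alice", "Bob", "Alice", "Carol"]

def say(person, word):
    print(word, "->", person)

greeted = set()                      # remember who already got their word
for person in people_you_meet_today():
    if person not in greeted:
        say(person, "Blue")          # one word, one time
        greeted.add(person)

# The loop is trivially linear; all the complexity lives in the
# environment it executes in, not in the code.
[/code]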

And at this point I feel like I'm -really- pushing the limit for moderator intervention, so I'll step off my 'Bringing complexity theory to the masses' soapbox. It's sort of a thing with me, sorry if I took it too far. :D
 
OK, I guess I have to concede that stopping an 80-unit SOD from chasing a single warrior around will probably break the dynamics of this system, and we have no idea what the consequences would be. :cry:
 
Originally posted by Vorlin
And at this point I feel like I'm -really- pushing the limit for moderator intervention, so I'll step off my 'Bringing complexity theory to the masses' soapbox. It's sort of a thing with me, sorry if I took it too far. :D

I think that you are reasonably on topic. Lt. Killer M. posted that the AI has problems with the size of its stacks. In his example, it was overkill; in other cases, they attack with too few. Several suggestions were made as to how to rectify the problem, and all were probably exploitable. The problem generalizes in non-linear dynamics: ultimately, there is no best strategy, only approximations. And as human history demonstrates profusely, a perfectly reasonable strategy in one situation may be folly in another. If there had been a barbarian uprising, everyone would have thought the AI had advance knowledge (cheated).
 
I think, given the nature of what is possible and how it is done, that testing is currently the limiting factor. What would really help is if Firaxis had the ability to just try a bunch of different "one line code changes" and then have a good idea of which ones to keep and which ones to ditch before the patch went out. Presumably, several machines could have the AI play against itself, with the results systematically recorded; changes that resulted in clear improvements could be kept. Even then, some unintended consequences would slip through that only occur with a human playing. (And don't forget the consequences for multiplayer, both with and without AI civs participating!)
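Something like this hypothetical harness is what I have in mind (Python sketch; run_selfplay_game is a stand-in for whatever Firaxis' test rig would actually invoke):

[code]
import random

def run_selfplay_game(build, seed):
    # Stand-in: the real rig would launch a full AI-vs-AI game using
    # the given build and report whether the patched civ won.
    random.seed(seed)
    return random.random() < 0.5

def compare_patches(baseline, candidate, games=200):
    """Play AI-vs-AI games with each build and report win rates."""
    results = {}
    for build in (baseline, candidate):
        wins = sum(run_selfplay_game(build, seed=g) for g in range(games))
        results[build] = wins / games
    return results

# e.g. compare_patches("v1.29", "v1.29+one-line-stack-fix")
# A clear self-play improvement is necessary but not sufficient: it says
# nothing about how the change behaves once a human is in the game.
[/code]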

No, what would really be required is a certain amount of customization of the AI by the players, maybe even robust scripting. Look at the AI routines for NPCs in a game like Neverwinter Nights, or its Baldur's Gate precursors. The original AI for even a simple character was horrible, but players improved it, and each later game incorporated what was learned. Let the players provide the intensive testing. Known-good AI improvements would get incorporated into later games, and unintended consequences would be found; since some unintended consequences are even good, this could work. The AI would still be exploited by those who chose to do so (especially since people would see exactly how it worked), but the net playability would increase.
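The sort of plug-in point I mean might look like this (a Python sketch in the spirit of Neverwinter Nights' swappable NPC scripts; every name here is hypothetical):

[code]
def default_should_retreat(unit, enemies):
    # The shipped heuristic: flee when hurt and outnumbered.
    return unit.hp < unit.max_hp // 2 and len(enemies) > 1

ai_hooks = {"should_retreat": default_should_retreat}

def register_hook(name, fn):
    """Players drop in their own decision function; the community
    shares, tests, and refines the good ones."""
    ai_hooks[name] = fn

# A player's "improved" version, field-tested by thousands of games:
register_hook("should_retreat",
              lambda unit, enemies: unit.hp * 3 < sum(e.attack for e in enemies))
[/code]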
 
Stuff like that is hard to code because it involves value: when is it worth it to do something? This is an area where humans are infinitely superior to computer code (at least for now). You could literally spend months just writing the AI code that determines whether a unit should retreat or advance (try it with pseudo-code, using Civ3 rules as a backdrop; you'll find that it is a daunting task).
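To see just how daunting, here is the top-level skeleton alone, sketched in Python with every factor stubbed out (all the names and numbers are placeholders, not real Civ3 rules):

[code]
def expected_combat_odds(unit, world):
    return 0.5   # stub: attack/defence values, terrain, fortification, HP...

def value_of_target(unit, world):
    return 1.0   # stub: city worth, corruption, culture-flip risk...

def cost_of_losing(unit, world):
    return 0.3   # stub: shields invested, upkeep, tempo, exposed flanks...

def survival_and_future_value(unit, world):
    return 0.4   # stub: odds of escaping, plus the unit's worth later

def retreat_or_advance(unit, world):
    """Top-level skeleton only; each stub above hides months of work."""
    advance = (expected_combat_odds(unit, world) * value_of_target(unit, world)
               - cost_of_losing(unit, world))
    retreat = survival_and_future_value(unit, world)
    return "advance" if advance > retreat else "retreat"
[/code]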

Remember, the flaw you mention is a flaw over time. In other words, it's not a flaw for combat units to try to catch and kill enemy units; it's only a flaw if they can't do so. But how does the AI know when it can or can't? How does it know how the enemy unit will move? It can't simply say, "If the enemy is the same speed as my pursuing units, then don't bother trying to attack." You'd have to write performance-tracking code that kept a database over time and then analyzed the data to determine whether the current course of action is producing results (and boy oh boy, isn't there a lot of coding involved in that little sentence).

And then you -still- have exploitable code. Say you complete the gigantic task of writing what I mentioned above: you now have code that allows the AI to break off pursuit after (say) 6 unsuccessful attempts to close and attack. Well, word gets out in the Civ3 community, and then everyone just runs for 6 turns; pursuit stops, and the human player now has an 'immune' unit. So to counteract -this- you have to write code that allows the AI to know when to reacquire a unit as a target after it has decided it isn't a valid one. But then the human player runs again, so the AI disengages after 6 turns, and once again we have an infinite loop like the one that started this discussion, except this loop is incredibly more complex to code.
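For flavor, a sketch of just the break-off half (Python; the class and the 6-attempt limit are my illustrations, and the reacquisition question stays exactly as open as it is above):

[code]
class PursuitTracker:
    """Counts unsuccessful pursuit attempts per (pursuer, target) pair
    and breaks off after a limit -- which a human can exploit simply by
    running for that many turns."""

    def __init__(self, max_attempts=6):
        self.max_attempts = max_attempts
        self.attempts = {}

    def record_failed_attempt(self, pursuer_id, target_id):
        key = (pursuer_id, target_id)
        self.attempts[key] = self.attempts.get(key, 0) + 1

    def should_pursue(self, pursuer_id, target_id):
        return self.attempts.get((pursuer_id, target_id), 0) < self.max_attempts

    def reacquire(self, pursuer_id, target_id):
        # Deciding *when* to call this without recreating the original
        # infinite loop is the genuinely hard part.
        self.attempts.pop((pursuer_id, target_id), None)
[/code]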

So even simple things aren't simple when complexity is involved. ;)
 
Originally posted by Lt. 'Killer' M.
I signed a ROP with the Zulu. Since then, I watch over 80 units move around my territory, hunting down the 1 or 2 barbarian warriors that roam the tundra to my south.
The AI has absolutely NO CLUE how many forces are sufficient to do a given job. It will throw everything at one unit, then move them back the next turn - even over several turns.

STUPID!!!!!!!!!!

Is there anyone out there who has any ideas how this might be fixed easily????

Back to the original post. :)

The only real suggestion, by MeestaDude, has been to create multiple task forces, which should be doable and is probably already in the code in some fashion. I've been on the receiving end of some pretty big AI task forces. But sometimes it is better to combine one's forces. How many task forces are reasonable? How big? Should the AI negotiate and bribe its way out of trouble? Should the task forces be combined for the "big battle"?

There is no doubt they can do better (practically a theorem of complexity theory).

If they could just teach the AI to use bombard, even on defense, it would be a big step.
 
Originally posted by Cartouche Bee
OK, I guess I have to concede that stopping an 80-unit SOD from chasing a single warrior around will probably break the dynamics of this system, and we have no idea what the consequences would be. :cry:

The eighty-unit stack-of-death was probably an unintended result of the coding for the "task force" algorithm. ;) Fixing that one problem might be easy, but the general problem remains. I think Lt. Killer M. was making the more general statement from the particulars of that situation. Let me quote:

STUPID!!!!!!!!!!
:lol:


PS. Don't cry. I just had to pay Tassadar tribute in the amount of 13 gold and a world map (another thread).
 
Ah, in my last reply I misunderstood the reference. So to make my reply coherent, let me state that I was responding to the problem of the AI chasing a human-controlled unit that was deliberately fleeing in circles in order to keep the AI units tied up until help arrived. I lost the context of the thread there for a bit. *blush*
 