Well, been away from "my" thread just a bit, and it's moving very fast... Had a lot of catching up to do. Here are a few things I picked up on the way. Sorry if some of them now seem like yesterday's story (not that this whole thread has been overly hampered by consistency, but hey, at least it's fun):
(Quote:
Originally Posted by NapoléonPremier
One thing we should probably never do with science is say: "that's never gonna happen; it can't; it won't."
If you had told folks in 1900 that less than 70 years later men would walk on the moon, they would probably have laughed at you. Or at the beginning of World War II that about five years later you would be able to destroy a whole city with just one "atomic" bomb, same thing. And so on.)
dbergan said:
This comment needs a little perspective. If in the 1600s you had said (as many scientists did) that you would never get gold from lead via a chemical process you would have been right. Later on we found out that both gold and lead were elements. Science isn't magic. It can't do everything.
But you can never know in advance. Nothing absolutely warrants that, some day, for instance at the quantum level, it won't be possible to change lead into gold or into anything else (even if the energy required to do so doesn't make it worth it). Saying "it will never happen" is the worst attitude you can have in science (and probably in life in general, for that matter).
dbergan said:
I think there is one thing that people are overlooking... the amount of time it takes the AI to make a move. Currently it is programmed to do a civ's turn in under a second (most of the AI's time is updating the graphics on our screen... when you don't see a civ, their turn is over almost immediately). Computers have the processing power to do more, but do players want to wait? Are we willing to wait 10+ minutes for a good move like what a chess program does on its top settings?
That is very true. Given the number of parameters to deal with and the extremely fast response time, the Civ AI can actually be viewed as some sort of genius in that regard.
Breunor said:
As an aside, (this maybe belongs on a different thread, sorry!) people ask me about chess vs. CIV. It's hard to answer -- I don't know myself. To me they are totally different experiences. Now that I'm in my 50s the chess scene is a little more recreational. How do you view this?
Chess is a noble game. But it's boring. Because, basically, it's maths. And on top of that, now that most chess AIs can kick your ass to the ground just like that and spit in your face, it's also depressing.
dbergan said:
Is the SDK going to allow manipulating the AI? I think that would make things very interesting... could even get some AI vs AI tournaments going to see who makes the best script.
I think what will be REALLY REALLY interesting when the SDK comes out, in regard to the subject of my OP, is to see (if the software allows it) whether some really motivated people will be able to come up with a significantly different way of running the AI, or whether it will "just" be tweaks of the existing AI.
Pawel said:
The big decisions are how to interact with the others. What does it mean that I see forces on my border? Should I attack someone who is overexpanding and probably vulnerable? The AI rarely does this. It is bad at assessing who is the greatest threat, and tends to deploy forces evenly, not taking into account that it might be better to have more either at a central location from which it could reinforce a sector under threat using interior lines, or to have more units where the threat is largest to begin with. In forming relations it rarely shows a strategy. To simplify a little, it just dislikes its neighbors.
Well, see, after all the AI is very human-like...
(Quote: Originally Posted by warpus
That's fine and dandy, but how exactly do you quantify how successful or not a decision was?)
Zombie69 said:
Easy, pit different AIs against one another, and then from the pool of different AIs you have, use a genetic algorithm so that AIs that won more games get to "mate" more and have a bigger percentage of their "genes" (probably the weights of the nodes in the neural net) represented in the next generation. Do this over a few hundred generations and you can get something really good.
That's how I did it anyway when faced with a similar task in an AI course that I took. It works very well too.
Well, that can't work, because the theory of evolution is a hoax. You would have to CREATE a perfect AI. In seven (or six) days preferably. Just kidding. Sorry.
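(More seriously, for anyone curious what Zombie69 is describing: here's a very rough Python sketch of that kind of loop. Everything in it is made up for illustration - the toy play_game function, the feature vectors, all the numbers - it has nothing to do with the actual Civ code, it just shows the "winners mate more" idea.)
[code]
import random

# Toy stand-in for "play one game": each AI is just a vector of weights,
# and whoever gives the higher (noisy) score to a random feature vector "wins".
# A real setup would run an actual game between the two AIs here.
def play_game(weights_a, weights_b):
    features = [random.random() for _ in range(len(weights_a))]
    score_a = sum(w * f for w, f in zip(weights_a, features)) + random.gauss(0, 0.1)
    score_b = sum(w * f for w, f in zip(weights_b, features)) + random.gauss(0, 0.1)
    return 0 if score_a >= score_b else 1  # index of the winner

def round_robin_wins(population):
    """Play every candidate against every other one and count wins."""
    wins = [0] * len(population)
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            winner = play_game(population[i], population[j])
            wins[i if winner == 0 else j] += 1
    return wins

def breed(parent_a, parent_b, mutation_rate=0.05):
    """Uniform crossover of two weight vectors, plus small random mutations."""
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    return [w + random.gauss(0, 0.2) if random.random() < mutation_rate else w
            for w in child]

def evolve(n_weights=8, pop_size=20, generations=300):
    population = [[random.uniform(-1, 1) for _ in range(n_weights)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        wins = round_robin_wins(population)
        # AIs that won more games get picked as parents more often.
        population = [breed(*random.choices(population,
                                            weights=[w + 1 for w in wins], k=2))
                      for _ in range(pop_size)]
    return population

print(evolve()[0])  # weights of one evolved "AI"
[/code]
With a real game in place of play_game, the only thing that changes is how long each generation takes - which is exactly why this kind of training happens offline, not while you're waiting for your turn.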
warpus said:
My line of reasoning goes as follows: Humans are machines. What would prevent other machines, even if they're designed by humans, from being sentient as well? Are you saying humans are special in some regard that can't be duplicated?
Well, yeah, they're different. Everything that is alive is different. We're not just "machines", or not only. "Life" is something very very special (and btw note that the leap from inanimate matter to living matter is still officially a scientific mystery). Even if "life" is something that cannot be created (it wasn't created in man or animals either), I think it very likely that someday we'll be able to create a machine that will perfectly emulate a sentient and intelligent being. So it won't really be sentient, but for all practical purposes it will be exactly as if it were.
atreas said:
If you want to look for a better "game example" to see what AI can and can't do, you'd do better to look at other kinds of games. One more suitable example is backgammon, where you also have the "stochastic nature" of the dice. Another example could be bridge, where you have the "unknown world" of hidden information. But definitely chess isn't the appropriate pattern for civ (it's too different in two crucial aspects). If you look at AI achievements in such games, you will find examples like the remark of the World Champion in backgammon after a match against a program: he said that "its STRATEGIC evaluation is far better than mine" (note that he is not talking about the AI's CALCULATING ability). That means there are statistical methods available that could create an AI that plays extremely strongly in civ - the main point is that the AI must first use some method of statistical learning.
Thank you. That's an important part of the answer to my original question. So it would be possible, with the appropriate investment, to get a strategically smarter AI. That's cool.
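(For what it's worth, the backgammon program atreas is presumably alluding to - Tesauro's TD-Gammon - learned its evaluation through temporal-difference learning, which is exactly the kind of "statistical learning" he means. TD-Gammon used a neural network; to keep things short, the purely illustrative Python sketch below uses a simple linear evaluation instead. The feature vectors, the fake games and every name in it are made up - it's only meant to show the shape of the method: nudge the evaluation of each position toward what actually happened next.)
[code]
import random

def evaluate(weights, features):
    """Linear evaluation: a weighted sum of the position's features
    (in Civ terms, features could be city count, army strength, tech lead...)."""
    return sum(w * f for w, f in zip(weights, features))

def td0_update(weights, features, next_features, reward, terminal,
               alpha=0.01, gamma=0.99):
    """One TD(0) step: move this position's value toward
    reward + discounted value of the next position."""
    next_value = 0.0 if terminal else evaluate(weights, next_features)
    error = reward + gamma * next_value - evaluate(weights, features)
    return [w + alpha * error * f for w, f in zip(weights, features)]

def train(n_features=5, n_games=1000, game_length=30):
    weights = [0.0] * n_features
    for _ in range(n_games):
        # Fake "game": a list of random feature vectors plus a final result.
        # A real trainer would generate these positions by self-play.
        positions = [[random.random() for _ in range(n_features)]
                     for _ in range(game_length)]
        outcome = random.choice([1.0, -1.0])  # +1 for a win, -1 for a loss
        for t in range(game_length - 1):
            terminal = (t == game_length - 2)
            reward = outcome if terminal else 0.0
            weights = td0_update(weights, positions[t], positions[t + 1],
                                 reward, terminal)
    return weights

print(train())  # the learned evaluation weights
[/code]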
Zombie69 said:
There's a lot of stuff in quantum mechanics that I don't agree with, and that I truly believe will be disproven in due time. You see, I believe in absolute determinism.
Well, "absolute determinism" was a fad of the 19th century, when modern science was really beginning to explode. Since all that was known was the molecular level, French mathematician Laplace formulated it this way : if you considered a spirit (which he called Daimon), who could know exactly the position and movement of every single particle in the universe, then he would know the exact past, present and future of the whole universe. Of course, since then, science has complexified a lot more, and precisely quantum mechanics and the like have hinted (and have just begun to do so) that things are much more subtle than that - as some subdiscussions in that thread also illustrate. And just "not agreeing with" quantum mechanics is a bit insufficient. It's a body of scientific work. You have to give a more convincing body of interpretations of the facts it accounts for if you want to prove it wrong.
Buckets said:
So what if the only AI smart enough to challenge a human doesn't want to play?
That's funny...
d80tb7 said:
Sorry about all the physics stuff, but I'm good at physics and rubbish at CIV, so it's one of the few times I get to post things I know about on this forum.
Don't apologize... In real life, you're probably the kind of guy who devises the nukes and so on with which Montezuma, Caesar and the like try to blast each other's heads off...
whb said:
Biologically, most of the processes going on in the brain do not look very much like logic at all. Which is one of the reasons why students struggle so much trying to learn how to do formal mathematical proofs.
Humans use very arguable and informal methods in their decision making, often based much more on experience than on axioms and inference. Watch an episode of Deal Or No Deal, and you will see people changing their tactics for picking the number of a box based on whether choosing birthdays gave a good result or a bad one last time.
Actually, using pure logic would be a threat to survival in real life. On a daily basis, our brain mostly uses what is called the "representativeness heuristic", based mostly, like you said, on experience, which allows fast and, most of the time, efficient decisions. If you try to make a purely logical decision when the lion comes near you, you'll very likely be dead... (and actually, maybe some people did try, but they died, so their genes didn't get passed on, so you could say that "purely logical" brains have been "selected out" by evolution...)
dbergan said:
Actually, Taoism suggests exactly what I have been talking about... that there is a "way" (standard) of the universe that we are supposed to walk in and not step outside of. That it is only by the Tao that things can be understood.
Thanks to Civ 4 we all know about Taoism.
Yeah, I'm sure that Lao Tseu would be very proud to know that all of his work to unveil the mysteries of the universe has ended up as a gimmick in a video game...
atreas said:
Wait a bit, in a few moments UFO will enter the topic and explain everything to us - especially how the speed of light affects the free will that affects the artificial intelligence that affects Kasparov that affects CIV 4.
Dusty Monkey said:
I care because it is the path of least resistance.
Well, it doesn't seem logical. The path of least resistance would seem to be, precisely, NOT to care. It's much easier.
Dusty Monkey said:
Having a justice system does not mean that society believes in free will. Society as a whole does believe in the concept, but that is not what it means. Another logical fallacy of some sort here. One does not lead to the other.
Well, maybe not ALL justice systems, but in our societies it's linked. We try people as responsible individuals who are capable of choosing between right and wrong. That's why (theoretically) we don't try the mentally impaired.
Dusty Monkey said:
People believe that they have been abducted by aliens too. A lot more than one person believes it. Is that observation also credible?
Have you noticed how these "abductions" happen almost exclusively to Americans? Strange. Guess the ETs don't really care for the rest of us.
