Could Deep Blue play a smarter AI?

Status
Not open for further replies.
Dusty Monkey said:
Poor logic.

Since when does the inability of humans (or any other physical life-forms) to determine something (be it impossible or unreachable) make it non-deterministic??

One does not lead to the other.

I hope you guys don't make me break out my big book of logical fallacies.

You'd better break out a big book of quantum physics, since you obviously don't know anything about it.
 
warpus said:
You'd better break out a big book of quantum physics, since you obviously don't know anything about it.

Has the many-worlds interpretation of quantum theory been disproven while I wasn't looking?

sigh... armchair scientists...

My journey into quantum theory is based on a desire to understand how quantum computing will work - are you under the opinion that quantum computers will be non-deterministic? :rolleyes:
 
Dusty Monkey said:
umm... huh?

Step by step, explain your thinking. Declaring something doesn't make it true. This seems to be a big problem in this thread.

People here declare that X leads to Y but do not show why that is the case.

I am fairly certain that you cannot show why X leads to Y in this case, just as the guy you admire cannot show why having a justice system leads to free will existing.

Um, are you being sarcastic? It's fairly obvious, at least I thought it was.

Deterministic = no choice, all is predestined
Free Will = choice, not everything is predestined

If you 'choose' the path of least resistance, that means you believe in free will. If there were no free will, then there's no choice to be made, since choice is impossible. The illusion of choice may be there, but since it's all pre-determined, written in stone as it were, there is no real choice that can be made.
If you are talking about something else, let us know. This is a no-brainer, eh? Belief in predestination is listed as a definition of determinism in my dictionary.
Also, there's no way to "prove" either side; it's simply a matter of what you believe in, so quit wasting time asking for proof of the impossible.
I believe in free will because the alternative, predestination, sucks.
 
5cats said:
Um, are you being sarcastic? It's fairly obvious, at least I thought it was.

Deterministic = no choice, all is predestined
Free Will = choice, not everything is predestined

That's probably your problem right there. You believe 'choice' implies free will. It doesn't. For instance, computers make choices all the time.

Unless of course your contention is that computers have free will?

5cats said:
If you are talking about something else, let us know. This is a no-brainer, eh? Belief in predestination is listed as a definition of determinism in my dictionary.

Perhaps you should publish your dictionary and set the rest of the world straight?

5cats said:
Also, there's no way to "prove" either side; it's simply a matter of what you believe in, so quit wasting time asking for proof of the impossible.

Facts not in evidence. Has it been shown that there is no way to prove either side?

And seriously, I never asked you to prove anything.

I simply asked you to show your logic since your proposition seems illogical from the start.

If you wish to rely on X implying Y, you need to show why X implies Y unless it has been established that X implying Y is an axiom.

It certainly isn't an established axiom in this case. If it were, there wouldn't be a big debate on the subject in those philosophical circles that some people here hold so dearly...

5cats said:
I believe in free will because the alternative, predestination, sucks.

gee thats a great reason :rolleyes:

What is this compulsion you have with choosing a side? Why are you compelled to "believe" one way or the other... is the compulsion to pick a side part of your 'free will'?

It is ok to know that you do not know. A very famous and wise man once understood that.
 
Dusty Monkey said:
It is ok to know that you do not know. A very famous and wise man once understood that.
In fact, he understood that the only thing he knew was that he knew nothing - and for this reason he was acclaimed the wisest of all men (and a bit later the others killed him).

I probably don't have your knowledge (nor your feel for the English words), but I need an explanation: why does determinism mean predestination? My feeling is that determinism speaks about what will happen in the future according to current and past actions, while by predestination I understand something else. The question is whether the choice of current actions is free or not, not whether their result is deterministic or not.

Where do I get it wrong?
 
atreas said:
why does determinism mean predestination?

It doesn't.

Determinism does not imply predestination. Predestination is a religious belief commonly found in Christianity, Judaism, and so on...

Under a predestination viewpoint, no matter what path you take you will end up in the same place. This would apply regardless of the state of your "free will".
 
OK. And since determinism is unrelated to predestination (and so unrelated to free will), why do you fight to prove (either way) free will through determinism?
 
atreas said:
OK. And since determinism is unrelated to predestination (and so unrelated to free will), why do you fight to prove (either way) free will through determinism?

I don't...

I fight against irrational logic.
 
Ha! Dusty you are so busted!

http://www.m-w.com/dictionary/determinism

Now, please explain, in vivid detail, why you "believe" predestination is a "christian" idea. Where did you pull that idea out of? lol!
It has nothing to do with Christianity, it's much older than that.
I grow tired of your "attempt" at logical debate Dusty. You're just nay-saying anything other people say, which is just a cheap con. So either step up and explain YOUR beliefs, or be quiet.
 
Dusty Monkey said:
Under a predestination viewpoint, no matter what path you take you will end up in the same place. This would apply regardless of the state of your "free will"

Exactly what we've been saying.
So explain your definition of "determinism" or admit defeat.
 
5cats said:
Ha! Dusty you are so busted!

http://www.m-w.com/dictionary/determinism

Now, please explain, in vivid detail, why you "believe" predestination is a "christian" idea. Where did you pull that idea out of? lol!
It has nothing to do with Christianity, it's much older than that.
I grow tired of your "attempt" at logical debate Dusty. You're just nay-saying anything other people say, which is just a cheap con. So either step up and explain YOUR beliefs, or be quiet.

Oh brother... now I am a target to be "busted"... am I a heretic?

Stop frothing out the mouth.


Would you like an encyclopedia reference instead? I have two online ones..

http://www.encyclopedia.com/html/p/predesti.asp
http://en.wikipedia.org/wiki/Predestination

"busted" indeed.

Or you could open the Encyclopædia Britannica on the subject. You will read the same information. Predestination is a religious concept.

A dictionary is not a science nor a philosophy reference. The fact that you thought a dictionary was a science reference proves my point. You are not equipped to debate this subject because you don't even know where to look for knowledge.

This combined with your poor use of logic is very telling.

I forget.. were you the guy who had his own dictionary to publish??? :rolleyes:

You are right that somebody needs to be quiet...

..I will excuse your frothing mouth.
 
A dictionary is a very important reference for philosophy. How can you convey precise meanings without having precise definitions of the words that combine to convey your ideas?
The more in-depth and serious a debate becomes, the more important it is to define words carefully. I haven't done any competitive debating, but I believe that it is common to define the title of the debate before advancing any arguments. It's vital to know what you're arguing before you argue.
 
atreas said:
Don't isolate parts of texts, because this tends to distort the meaning. It was, I think, very clear that this text was meant as an example of the "logical catches" that may result from the attempt to explore God with logic. To give you just one "logical" answer, this (very old) text was meant to make humans redefine their ideas about "perfection". Also, you say that "everything" is equal to "the universe" - I didn't; for example, I could say "this universe and all others that are now, and all others that can be, plus the part that will never be revealed".

Or, to say it in another (more scientific, if you like) way, I was just trying to point out what Gödel proved for maths and mathematical logic - there can be statements that are true but cannot be proved true - and that's LOGIC.

I saw the text as trying to show that logic could contradict itself when discussing God, and that therefore it was a useless tool in this context. If I was mistaken, then there's no problem. However, I was keen to show that the logic had not contradicted itself; it had merely shown that the premises are contradictory. Logic does not contradict itself.

I'm not quite sure how the old text shows that there are statements that are true but cannot be proven so, but I have no problem with the possibility.
 
Come on guys, this sarcasm and one-upmanship does nothing to further this discussion. In fact it threatens the continuation of the discussion and limits the progress that can be made. Your minds are better spent on this AI problem than on trying to demonstrate whose XXXX is bigger.


baddecision.jpg

Landlocked within that city are two Transports and a Frigate!

I think the first step to improving the AI is to fix the buggy decisions it makes, like building a Lighthouse, a Drydock, and multiple naval units within an ice-bound city. This wasted production of useless buildings and units holds the AI back, as the increased maintenance of these landlocked navies slows down its tech research. And although the AI flank attacks are a big improvement over previous AI military strategy, it still needs work. I've often seen the AI deploy a large chunk of its garrison out of a besieged city. Sometimes this has allowed me to take an enemy city when I would not otherwise have been able to. Then the 'deserting' units appear behind my lines and are easily picked off by reinforcements. The AI should be able to calculate the odds of various actions and then decide which one yields the best results. It's ludicrous that the AI has the odds for success, but then ultimately loses a city because it has deployed some of its critical garrison on a fool's errand. Even worse, because it happens so frequently, is when the AI has a string of units that are only turns apart from arriving at a common destination. Yet rather than delay the assault by waiting a few turns to allow the units to group into one powerful assault force, the AI will instead attack in twos and threes over the course of a few turns. The AI is its own worst enemy in these very common situations.

Now I'm not a programmer... yet; but couldn't the AI be given a more finite decision-making tree? Decisions based on the strengths and weaknesses of its civ traits? And military strategy based on odds of combat success? Couldn't the AI be programmed to follow a priority list when making these decisions? For example: Monty is at war. He's got a string of units spread out from his capital to the front lines, and his forward troops have reported the enemy strength in a target city. So first he calculates how many troops should be required for a successful siege. Then he looks at the disposition of his troops and calculates how best to assemble an appropriately sized force. If he can't deploy enough troops in X amount of turns, then he follows another decision branch. Since Monty is a warmonger, perhaps he would be willing to take more chances than other civs, and so he might go ahead with an attack that has moderate chances of success. Whereas another civ with other traits may decide that it's better to start a pillaging campaign in order to draw out enemy troops, in the hope of lessening the city garrisons.
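That kind of priority list can be sketched in a few lines of Python. To be clear, this is a toy illustration, not how Civ4's actual AI works: the function, thresholds, and the trait-based `risk_tolerance` knob are all invented for the example.

```python
# Hypothetical siege planner: walk a priority list of decision branches.
# All names and thresholds are invented for illustration.

def plan_siege(own_units, enemy_strength, turns_to_assemble,
               risk_tolerance, max_wait_turns=5):
    """Return an action string by checking each branch in priority order."""
    strength = sum(u["strength"] for u in own_units)
    odds = strength / (strength + enemy_strength)  # crude success estimate

    if odds >= 0.8:
        return "attack now"                      # clearly favorable
    if odds >= risk_tolerance:
        return "attack now (acceptable risk)"    # warmongers set this low
    if turns_to_assemble <= max_wait_turns:
        return "wait and mass units"             # group into one force first
    return "pillage to draw out the garrison"    # fall-through branch

# A 'warmonger' accepts moderate odds; a cautious leader does not.
monty = plan_siege([{"strength": 8}, {"strength": 8}], enemy_strength=12,
                   turns_to_assemble=9, risk_tolerance=0.5)
gandhi = plan_siege([{"strength": 8}, {"strength": 8}], enemy_strength=12,
                    turns_to_assemble=9, risk_tolerance=0.75)
print(monty)   # attack now (acceptable risk)
print(gandhi)  # pillage to draw out the garrison
```

The point is that the same situation produces different behaviour once a single trait parameter differs, which is exactly the "Monty versus a cautious civ" split described above.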

It'd be really cool if the AI could employ real tactics when engaging in underdog situations. Say an AI determines that it doesn't have a force capable of taking a city, and reinforcements would not arrive in a timely enough manner. So rather than suiciding its troops in high-risk, pointless engagements, it instead sends a forward expeditionary force to a non-targeted city in the hope that the enemy will redeploy its troops to protect this threatened city. But in reality those forward troops are just a diversion designed to lessen the target city's garrison while a sizeable force moves in for the kill. For this the AI would have to decide whether or not it has the forces in range to take a city. It decides that it does not, and so another decision branch comes into play. So the AI looks ahead and determines how long it would take to assemble an appropriately sized force. It discovers that this would take too long, and so the AI explores yet another decision branch. This one is to determine whether the AI can field enough units if some of the targeted city's units were to be deployed out of the city. If it decides that x units leaving the city will increase the odds of success, then it must determine how many units will be required to entice the enemy into redeploying its troops. Now this may be too much for this game, but it serves to illustrate my point that I think the AI could be given more options, which could result in better decision making.

But one thing to consider is that some posters (like warpstorm etc.), who are employed within the gaming industry and have some awareness of its motivations, have suggested that games with 'smart' AIs are games that don't sell well. So the creation of a superior AI may not be in the best interests of the developer, the publisher, or even the end user. I'm not sure that's a 'one size fits all' situation, but I have seen evidence to suggest that it does occur. And as others have already stated, in the case of the AI in chess games, the biggest difficulty in designing the AI was to make a game in which the human player actually has a reasonable chance of success. And I have too much respect for the talents of Firaxis, and particularly the AI programming skills of Soren, not to wonder if they held back Civ4's AI a little.
 
To put it differently: if a (very) wealthy sponsor were able to rent the Deep Blue hardware and engineers and the whole Firaxis crew for, say, a year, with no financial limitations, do you think it would be possible to come up with an AI that pretty much plays the way a human does (real initiative, real adaptation, playing to win, the ability to evaluate the real balance of power at any given moment to support or hamper different players, selecting the victory condition best adapted to its personality and position in the game, etc.)?
Yes, I think this could be done. And I think it is something that could possibly attract the attention of profit-motivated investors. Think of the publicity for Firaxis and 2K if Sid and/or Soren were to challenge Deep Blue. And how about a contest between competing hardware? Deep Blue and a couple of other supercomputers taking on the role of the human player, to determine which computer is best? This would give the creators a chance to prove the abilities of their creations by pitting them against other machines. I think it would be great fun, and given the popularity of the Civ franchise, I think it would attract much interest. I also think it would attract a lot of attention outside of the CivCom. But I've no idea whether this would really be economically feasible. If money were no object, then I think the project would be successful. Where there is a will, there is a way. But the will of those with the money is too often limited by the need for economic return.

I think the Civ4 AI could be programmed to make use of Deep Blue, similar to the way DB prosecutes a chess game. With all that storage and processing power, why couldn't it be programmed with sufficient data to allow it to predict the outcomes of many thousands of different possibilities? Wouldn't it just be a matter of writing a program to play the game in auto-mode and then recording what happens after hundreds and hundreds of games? And supplementing this data bank with human-played games? Then compile all that data into a complex decision-making tree that allows the computer to compare results from various actions to determine the best course of action? Isn't that how Deep Blue plays chess? By looking at a plethora of decisions, then making a decision based on long-term predictions and proven successes? I'm sure it would be impossible to predict and program every possible scenario. But I would think that the logic and decision-making process we humans use could be simplified in such a way as to narrow down the possible choices enough that the 'puter can be programmed with sufficient data/experience to make more 'informed' choices. It doesn't need to know everything. It just needs to know how to analyze each choice, and then it must be taught how to prioritize the resulting possibilities contained within its database.
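At its core, the "record lots of games, then look up what worked" idea is a statistics table. Here's a minimal sketch; the log format and situation keys are made up, and a real game's state space is astronomically larger (which is exactly why chess programs search rather than only look things up):

```python
from collections import defaultdict

# Hypothetical game log: (situation, action taken, won the game?)
game_log = [
    ("coastal-city", "build-harbor", True),
    ("coastal-city", "build-harbor", True),
    ("coastal-city", "build-barracks", False),
    ("icebound-city", "build-harbor", False),
    ("icebound-city", "build-granary", True),
]

def best_action(log, situation):
    """Pick the action with the highest observed win rate in this situation."""
    wins = defaultdict(int)
    plays = defaultdict(int)
    for sit, action, won in log:
        if sit == situation:
            plays[action] += 1
            wins[action] += won
    return max(plays, key=lambda a: wins[a] / plays[a])

print(best_action(game_log, "coastal-city"))   # build-harbor
print(best_action(game_log, "icebound-city"))  # build-granary
```

With enough recorded games this is essentially the "data bank plus prioritization" loop described above, though in practice you need far cleverer ways of grouping similar situations than exact string keys.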
 
White Elk's screenshot gives a good example of an area where I'd have thought it should be fairly easy to improve the AI. Long-term strategy may be awkward to program, but the AI shouldn't make decisions that are so obviously and immediately wrong. It surely can't be that hard to program an AI not to build ships and drydocks in a city that is icebound? Similarly, the AI seems to get things like basic use of city tiles wrong, which ought to be fairly easy for a computer to maximize. For example, on plenty of occasions I've seen the AI working two 2:food: 1:hammers: tiles when there's a 4:food: tile and a 2:hammers: 1:commerce: tile available. Isn't this kind of optimization something a computer should be good at?
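The tile case really is a one-line optimization: score each tile and work the best ones. A sketch with invented yield weights (real weights would shift with what the city currently needs, which is the genuinely hard part):

```python
# Score tiles by weighted yield and work the best ones first.
# Tiles are (food, hammers, commerce); the weights are invented.

def tile_score(tile, w_food=1.5, w_hammers=2.0, w_commerce=1.5):
    food, hammers, commerce = tile
    return w_food * food + w_hammers * hammers + w_commerce * commerce

tiles = [
    (2, 1, 0),  # one of the 2-food 1-hammer tiles the AI keeps working
    (2, 1, 0),
    (4, 0, 0),  # the 4-food tile it ignores
    (0, 2, 1),  # the 2-hammer 1-commerce tile it ignores
]

# With two citizens to place, pick the two highest-scoring tiles:
best_two = sorted(tiles, key=tile_score, reverse=True)[:2]
print(best_two)  # [(4, 0, 0), (0, 2, 1)] - not the two (2, 1, 0) tiles
```

The computer is indeed good at this; the only design question is what the weights should be in a given situation (growing city vs. producing city, etc.).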

The AI doesn't make quite as many blatantly wrong decisions as it did in Civ 3, but it still makes plenty, and I'd have thought these would be fairly easy to fix compared to trying to improve its overall strategy, while still giving a boost to its performance.
 
A very welcome return to the topic subject.

First of all, the example screenshot shows clearly that the AI in Civ 4 doesn't evaluate the results of actions, but instead has a predetermined list of things to do: things that are usually good choices in certain situations (it is a coastal city, and in coastal cities it's usually good to build lighthouse + harbor + naval units). This, of course, isn't AI programming - it's a simple "IF-ELSEIF-ELSE" statement.

I take the view that an AI can be programmed in a way that beats any human player at even odds (of course, AI discount levels like Emperor would be out of the question). Still, that would require a way of determining whether action A is better than action B at any given time, and the "tree analysis" method is both inefficient and memory-consuming. In most areas where this has been achieved, it was achieved with the help of some "learning procedure", with which the AI learns by example what is generally good and what is generally bad (in other words, learns how to evaluate the results of actions). Unfortunately, this isn't easy, and it also needs careful study of games played by very good (human) players - something that can't easily be done prior to the release of a game. Of course, even if you don't go that far, a good set of fixed rules will help a lot - such as "don't build a naval unit that can access fewer than X tiles", or "don't build a worker that has no tile to work on".
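Those fixed rules are cheap to implement as a veto pass over the candidate build list, run before any cleverer evaluation. A sketch; the function, city fields, and threshold are hypothetical and not the actual Civ4 SDK API:

```python
# Hypothetical sanity filter: veto builds that a fixed rule forbids.
# Field names and the threshold are invented; not the real Civ4 SDK.

MIN_ACCESSIBLE_WATER = 8  # "don't build naval stuff below this" (invented)

def allowed_builds(city, candidates):
    vetoed = set()
    if city["accessible_water_tiles"] < MIN_ACCESSIBLE_WATER:
        vetoed |= {"frigate", "transport", "drydock", "lighthouse"}
    if city["unimproved_tiles"] == 0:
        vetoed.add("worker")
    return [b for b in candidates if b not in vetoed]

icebound = {"accessible_water_tiles": 3, "unimproved_tiles": 5}
print(allowed_builds(icebound, ["frigate", "lighthouse", "granary", "worker"]))
# ['granary', 'worker']
```

A handful of rules like this would already have prevented the landlocked navy in White Elk's screenshot, without touching the AI's deeper strategy code at all.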

As for the fun aspect of the game, I admit I had the same idea (that an extremely difficult, or almost impossible to beat, game would be no fun at all) - but perhaps it's an illusion: if you managed to create the "perfect playing machine", then you could easily redefine the difficulty levels as levels with different bonuses for the player, with the Deity level being an equal challenge. But another fun aspect of the game can't easily be maintained: the diversity in the decisions of the various civ leaders. Since all of them would have to play in an optimal way, most probably they wouldn't take those sometimes irrational roads that make the game unpredictable (and replayable).

But in war the AI definitely needs improvement ASAP. It is both predictable and plainly stupid (a very bad combination of "traits"), and can SO easily be exploited.
 
Well, I've been away from "my" thread just a bit, and it's moving very fast... I had a lot of catching up to do. Here are a few things I picked up on the way. Sorry if some of them now seem like yesterday's story (not that this whole thread has ever been embarrassed by consistency, but hey, at least it's fun :) ):

(Quote:
Originally Posted by NapoléonPremier
One thing we should probably never do with science is say : "that's never gonna happen ; it can't ; it won't."
If you had told folks in 1900 that less than 70 years later men would walk on the moon, they would probably have laughed at you. Or at the beginning of World War II that about five years later you would be able to destroy a whole city with just one "atomic" bomb, same thing. And so on.)


dbergan said:
This comment needs a little perspective. If in the 1600s you had said (as many scientists did) that you would never get gold from lead via a chemical process, you would have been right. Later on we found out that both gold and lead were elements. Science isn't magic. It can't do everything.

But you can never know in advance. Nothing absolutely guarantees that, some day, for instance at the quantum level, it won't be possible to change lead into gold or into anything else (even if the energy required to do so doesn't make it worth it). Saying "it will never happen" is the worst attitude you can have in science (and probably in life in general, for that matter).


dbergan said:
I think there is one thing that people are overlooking... the amount of time it takes the AI to make a move. Currently it is programmed to do a civ's turn in under a second (most of the AI's time is spent updating the graphics on our screen... when you don't see a civ, their turn is over almost immediately). Computers have the processing power to do more, but do players want to wait? Are we willing to wait 10+ minutes for a good move, like what a chess program does on its top settings?

That is very true. Given the number of parameters to deal with and the extremely fast response time, the Civ AI can actually be viewed like some sort of genius in that regard.


Breunor said:
As an aside, (this maybe belongs on a different thread, sorry!) people ask me about chess vs. CIV. Its hard to answer -- I don't know myself. To me they are totally different experiences. Now that I'm in my 50's the chess scene is a little more recreational. How do you view this?

Chess is a noble game. But it's boring. Because, basically, it's maths. And on top of it now that most chess AI can kick your ass to the ground just like that and spit in your face, it's also depressing.:D

dbergan said:
Is the SDK going to allow AI manipulating? I think that would make things very interesting... could even get some AI vs AI tournaments going to see who makes the best script.

I think what will be REALLY REALLY interesting when the SDK comes out, in regard to the subject of my OP, is to see -if the software allows it- if some really motivated people will be able to come up with a significantly different way of running the AI, or if it will "just" be tweakings of the existing AI.

Pawel said:
The big decisions are how to interact with the others. What does it mean that I see forces on my border? Should I attack someone who is overexpanding and probably is vulnerable? The AI rarely does this. It is bad at assessing who is the greatest threat, and tends to deploy forces evenly, not taking into account that it might be better to have more either at a central location from which it could reinforce a sector under threat using inner lines, or where the threat is largest to begin with. In forming relations it rarely shows a strategy. To simplify a little, it just dislikes its neighbors.

Well, see, after all the AI is very human-like...;)


(Originally Posted by warpus
That's fine and dandy, but how exactly do you quantify how successful or not a decision was?)

Zombie69 said:
Easy: pit different AIs against one another, and then, from the pool of different AIs you have, use a genetic algorithm so that AIs that won more games get to "mate" more and have a bigger percentage of their "genes" (probably the weights of the nodes in the neural net) represented in the next generation. Do this over a few hundred generations and you can get something really good.
That's how I did it, anyway, when faced with a similar task in an AI course that I took. It works very well too.
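For what it's worth, the loop Zombie69 describes fits in a few lines. In this sketch a toy fitness function stands in for "play lots of Civ games against each other" (the genuinely expensive part), and the target weights are invented purely so the example has something to converge toward:

```python
import random

random.seed(42)

# Toy stand-in for "the weights of the nodes in the neural net".
TARGET = [0.2, 0.8, 0.5, 0.9]  # pretend these weights play best

def fitness(genes):
    """Higher is better; in reality this would be a win rate over many games."""
    return -sum((g - t) ** 2 for g, t in zip(genes, TARGET))

def evolve(pop_size=30, generations=200, mutation=0.1):
    pop = [[random.random() for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                # winners "mate" more
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))    # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(len(child))          # point mutation
            child[i] += random.uniform(-mutation, mutation)
            children.append(child)
        pop = parents + children                      # keep the elite
    return max(pop, key=fitness)

best = evolve()
print([round(g, 2) for g in best])  # should land close to TARGET
```

The structure (select, crossover, mutate, repeat) is the whole algorithm; everything hard about applying it to Civ lives inside the fitness evaluation.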

Well, that can't work, because the theory of evolution is a hoax. You would have to CREATE a perfect AI. In seven (or six) days preferably. Just kidding. Sorry.:cool:


warpus said:
My line of reasoning goes as follows: Humans are machines. What would prevent other machines, even if they're designed by humans, from being sentient as well? Are you saying humans are special in some regard that can't be duplicated?

Well, yeah, they're different. Everything that is alive is different. We're not just "machines", or not only. "Life" is something very, very special (and btw note that the leap from inanimate matter to living matter is still officially a scientific mystery). Even if "life" is something that cannot be created (it wasn't created in man or animals either), I think it seems very likely that someday we'll be able to create a machine that will perfectly emulate a sentient and intelligent being. So it won't really be sentient, but for all practical purposes it will be exactly as if it were.

atreas said:
If you want a better "game example" of what AI can and can't do, you should look at other kinds of games. One more suitable example is backgammon, where you also have the "stochastic nature" of the dice. Another example could be bridge, where you have an "unknown world". But chess definitely isn't the appropriate pattern for Civ (it's too different in two crucial aspects). If you look at AI achievements in such games, you will find examples like the remark of the world champion in backgammon after a match against a program: he said that "its STRATEGIC evaluation is far better than mine" (note that he is not talking about the AI's CALCULATING ability). That means there are statistical methods available that could create an AI that plays extremely strongly in Civ - the main point is that the AI must first use some method of statistical learning.

Thank you. That's an important element of answer to my original question. So it would be possible, with the appropriate investments, to get a strategically smarter AI. That's cool.


Zombie69 said:
There's a lot of stuff in quantum mechanics that I don't agree with, and that I truly believe will be disproven in due time. You see, I believe in absolute determinism.

Well, "absolute determinism" was a fad of the 19th century, when modern science was really beginning to explode. Since all that was known was the molecular level, the French mathematician Laplace formulated it this way: if you imagined a spirit (which he called a Daimon) who could know exactly the position and movement of every single particle in the universe, then it would know the exact past, present, and future of the whole universe. Of course, since then science has grown far more complex, and precisely quantum mechanics and the like have hinted (and have only just begun to do so) that things are much more subtle than that - as some subdiscussions in this thread also illustrate. And just "not agreeing with" quantum mechanics is a bit insufficient. It's a body of scientific work. You have to give a more convincing interpretation of the facts it accounts for if you want to prove it wrong.

Buckets said:
So what if the only AI smart enough to challenge a human doesn't want to play?


That's funny...:lol:

d80tb7 said:
sorry about all the physics stuff, but I'm good at physics and rubbish at Civ, so it's one of the few times I get to post things I know about on this forum


Don't apologize... In real life, you're probably the kind of guy that devises the nukes and so on with which Montezuma, Caesar and the like try to blast each other's head off...

whb said:
Biologically, most of the processes going on in the brain do not look very much like logic at all. Which is one of the reasons why students struggle so much trying to learn how to do formal mathematical proofs.

Humans use very arguable and informal methods in their decision making, often based much more on experience than on axioms and inference. Watch an episode of Deal or No Deal, and you will see people changing their tactics for picking a box number based on whether choosing birthdays gave a good result or a bad one last time.

Actually, using pure logic would be a hazard to survival in real life. On a daily basis, our brain mostly uses what is called the "representativeness heuristic", based mostly, as you said, on experience, which allows fast and, most of the time, efficient decisions. If you try to make a purely logical decision when the lion comes near you, you'll very likely be dead... (and actually, maybe some people did, but they died, so their genes didn't get passed on, so you could say that "purely logical" brains have been "selected out" by evolution... :mischief:)

dbergan said:
Actually, Taoism suggests exactly what I have been talking about... that there is a "way" (standard) of the universe that we are supposed to walk in and not step outside of. That it is only by the Tao that things can be understood.

Thanks to Civ 4 we all know about Taoism.

Yeah, I'm sure that Lao Tseu would be very proud to know that all of his work to unveil the mysteries of the universe has ended up as a gimmick in a video game...:D

atreas said:
Wait a bit, in a few moments UFO will enter the topic and explain everything to us - especially how the speed of light affects the free will that affects the artificial intelligence that affects Kasparov that affects Civ 4.

:D

Dusty Monkey said:
I care because it is the path of least resistance.

Well, it doesn't seem logical. The path of least resistance would seem to be, precisely, NOT to care. It's much easier.

Dusty Monkey said:
Having a justice system does not mean that society believes in free will. Society as a whole does believe in the concept, but that is not what it means. Another logical fallacy of some sort here. One does not lead to the other.

Well, maybe not ALL justice systems, but in our societies it's linked. We try people as responsible individuals who are capable of choosing between right and wrong. That's why (theoretically) we don't try the mentally impaired.

Dusty Monkey said:
People believe that they have been abducted by aliens too. A lot more than one person believes it. Is that observation also credible?

Have you noticed how these "abductions" happen almost exclusively to Americans ? Strange. Guess the ETs don't really care for the rest of us. ;)
 
atreas said:
And finally: the attempt to prove the existence of God logically has a long history of unsuccessful tries. I can safely bet you will neither manage it here nor convince each other. Just as a gift, I give you one simple example of what logic can create when you try to use it for subjects like infinity and God.

God is perfect and includes everything (by definition). There is also the devil, who isn't perfect (again by definition). Since God includes everything, he must also "contain" the devil (obvious deduction). Since the devil isn't perfect, there is a part of God that isn't perfect (again an obvious deduction). Since a part of God isn't perfect, he isn't perfect, so he isn't God.

So, I wish you good luck with your logical attempts.

This kind of stuff has been around practically as long as thinking itself. The "classic" "proof of the existence of God" goes more or less as follows (I think it might be from Descartes): "God is an infinite being; man is a finite being; a finite being cannot conceive of an infinite being (by definition); so if man can conceive of an infinite being, it's because an infinite being has put it in his mind; hence God exists."
It's fun, it's stimulating for the brain, but it doesn't prove anything and it doesn't get you anywhere. Plus, in the Middle Ages, people (theologians and the like) got really obsessed with these things, and I wouldn't be surprised if some found themselves in very big trouble, or were even burnt at the stake, for being at the wrong end of one of those logical gizmos (because you can just as easily "prove" that God doesn't exist...) :eek:

5cats said:
**nod nod**
Yes I've seen that text, or something like it, before.
I did mention a few times that many things are subjective
Which 'effect' are you talking about? The "God's not perfect" part? It's irrelevant if you allow for the idea that God can choose to be imperfect. Or if being imperfect is part of His Perfect Plan. Or that he's imperfect in a perfect way. Or the painfully obvious: that God only seems to be imperfect from our tiny human perspective, but is in fact Perfect.
And yes, 'logic' is highly overrated in discussions about God, lol!

Another famous one that really drove people crazy in its time is the "heavy rock" paradox: if God is omnipotent, can He create a rock so heavy that even He can't lift it? If He can't, then He is not omnipotent. But if He can, and then cannot lift that rock, He is not omnipotent either...:hammer2:
Since then, the work on the foundations of logical and mathematical systems, notably by Gödel and Bertrand Russell, has cleared up this mess (roughly: a consistent formal system rich enough to talk about itself will contain statements it can neither prove nor disprove).

5cats said:
Zeno! Of Elea, that is. He's my hero and I am of the opinion that his Arrow Paradox is still valid.

It's still valid as a paradox - meaning something that is false but has the appearance of truth. But it has long been proven that an infinite sum of numbers can converge to a finite number.:old:
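The convergence is easy to check numerically. A minimal sketch of Zeno's halves (the function name is just illustrative): each step covers half the remaining distance, and the partial sums visibly close in on 1.

```python
# Zeno's arrow: the distances 1/2 + 1/4 + 1/8 + ... form a geometric
# series whose partial sums approach 1 - so the arrow does arrive.
def partial_sum(n_terms):
    return sum(0.5 ** k for k in range(1, n_terms + 1))

for n in (1, 5, 10, 50):
    print(f"{n} terms: {partial_sum(n):.15f}")
```

After 50 terms the gap to 1 is about 2**-50, i.e. smaller than floating-point noise - the infinite sum is exactly 1.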
 
Dusty Monkey said:
Has the many-worlds interpretation of quantum theory been disproven while I wasn't looking?

You've got a point. If the many-worlds interpretation of quantum theory is correct, then the universe is most likely deterministic (emphasis: most likely) - and we have no free will.

However, most physicists prefer the Copenhagen interpretation. The many-worlds interpretation doesn't really have many followers.

Dusty Monkey said:
My journey into quantum theory is based on a desire to understand how quantum computing will work - are you under the opinion that quantum computers will be non-deterministic?

You could easily design a non-deterministic quantum computer (at least on paper, for now). It wouldn't be useful though, so what's the point?

White Elk said:
I think the Civ4 AI could be programmed to make use of DeepBlue, similar to the way DB prosecutes a chess game. With all that storage and processing power, why couldn't it be programmed with sufficient data to allow it to predict the outcomes of many thousands of different possibilities?

This has already been explained. Such a tree can't even be constructed for something like Go, and Civ4 is many orders of magnitude more complex than Go or chess.

A Civ4 AI has to use a different mechanism for making decisions than a chess AI does - unless we are dealing with quantum computers, which we're not.
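The scale of the problem can be sketched with back-of-the-envelope arithmetic. The chess and Go figures below are commonly cited rough estimates; the Civ4 numbers are pure guesses, included only to show how the exponent dwarfs the others.

```python
from math import log10

# Back-of-the-envelope game-tree sizes: branching_factor ** game_length.
# Chess (~35 legal moves, ~80 plies) and Go (~250 moves, ~150 plies) are
# commonly cited estimates; the Civ4 pair is an illustrative guess.
games = {
    "chess": (35, 80),
    "Go": (250, 150),
    "Civ4 (guess)": (1000, 300),
}

for name, (branching, depth) in games.items():
    exponent = depth * log10(branching)  # log10(branching ** depth)
    print(f"{name}: roughly 10^{exponent:.0f} possible lines of play")
```

Even the chess figure (~10^123) exceeds the number of atoms in the observable universe, which is why Deep Blue searched only a few moves deep with aggressive pruning rather than the whole tree.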

atreas said:
First of all, the example screen shows clearly that the AI in CIV 4 doesn't evaluate action results, but instead has a predetermined list of things to do: things that usually are good choices in some situations (it is a coastal city, and usually in coastal cities it's good to build lighthouse+harbor+navy units). This, of course, isn't AI programming - it's a simple "IF-ELSEIF-ELSE" statement.

It isn't technically AI, but people are going to call it AI anyway.

IMO the current way the AI "thinks" could be complemented with an exception list. I think you're right - the AI is pre-programmed with moves that are usually good but sometimes aren't. If each such action were complemented with a "don't do this in this rare case" check, the AI would get smarter.
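Such a rule-plus-exception scheme might look something like this sketch - the city fields, rules, and exceptions here are invented for illustration, not Civ4's actual code:

```python
# Hypothetical rule-plus-exception build selection, in the spirit of
# the "coastal city -> lighthouse" example above.
def choose_build(city):
    # Default rules: usually-good choices for a given situation.
    if city.get("coastal"):
        choice = "lighthouse"
    elif city.get("production", 0) > 10:
        choice = "barracks"
    else:
        choice = "granary"

    # Exception list: "don't do this in this rare case".
    exceptions = {
        "lighthouse": lambda c: c.get("lake_only"),  # no sea tiles to improve
        "barracks": lambda c: c.get("at_peace_pacifist"),
    }
    veto = exceptions.get(choice)
    if veto and veto(city):
        choice = "granary"  # safe fallback when the usual rule misfires
    return choice

print(choose_build({"coastal": True, "lake_only": False}))  # lighthouse
print(choose_build({"coastal": True, "lake_only": True}))   # granary
```

The appeal of this design is that each exception is cheap to add and test in isolation, so the rule base can be patched case by case without rethinking the whole decision procedure.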

NapoleonPremier said:
Saying "it will never happen" is the worst attitude you can have in science (and probably in life in general, for that matter).

But the point is that in some cases we can say with absolute certainty that "this is impossible". For example, we know with absolute certainty that a Homo sapiens will never survive in deep space without a spacesuit. Sure, with genetic modifications you could perhaps pull this off - but then you could argue that the subject in question is no longer Homo sapiens.

I agree that saying "this is impossible" is usually shortsighted, but in some rare instances it is the correct thing to say.

NapoleonPremier said:
Well, yeah, they're different. Everything that is alive is different. We're not just "machines", or not only. "Life" is something very very special (and btw note that the leap from inanimate matter to living matter is still officially a scientific mystery). Even if "life" is something that cannot be created (it wasn't created in man or animals either), I think it seems very likely that someday we'll be able to create a machine that will perfectly emulate a sentient and intelligent being. So it won't really be sentient, but for all practical matters it will be exactly as if.

Life is special in that regard - it is special because it's alive... which in my opinion is like saying that an apple is special for its apple-like qualities.

Yes, going from non-life to life IS a mystery, but let's look at the development of a single human. You start with a single cell and end up with a sentient being. Somehow natural processes are able to take one tiny cell, which can be easily chemically analyzed and quantified, and turn it into a sentient machine. However this process works, and however sentience is accomplished, the fact is that it's possible, since we can witness the creation of a sentient machine first-hand.

I'm not suggesting that we build a machine that emulates the human brain, and I'm not suggesting how we build this machine at all. My only claim is this: since natural processes are able to construct a sentient machine (a human), we can construct a sentient machine using natural processes as well, since we know it's possible. I'm not saying how we'll do it, I'm just saying that it's possible.

And you could easily say that humans aren't sentient - you could say that we're merely emulating sentience. For all intents and purposes, we're sentient. It doesn't matter if we're technically emulating sentience or if we are sentient - these are just semantics. If we construct a machine that for all intents and purposes is sentient, it is sentient, no matter how that sentience was achieved.

NapoleonPremier said:
Well, it doesn't seem logical. The path of least resistance would seem to be, precisely, NOT to care. It's much easier.

One could argue that this is precisely why 95% of humans have an irrational belief in a God - it's much easier this way.
 