Should AI be regarded as Turing-test verified?

Kyriakos (Creator). Joined: Oct 15, 2003. Messages: 78,218. Location: The Dream
I don't think it is a good idea at all to claim that something is AI, or intelligent while being a machine, just because some observers were tricked by a programmer into thinking the object was actually intelligent or sensing anything at all.
I mean I want the AI in computer games to at least not kill itself through monumentally bizarre (by human standards) choices, but I am well aware that no matter what, I am playing against a computer program and not against something sensing anything of the game. The stupid hordes from the east don't notice we are on a map of Cilicia or wherever; they just move according to the variables bounding them in finite ways through a script in the game engine's language. They sense nothing at all, but I don't bother about this in a game.
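The scripted, non-sensing movement described above can be sketched in a few lines. `horde_move` and every number here are invented purely for illustration, not taken from any actual game engine:

```python
# A hypothetical sketch of scripted game "AI": the units don't sense a
# map of Cilicia or anywhere else, they just follow fixed rules over a
# handful of bounded variables.

def horde_move(unit_x: int, target_x: int, map_width: int) -> int:
    """Move one step toward the target, clamped to the map bounds."""
    step = 1 if target_x > unit_x else -1 if target_x < unit_x else 0
    return max(0, min(map_width - 1, unit_x + step))

# The "horde" marches west across a 10-tile map, one rule at a time.
pos = 9
path = []
for _ in range(5):
    pos = horde_move(pos, target_x=0, map_width=10)
    path.append(pos)
print(path)  # [8, 7, 6, 5, 4]
```

However elaborate a real game script gets, it is still this at bottom: bounded variables updated by fixed rules, with nothing doing any sensing.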

I do bother in actual science, though. So I wanted to ask whether you are of the view that having some machine pass a test along Turing's lines is at all worthy of being tied to the issue of artificial intelligence, given that it has nothing to do with any intelligence on the machine's part.
 
As somebody who has seriously considered asking the Queen of Egypt (I forget her name... thingy... whatever...) out on a date, I'll admit to thinking I was playing real people in Civ 3 and 4.

I suppose it depends on how much you can immerse yourself in the fiction.

And I've had deep meaningful conversations with many an automated telephone service.

So, I doubt a mere Turing test failure is going to make me think it's not a real person at the other end of the line.

What happens when you read a novel, Mr Kos? Do you see only lines of print? Or are we thinking at cross-purposes now?
 
^Immersion is indeed a vastly interesting human ability, but my issue is with people claiming that a machine which senses absolutely nothing at all should be deemed to some degree an AI if it manages to fool people into thinking it is a human. In computer games this is benign because we already know it is a computer controlling the enemies. But what if it were marketed as human, and then people with neckbeards argued that the artillery bombardment in Civ 3 shows actual computer intelligence? (btw, the AI in Civ 3 barely uses artillery at all and almost never builds it either) ;)
 
Given that people have programmed the "AI" in your games, in a sense you are playing real people. It's just a bunch of people who've figured out every sensible response to whatever moves you make.

Or isn't that so?

How could you distinguish between the "AI" and a computer set up to look as it normally does but in fact controlled by a remote operator?
 
Given that people have programmed the "AI" in your games, in a sense you are playing real people. It's just a bunch of people who've figured out every sensible response to whatever moves you make.

Or isn't that so?

That is more like fossils of people's thoughts, presented by, I don't know, some puppet running in a cartwheel until the battery runs out. :borg:
 
I don't think it is a good idea at all to claim that something is AI, or intelligent while being a machine, just because some observers were tricked by a programmer into thinking the object was actually intelligent or sensing anything at all.
I mean I want the AI in computer games to at least not kill itself through monumentally bizarre (by human standards) choices, but I am well aware that no matter what, I am playing against a computer program and not against something sensing anything of the game. The stupid hordes from the east don't notice we are on a map of Cilicia or wherever; they just move according to the variables bounding them in finite ways through a script in the game engine's language. They sense nothing at all, but I don't bother about this in a game.

I do bother in actual science, though. So I wanted to ask whether you are of the view that having some machine pass a test along Turing's lines is at all worthy of being tied to the issue of artificial intelligence, given that it has nothing to do with any intelligence on the machine's part.

"Should AI be regarded as Turing-test verified?"

Yes, if it passes the Turing test.

What on earth is the actual question here?

My real concern is that, as a human, I may fail the Turing test if I suck too much.

The Turing test isn't really applicable to humans... and as computers get better at passing it, the average human pass rate will drop until it equals the AI pass rate.
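That convergence claim can be put as a toy simulation; `pass_rate`, the `detectability` parameter, and all the numbers below are entirely hypothetical:

```python
import random

# Toy model: a judge labels the subject "human" unless the machine
# gives itself away, which happens with probability `detectability`.
def pass_rate(detectability: float, trials: int = 10_000, seed: int = 0) -> float:
    """Fraction of trials in which the judge labels the subject 'human'."""
    rng = random.Random(seed)
    passed = sum(1 for _ in range(trials) if rng.random() >= detectability)
    return passed / trials

print(pass_rate(0.5))  # a detectable machine passes only about half the time
print(pass_rate(0.0))  # an indistinguishable machine always passes: 1.0
```

As detectability falls to zero, the machine's pass rate climbs to the human baseline, which is the convergence being described.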
 
Anyway, I prepared a very brief elaboration on what was meant by "sense" :)

Thank you all for the interesting replies :D

I should make clear what I mean by "sense":

-It seems quite likely that any biological being possessing at least some basic functions (e.g. the ability to move or adapt) has, in theory (we obviously cannot experience it directly), some sort of sense of the environment it is in, in whatever manner. This obviously does not mean that (for example) an ant senses the ground it walks on in the same or a similar way as you do while watching it. It merely means that it stands to reason to guess that the ant has a sense of something, which likely is a sense of an environment in whatever terms the ant instinctively feels it.
A computer evidently does not have this sense at all. I think of the computer's ability to run a program (e.g. to present the first massive number of prime numbers) in much the same way that I view a rock's "ability" to fall from a great height if you drop it there: the rock is running the program known as 'gravity', and the computer is running some analogous program you fed it.
The huge difference between things that seem to sense something and things that do not is that the latter appear to be at absolute zero regarding any prospect of ever having a sense at all.
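The prime-number example above can be made concrete as blind rule-following; this minimal sketch (function name and approach are just for illustration) emits primes with no "sense" of what a prime is:

```python
# The computer "running a program" as mechanically as a rock falls:
# trial division against the primes found so far, rule after rule.

def first_primes(n: int) -> list[int]:
    """Return the first n prime numbers by blind trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        # A candidate is prime if no earlier prime divides it.
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

print(first_primes(10))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```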

Which is why, in my view, actual AI is impossible, and an AI can only become a reality if it is tied to some DNA which would provide (in ways I suppose not evident or calculable from the start) that ability to sense. But that would again come from the DNA, and not from the computer itself. A bit like maiming a frog and then implanting some mechanical/robotic arm onto it: the frog is what senses things and may at times move the arm. The arm senses nothing at all.
 
I mean I want the AI in computer games to at least not kill itself through monumentally bizarre (by human standards) choices.

The thing is that no video game out there uses Artificial Intelligence - in the sense of what is meant by "Artificial Intelligence" when a computer scientist says it. If we ever get true AI, it's not going to be like anything we've ever seen in a game - both in terms of implementation and behaviour.

I don't think that the Turing test is a perfect test, or maybe even a good test, for sentience - but eventually we're going to need a test to figure this out. The Turing test is a good place to start, IMO.
 
Let's say we can make a computer model of a neuron that behaves the same way a real one does. If we had enough computing power, could we not completely reproduce what all the neurons in an ant do? If we hooked that simulation up to a robot ant, would it not behave just like a real one?

If our robot ant and a real ant act the same, both outside in behavior and inside in neural computations, why shouldn't we think that they both sense in the same way?
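The thought experiment assumes some computational neuron model. Here is a minimal sketch of one common simplification, the leaky integrate-and-fire unit, with illustrative (not biologically fitted) parameters:

```python
# Leaky integrate-and-fire neuron: the membrane potential leaks toward
# zero, integrates incoming current, and fires (then resets) when it
# crosses a threshold. Parameters here are illustrative only.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which the model neuron spikes."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(inputs):
        v = v * leak + i_in          # leak, then integrate this step's input
        if v >= threshold:
            spikes.append(t)         # fire...
            v = 0.0                  # ...and reset
    return spikes

# Constant drive of 0.3 per step makes the neuron spike periodically.
print(simulate_lif([0.3] * 20))  # [3, 7, 11, 15, 19]
```

Real biophysical models (e.g. Hodgkin-Huxley type) are far richer, but the question in the post is the same: if enough of these units reproduce the ant's neural computations, what is missing?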
 
Let's say we can make a computer model of a neuron that behaves the same way a real one does. If we had enough computing power, could we not completely reproduce what all the neurons in an ant do? If we hooked that simulation up to a robot ant, would it not behave just like a real one?

If our robot ant and a real ant act the same, both outside in behavior and inside in neural computations, why shouldn't we think that they both sense in the same way?

You may get some answer from Socrates to Theaetetos:

One cannot claim he has complete knowledge of something if he only knows that something (even "perfectly") up to a part which is not the final one, e.g. not the final, fundamental, tiniest bit of it. Likewise, how would you model a neuron if not by stopping at some pre-final point? [Because it seems unlikely, and is surely not certain, that (here biological) matter ends at some level of particle/part.] :)
 
I don't think that's very problematic. That is, a neuron could be modeled well enough to be functionally identical.

I see objections to the possibility of true AI as reminiscent of vitalism (the idea that life has some special substance, different from ordinary matter, that makes it work). They note that life is complicated and hard to understand, so they presume some magical thing occurs. I don't believe in magic, and when we figure out how neurons and action potentials lead to intelligence, we'll be able to do it with transistors and bits too.
 
^That is very cool, but has 0% to do with what I wrote :thumbsup: Not modelling the final foundation block of something (if such a final thing even exists) cannot really be deemed an approximation that allows for replication of that thing, let alone when we speak of actual DNA/biological matter. A bit like claiming a 3d modelling program allows you to create spheres because it actually includes the full calculation of the digits of pi. It doesn't, but then again that program does not aspire to let you replicate an irrational number anyway; the goal there is not replication at all, just a decent trick of likeness which is fake from the start :)
 
A bit like claiming a 3d modelling program allows you to create spheres because it actually includes the full calculation of the digits of pi. It doesn't,

Sure it does: given sufficient computing power, a 3d modelling program can create a molecule-for-molecule replica of an arbitrary sphere that exists in reality.
 
Sure it does: given sufficient computing power, a 3d modelling program can create a molecule-for-molecule replica of an arbitrary sphere that exists in reality.

?

No actual sphere exists 'in reality' anyway, much like no other actual perfect form, or anything perfectly equal to anything else. Imagine two Lego soldiers. To the human eye they can seem entirely identical. They aren't, because the machine producing them has, at each stage, an accuracy of some fraction of a millimeter at best.
With pi the approximation has other issues too, obviously, because the digits have no set end and are non-periodic to begin with.
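The point about pi and spheres can be made concrete: a program only ever holds a finite approximation of pi, and a modelled "sphere" is a finite mesh of flat faces. `sphere_vertices` below is a hypothetical minimal UV-sphere generator, not taken from any real modelling package:

```python
import math

# The program's pi is a fixed 64-bit approximation, not pi itself.
print(math.pi)  # 3.141592653589793

# A "UV sphere" sampled on latitude/longitude rings: more rings and
# segments look rounder, but the result is always a polyhedron.
def sphere_vertices(n_lat: int, n_lon: int, r: float = 1.0):
    """Return ring vertices of a UV sphere (poles omitted for brevity)."""
    verts = []
    for i in range(1, n_lat):
        theta = math.pi * i / n_lat            # latitude angle
        for j in range(n_lon):
            phi = 2 * math.pi * j / n_lon      # longitude angle
            verts.append((r * math.sin(theta) * math.cos(phi),
                          r * math.sin(theta) * math.sin(phi),
                          r * math.cos(theta)))
    return verts

print(len(sphere_vertices(8, 16)))  # 112 vertices: (8 - 1) rings of 16
```

Raising `n_lat` and `n_lon` makes the mesh look rounder, but it remains a polyhedron built from a truncated pi.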
 
The question is why that minuscule inexactness should matter. I can't calculate the behavior of all the electrons in my pocket calculator, but I can still say exactly what it reads out when I enter a given input. That's where I think you're hiding the magical thinking. What is it about neurons that lets them bring about intelligence in a way that you can't with electronics?
 