Computers can play games from reading manuals, or, this thread isn't quite off-topic

I don't see how "infers" is a bad phrase; what would you suggest instead?

I dunno, "infers" suggests interpretation to me, which requires understanding. The whole sentence probably needs to be reconstructed to reflect the purely functionalist nature of the act.
 
There is software that already tries to understand the grammar. The next logical step would be "inference": not just breaking the sentence down, but working out how each word relates to the other words and why.

Don't search engines keep databases that compare searches and try to guess the "gist" of what you are looking for by giving a list of completions to choose from?
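Roughly how that completion step might work, as a minimal sketch: rank past queries that share the typed prefix by frequency. The toy query log here is hypothetical; real engines use far richer signals than raw prefix matches.

```python
from collections import Counter

# Hypothetical log of past searches (a stand-in for the real database).
past_queries = [
    "machine learning", "machine learning tutorial",
    "machine learning games", "macbook repair",
]

def suggest(prefix, k=3):
    """Return up to k of the most common past queries starting with prefix."""
    counts = Counter(q for q in past_queries if q.startswith(prefix))
    return [q for q, _ in counts.most_common(k)]

print(suggest("machine"))
```

A real system would also weight by recency and by what other users clicked, but the core "finish the gist" idea is just this kind of ranked prefix lookup.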

The ability to process and categorize information to shape a response gets closer with faster CPUs and quicker access to stored information.
 
I don't get one part.

Before they used the machine learning program, the computer was able to win 46% of the games. How? If I read correctly, the computer made completely random moves and actions and still won almost half the time??

I'm not sure if a 79% probability of winning with the machine learning program is that big a deal then...

Or did I misunderstand?

Anyway, it sounds cool. Reminds me a bit of some of the stuff I did, but I was only concerned with categorising random documents.
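For what it's worth, the simplest version of that kind of document categorisation can be sketched as keyword-overlap scoring. The categories and keyword lists below are made up for illustration; an actual system would use learned weights (e.g. naive Bayes or TF-IDF) rather than hand-picked words.

```python
# Hypothetical keyword lists per category (not from any real system).
CATEGORIES = {
    "sports": {"game", "score", "team"},
    "tech": {"computer", "software", "cpu"},
}

def categorize(text):
    """Assign text to the category whose keywords it overlaps most."""
    words = set(text.lower().split())
    return max(CATEGORIES, key=lambda c: len(CATEGORIES[c] & words))

print(categorize("the computer software crashed"))  # tech
```

Crude, but it illustrates why faster CPUs and quicker storage access matter: the real work is scoring each document against every category.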
 
I dunno, "infers" suggests interpretation to me, which requires understanding. The whole sentence probably needs to be reconstructed to reflect the purely functionalist nature of the act.
So, a couple of questions:
1. Why does interpretation require understanding?
2. Why can't we suppose that a machine has some level of understanding?
 
So, a couple of questions:
1. Why does interpretation require understanding?
2. Why can't we suppose that a machine has some level of understanding?

I guess it depends on how strictly you want to define interpretation and understanding. At a certain level, I think interpretation requires agency, because meaning is not merely conceptual. It also has a tacit and practical dimension that must be grasped or appreciated phenomenologically. I don't think there is any fundamental barrier to machines achieving this 'higher' level of capacity for interpretation, but I don't think they're there yet. They would probably need to learn by means of sensory-motor input and participate to a significant extent in social interaction, the latter of which is what makes language intelligible to us in the first place.
 
Well, that would be counter to Searle's Chinese Room thought experiment.

Personally, I think we need to have a little bit of higher level abstraction to talk about this stuff. When we play a game against an AI we give it the status of having intentions, beliefs, etc. I don't think that's something to be chided.
 
Well, that would be counter to Searle's Chinese Room thought experiment.

Personally, I think we need to have a little bit of higher level abstraction to talk about this stuff. When we play a game against an AI we give it the status of having intentions, beliefs, etc. I don't think that's something to be chided.

What's your take on the Chinese Room experiment and what's your objection to Searle's argument?
 
What's your take on the Chinese Room experiment and what's your objection to Searle's argument?
My take is that the understanding is delocalized from the person, but it is still understanding.

As for his actual argument, I don't really understand what exactly he's trying to get at, but note that there shouldn't be a particular reason to favor neurons over pens and paper. Whatever objections he has should apply to both or none.
 
Am I the only person here who questions the wisdom of teaching a supercomputer how to conquer the world? I mean, shouldn't we have taught it to play The Sims or Diablo instead?
 
I don't get one part.

Before they used the machine learning program, the computer was able to win 46% of the games. How? If I read correctly, the computer made completely random moves and actions and still won almost half the time??

I'm not sure if a 79% probability of winning with the machine learning program is that big a deal then...

Or did I misunderstand?

Anyway, it sounds cool. Reminds me a bit of some of the stuff I did, but I was only concerned with categorising random documents.
As I understand it, machine learning without the strategy guide led to a 46% success rate, and with the guide, 79%.
 
One test might be whether the computer could analyze and follow a set of instructions for an unfamiliar task. And indeed, in the last few years, researchers at MIT’s Computer Science and Artificial Intelligence Lab have begun designing machine-learning systems that do exactly that, with surprisingly good results.

Fantastic progress - but toward what? :eek:

I for one welcome our new computer overlords.

I, for one, think it's high time we thought through the ethics of this. We're on our way to making intelligent agents out of things that will (unless we take extreme care - and probably even then) have radically alien thought- and motivation-patterns. And those agents will evolve - literally - with much higher rates of "mutation" and probably faster reproduction. At first, that will happen under close human supervision. Until that becomes uneconomical...

humorous video

Am I the only person here who questions the wisdom of teaching a supercomputer how to conquer the world?
:badcomp:...:assimilate:...:run:...:bowdown:
(Wow, there sure are a lot of appropriate smileys to choose from)
 
Am I the only person here who questions the wisdom of teaching a supercomputer how to conquer the world? I mean, shouldn't we have taught it to play The Sims or Diablo instead?

WOULD YOU LIKE TO PLAY A GAME OF THERMONUCLEAR WAR?
(y) (n)
 
WOULD YOU LIKE TO PLAY A GAME OF THERMONUCLEAR WAR?
(y) (n)

No. The only way to win is not to play.

(Seriously, that movie was dumb. The only good thing about it was Ally Sheedy... mmmmm... :love:)
 