All right, here we go.
"I'd appreciate if we wouldn't dumb this down to the level of echoes."
I understand that that kind of talk is annoying to enthusiasts, but if you want me to stop, you have to stop making essentially the same point for me:
"AI operates on 10000x more data than average human"
"10x the library of congress"
LLM generative AI does what it does by drawing on a vast database of things people have said. Do its programs conduct operations on that speech? And do some of those operations resemble what humans do when they use language? Yes. BUT. The programmers didn't teach the AI to speak. They gave it a really huge pile of instances of speaking, and then created procedures by which it could recombine the material in that dataset.
"Clinging to some allegedly irreproducible thought derived from temporary absence of sensory interface as an ultimate proof that AI categorically can't think"
I wasn't doing that. I was providing one kind of thing that we call "thought" (the everyday, first-person kind: "I think I'll have some apple juice") and saying that AI cannot think that way, doesn't have that kind of thought. This was only in refutation of the claim that, if we deny AI the label of thought, we do so on behalf of an Einstein-level definition of what constitutes thought (which would exclude the vast majority of humans as well). No. There are very ordinary, everyday thoughts that every human being has and AI doesn't have. (Thoughts we have no need for it to bother having; what we need is for it to say "I think I'll have apple juice this morning.") More on this in a future post.
"I’ll take a Ritz — buttery and smooth pairs well with just about anything."
I'm not surprised that AI could give an answer, but remember why I posed the which-cracker question: it is a simple question that exposes simple echolalia (that of a parrot) as not being thought. I have granted that AI is sophisticated echolalia, and so exposing it as such would require more sophisticated techniques than it takes to expose a parrot as such. By the way, I'm also not surprised it picked Ritz. More on that later.
Ok, on to the real point.
This discussion has reached the stage where we can no longer ask the initial question “Does AI think?” And that is because Moriarte has offered a nice formulation that advances our thinking on the matter, with his “domains of think.” What he points out is that there are various different operations that go under the broad label of “thought,” and that AI is good at some of them and not (“yet,” he would say) as good at others.
That’s something that often happens in the course of thinking an issue through by back-and-forth exchange (which is what I think we’re all doing here). Someone says, “we’re looking at X as though it’s a monolith, when in fact it’s made up of many things.” And if everyone agrees, that enables a new kind of approach to X.
So now we can ask the better question: “which of the things that have traditionally gone under the label ‘thinking’ does AI do better than humans, which just as well, and which not as well?” (And that’s what Moriarte’s follow-up chart went on to do.) I don’t know whether, having given answers to all of that, we will be able to go back to the old question and say “on the whole, I would say it is/is not thinking,” but it’s at least a possibility.
We should be clear about the nature of our core intellectual task here (with the big question): we are asking whether a particular thing (what AI does) fits within the commonly accepted definition of a word, and a concept: thinking. Yes, @Akka, the starting point for this task will be what we have meant when we have previously used that word and concept in connection with a human cognitive process. One, because the only creature that bothers drawing up definitions is humans. Two, because “man is the measure of all things.” Three, because, until three years ago, no one was claiming that anything but a human could think, so the definition has naturally had millennia to concern itself with human thinking processes. Four, because it is humans who are the interested party in this matter, who care about the question “can AI think?”
In the end, we don’t have to limit ourselves to that old, human-based definition of the verb “to think.” We can say there’s a new kid on the block, and that kid can do cognitive operations that humans can’t do (my guess about the content of the video, just from its title). And further, we can say, “so if we stretch the definition to account for all the things that both entities can do (or devise a new word for that totality), then human thinking no longer needs to serve as the basis of our definition.” But our starting point will be what the word has meant, to humans and about a human activity, up till now.
As soon as we ask our new question, we get results along the lines of what Moriarte laid out. I’ll make a Venn diagram. [To be posted later, because that takes me some work.] For purposes of this first Venn diagram, I’ll treat human thought as though it is an established 1) norm and 2) maximum. Don’t worry: in time, I’ll let you challenge this first Venn diagram, and may even concede myself that it’s not accurate.
Enough for now. This thing’s already a wall of text. You can respond, of course, but my answer might be “Hold on. I’m getting to that.”