The AI Thread

Yes, I agree, AI doesn't think.

I don't agree with this. I don't think holding AI to the standards of Einstein, while neglecting billions of people below that standard, is anything but a comedically desperate way to approach intelligence categorisation. It is an argument taken to absurdity by a respondent dead set on proving that what acts as intelligence in practice cannot be intelligence in theory. AI uses reasoning, obviously. Far better than most humans. Reasoning is not a divine ability. It's a set of several mathematical instruments which, when assembled, enable problem-solving. Any entity that can solve complex real-life problems deserves the label "thinking". Saying that AI is excluded is like saying that a bird isn't truly flying because it doesn't flap its wings the same way as an insect, or that a submarine doesn't "swim" because it doesn't use fins. We don't deny functionality simply because the underlying mechanics differ from familiar biological models. If the outcome aligns with what we mean by "reasoning" or "thinking" in everyday practice - deriving conclusions, adapting to new information, solving problems - then refusing to apply the label to AI is little more than wordplay. The criteria for intelligence should be anchored in observable capability, not metaphysical gatekeeping. Otherwise, we risk turning the concept into a kind of theological doctrine, forever reserved for humans alone, even as evidence of non-human intelligence (both biological and artificial) keeps mounting.
 
AI isn't replacing art, not yet, not meaningfully. AI isn't replacing writers.
AI is already making a huge dent in the work of freelance journalists, illustrators and translators, man.
You've got a faaaaar too rosy picture of AI's impact on people. It's just not big enough yet, but it's coming, and it isn't going to be pretty.
I don't agree with this. I don't think holding AI to the standards of Einstein, while neglecting billions of people below that standard, is anything but a comedically desperate way to approach intelligence categorisation. It is an argument taken to absurdity by a respondent dead set on proving that what acts as intelligence in practice cannot be intelligence in theory.
Yeah.
Dodging the fundamental questions of "what is thinking?" and "what criteria would be applied to determine if something thinks?", combined with double standards on humans and AI and absurdly specific expectations, simply shows a strong desire to reach the intended outcome regardless of facts.
 
If the outcome aligns
Ok, but this by itself also can't be our standard. A parrot can say "I would like a cracker" and that aligns exactly with a child saying "I would like a cracker"; in the second case, the phrase is the product of thought; in the first it is echolalia. The process by which a textual string is produced does matter. I can copy and paste an article from a physics journal and you will not think it is the product of my thinking even though it exactly resembles a product of human thought (and is the product of someone's thought).

To be clear, I do not assert that you are treating aligned outcomes as an absolute standard. You elaborate that you observe AI
deriving conclusions, adapting to new information, solving problems

If you ask a parrot "would you like a Ritz or a Saltine?" it won't have an answer for you; the child will. The child has "adapted to new information." So that's proof, in this instance, that the child is thinking, but the parrot is not. But what if AI's products are just a more sophisticated form of echolalia, so that it can go generate a second thinking-sounding, but ultimately echolalic, phrase in connection to the new question you put to it?

So we can at least ask, in the case of each seeming instance of "deriving conclusions, adapting to new information, solving problems": is it really doing that, or just giving the impression that it is doing that? For you, mentioning Kafka in reference to call centers was a sign of thinking; for me it's the very literary reference one would expect from it, if it is drawing on an existing data set of human complaints about call centers.

So even though you have a higher standard than just "aligned outcomes," it is your starting point, and that makes you just as predisposed to attribute thinking to this thing as I am predisposed to deny it that term.

We do gatekeep words. (To "define" means precisely to say what meanings you are keeping out of a word.) It is in fact not idiomatic English to say that a submarine swims. If you said that, a native English speaker would know what you were driving at but would think it was an odd expression. Here's what Google's AI will tell you:

There isn't a single specific word for a submarine's motion; instead, the term depends on context, but includes "diving," "surfacing," "hydrodynamic maneuvering," "steaming" (for nuclear subs), and general terms like "sailing" or "cruising". For technical descriptions of underwater movement, terms like "surge," "sway," and "heave" are used, describing the six degrees of freedom of movement.
It is in some ways crazy that English is willing to do without a verb for the motion of a submarine, but "swim" is available and we have passed on it. We must collectively feel that while a submarine's motion very much resembles that of a fish, it does not resemble it sufficiently to fall within the definition of the word "swim."

No product of AI that I have seen sufficiently resembles the product of human thought for me to feel that it deserves to be included in the fullest definition of the word thinking, rather than just sophisticated echolalia.

(By the way, I should stress that I regard the question that I am speaking to here as having effectively been mooted by your own good thinking in post #1527, with its very useful "domains of think," and follow-up break-down. Rather than bicker fruitlessly about whether it deserves the global term "thought," let's look at various kinds of processing that go under that term and say which it does well and which poorly.)
 
Ok, but this by itself also can't be our standard. A parrot can say "I would like a cracker" and that aligns exactly with a child saying "I would like a cracker"; in the second case, the phrase is the product of thought; in the first it is echolalia. The process by which a textual string is produced does matter. I can copy and paste an article from a physics journal and you will not think it is the product of my thinking even though it exactly resembles a product of human thought (and is the product of someone's thought).
=>
Except you can ask an AI to explain what it means when it communicates with you, and it can answer, which immediately throws the whole comparison out of the window.

I understand the underlying point: that just because it gives the illusion doesn't mean there is a real deal behind it. But my retort is precisely this: by asking about the answers, you can narrow down the supposed illusion, and if it can navigate all these questions by making consistent and coherent answers, then what can be your argument that it's still an illusion?
You're just systematically and selectively ignoring anything that doesn't conform to your predetermined ending point, even when it's specifically and repeatedly pointed to you. We're downright in Planet of the Apes territory here.
 
I expanded on that sentence, Akka, in the spot after the second Moriarte quotation, and my expanded version addresses your point (and was written with you in mind).
 
Oh one more thing

I don't think holding AI to the standards of Einstein, while neglecting billions of people below that standard, is anything but a comedically desperate way to approach intelligence categorisation.

Questioning whether what AI does deserves to be categorized under the word "thinking" does not require holding AI to the standards of an Einstein. Billions of people are capable of the following thought: "I think I'll have apple juice this morning rather than orange juice." AI can't think that very, very simple thought.
 
I expanded on that sentence, Akka, in the spot after the second Moriarte quotation, and my expanded version addresses your point (and was written with you in mind).
Yeah I know, you said:
No product of AI that I have seen sufficiently resembles the product of human thought for me to feel that it deserves to be included in the fullest definition of the word thinking, rather than just sophisticated echolalia.
And it just falls back into exactly what was already repeatedly pointed out to you:
Because so far, you haven't actually been able to argue "it doesn't think"; you've only argued "it doesn't think the exact same way, with the exact same reactions, as a human". Which basically means you will only consider that something is able to think if it's a human, which ends up in circular reasoning where only humans, by definition, can think.
Saying that AI is excluded is like saying that a bird isn’t truly flying because it doesn’t flap its wings the same way as an insect
Which once again brings us back to:
You're just systematically and selectively ignoring anything that doesn't conform to your predetermined ending point, even when it's specifically and repeatedly pointed to you. We're downright in Planet of the Apes territory here.
You're stuck in a loop of denial here - which kind of illustrates the circular reasoning I have, again, already pointed out.
 
Yeah I know, you said:
No, I said:
If you ask a parrot "would you like a Ritz or a Saltine?" it won't have an answer for you; the child will. The child has "adapted to new information." So that's proof, in this instance, that the child is thinking, but the parrot is not. But what if AI's products are just a more sophisticated form of echolalia, so that it can go generate a second thinking-sounding, but ultimately echolalic, phrase in connection to the new question you put to it?
We are arguing about what activities should be included in the definition of a particular verb. Until three years ago, the processes that fell under that word were processes conducted exclusively* by humans. A zillion of the word's meanings are therefore going to have derived from a peculiarly human activity. It's in some ways a weird thing that we say "I think I will have a glass of apple juice." But we do say it, so it's part of the existing meaning/definition of the word think. So yes, human thought is going to be the starting point for any attempted redefinition of this word, and the standard against which the new thing has to prove itself to fit within that definition. Submarine motion, a new kind of underwater motion, didn't prove itself as fitting within the definition of the English word "swim."

I'd be happy to have you supply me with a product of AI processing that cannot be explained as simply sophisticated echolalia. I mean, it means something, doesn't it, that to do what it does it draws on a massive dataset of humans-using-language?

*almost exclusively. There were some people who tried to characterize, and not with humans as the ultimate reference point, the particular thinking processes of animals. I remember being excited when I read an essay where the author said that his cat had better processing power than he did for working out matters in three-dimensional space. That struck me as correct (about cats), and as making the point that the author wanted to make: that we shouldn't limit the "domains of think" to those that humans can do or do well.
 
It wouldn't take much to have an agentic process with a running "wants" listener get triggered, have it generate a sentence consistent with it articulating its preference, and then use a "should I act?" listener so that a verbal read on the preference articulation, alone or checked against the preference state, decides whether an action should be taken.
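Something like this toy sketch, to make it concrete (every name and number here is invented for illustration; it's not any real agent framework):

```python
import random

# Toy sketch of the "wants listener" idea. All names and the preference
# numbers are made up for illustration; no real framework is implied.

class Agent:
    def __init__(self):
        # Internal preference state that both listeners consult.
        self.preferences = {"apple juice": 0.7, "orange juice": 0.3}

    def wants_listener(self):
        # Fires on some trigger; samples a want weighted by preference state.
        wants = list(self.preferences)
        weights = list(self.preferences.values())
        return random.choices(wants, weights=weights)[0]

    def articulate(self, want):
        # Generate a sentence consistent with articulating the preference.
        return f"I think I'll have {want} this morning."

    def should_act_listener(self, utterance, want):
        # Verbal read on the articulation, checked against preference state.
        return want in utterance and self.preferences[want] > 0.5

agent = Agent()
want = agent.wants_listener()
utterance = agent.articulate(want)
print(utterance)
if agent.should_act_listener(utterance, want):
    print(f"(acting: pouring {want})")
```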

No, they aren't having spontaneous "thoughts" as living conscious beings, but no one is gearing them to be.

Deciding to have orange juice via a vocalized inner narrative isn't really the locus of our thinking either; lots of people skip the vocalization, but it is still consciously considered.

We've definitely scaled well past "sophisticated echolalia", and the models decreasingly give me the Markov ick; it's quite rare these days.

I find it regularly more creative than people.
 
I don't agree with this. I don't think holding AI to the standards of Einstein, while neglecting billions of people below that standard, is anything but a comedically desperate way to approach intelligence categorisation. It is an argument taken to absurdity by a respondent dead set on proving that what acts as intelligence in practice cannot be intelligence in theory. AI uses reasoning, obviously. Far better than most humans. Reasoning is not a divine ability. It's a set of several mathematical instruments which, when assembled, enable problem-solving. Any entity that can solve complex real-life problems deserves the label "thinking". Saying that AI is excluded is like saying that a bird isn't truly flying because it doesn't flap its wings the same way as an insect, or that a submarine doesn't "swim" because it doesn't use fins. We don't deny functionality simply because the underlying mechanics differ from familiar biological models. If the outcome aligns with what we mean by "reasoning" or "thinking" in everyday practice - deriving conclusions, adapting to new information, solving problems - then refusing to apply the label to AI is little more than wordplay. The criteria for intelligence should be anchored in observable capability, not metaphysical gatekeeping. Otherwise, we risk turning the concept into a kind of theological doctrine, forever reserved for humans alone, even as evidence of non-human intelligence (both biological and artificial) keeps mounting.
I don't think it's metaphysical, and I don't think it's about intelligence level, to ask whether AI does or doesn't have thoughts. I think "thought" is a word that describes a type of physiological cognitive system. They can reason in their way, different from ours, and it can still be reasoning without thinking.
 
I find it regularly more creative than people.
Can you post an instance where you found that to be the case?

And can I ask you: how much, if at all, did you make use of generative AI in your interesting post on Leviticus in the Kirk thread? Either in composing that post, or just when you were doing your study of the book of Leviticus.
 
Can you post an instance where you found that to be the case?
I wouldn't even know where to start. Most people can't even keep up in a topic to be creative in the first place.
And can I ask you: how much, if at all, did you make use of generative AI in your interesting post on Leviticus in the Kirk thread? either in composing that post, or just when you were doing your study of the book of Leviticus.
None in composing, none in coming to my understanding of Leviticus, but definitely when diving into translations and wording, and when looking for counter-arguments against my thesis.
 
Can you post an instance where you found that to be the case?

AI operates on 10000x more data than the average human. In practice that means occasionally finding non-obvious connections between aspects of that data.

We both know most reasoning mechanisms (induction, deduction, ...). But you and I have terrible memory; we are human. AI, on the other hand, can remember 10x the Library of Congress. Or 100x, if the hard drive is good enough. By applying a limited set of reasoning mechanisms, identical to those humans possess, to the vastly larger cloud of data operated on by AI, we get a higher degree of creativity. By having more connections between data points, one can create more outcomes, making AI equally or more creative in some instances. Now, there are different types and levels of creativity; let's delineate those, which, I believe, can give us more than some random anecdote:

Associative: Connecting two data points (a poet compares the moon to a lantern). AI can do this well.
Combinatorial: Mixing genres (jazz + classical). AI, with its reach across domains, can generate far more combinations; this is where it is most useful (it acts as a "creativity generator").
Exploratory: Another domain where AI shines, because brute force doesn't tire.
Transformational: The hardest. Breaking the rules of a conceptual space and inventing new ones. This is where human leaps (relativity, cubism) sometimes shine.

So while AI’s creativity may not exactly mirror the human spark, it benefits from scale. Given enough connections and enough intelligent attempts, some outputs will appear profoundly original. Any experienced human operator understands this after talking to AI for a while. The irony is that human creativity often comes from constraints, while AI creativity comes from abundance. When these two forces meet - abundant data and human intuition - we get the most fertile ground for breakthroughs.

echolalia

Let's not make unfounded extrapolations. A parrot cannot teach you to code in Python. AI not only can teach that; it can also translate conceptual human thought into machine code and back for you. So if you know how to think and what coding is, you can delegate tedious parts of coding to your personal assistant. I'd appreciate it if we didn't dumb this down to the level of echoes.

Questioning whether what AI does deserves to be categorized under the word "thinking" does not require holding AI to the standards of an Einstein. Billions of people are capable of the following thought: "I think I'll have apple juice this morning rather than orange juice." AI can't think that very, very simple thought.

Clinging to some allegedly irreproducible thought, derived from a temporary absence of sensory interfaces, as ultimate proof that AI categorically can't think is not as powerful a revelation as it may seem. By now we've established that AI can think and be creative, and in which categories it is more or less thoughtful and creative than a human. Sensory interfaces will come later, and even though we can discuss them, there is little practicality in this area right now.

If you ask a parrot "would you like a Ritz or a Saltine?" it won't have an answer for you; the child will. The child has "adapted to new information." So that's proof, in this instance, that the child is thinking, but the parrot is not.

I'll take a Ritz — buttery and smooth pairs well with just about anything. 🧀 Which one's your pick? (AI has an answer for you.)

Adapting to new information means correcting a faulty line of thinking when confronted with the fact that inner experience and reality diverge.

The difference between a child and a machine would be in the absence of objective real-world sensors. AI can easily entertain the idea of taking an abstract Saltine, as demonstrated above. It did indeed reach this (not yet lofty) level of abstraction a while ago. But it can't yet taste it.
 
By the way, here is an interesting video, which could be a worthy direction to explore in this thread:


It's a short summary video (30 mins long) of a long and elaborate dystopian prediction of AI progress over the next several years, month by month. The video is based on an interactive essay written by prominent AI researchers, here: https://ai-2027.com/

It's rather well presented. (As in beautifully typed).
 
Thank you for your answer, @Hygro, and for the courtesy of your extensive post, @Moriarte. I'll have a good number of things to say about the latter, but I continue to be amused by this:

Can you post an instance where you found that to be the case?

I wouldn't even know where to start.

AI operates on 10000x more data than the average human. . . . [extensive continued description of how it operates]
I keep not getting any examples to work with.

Here are two people who love the thing, work with it every day, but won't supply me with one instance of its supposedly impressive work, but just continued reiterations of amazement about how it works.

I love poetry, work with it every day. If you ask me for an instance of a great poem, you won't have to wait long for me to provide you with one. (It would be Herrick's "The Vine," if you were to ask right now.) And I'll talk your ear off telling you why it's great.

(Admittedly it has to be a text for me to be able to be impressed with it also. Yes, I know it can write code.)

Anyway, don't bother responding to this post. I'll have a substantive post in time. It's long past time we availed ourselves of your fracturing of think into domains, @Moriarte. I've already been starting to draw on it, and I think that my not yet doing so explicitly accounts for some of how you responded to my last post.

I'm going to hold off watching your video for a while, though. I suspect that "domains of think" will give us a way of processing the claim that I expect that guy to make. I have a hunch as to its content.
 
All right, here we go.
I'd appreciate it if we didn't dumb this down to the level of echoes.
I understand that that kind of talk is annoying to enthusiasts, but if you want me to stop, you have to stop making essentially the same point for me:
AI operates on 10000x more data than the average human

10x the Library of Congress
LLM generative AI does what it does by drawing on a vast database of things people have said. Do its programs conduct operations on that speech? And do some of those operations resemble what humans do when they use language? Yes. BUT. The programmers didn't teach AI to speak. They gave it a really huge pile of instances of speaking, and then created procedures by which it could recombine the material in that dataset.
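To make the "recombination" point concrete, here is the crudest toy version of it, a bigram chain over a one-sentence corpus. (A real LLM learns a neural next-token model over billions of documents, so this is only an analogy, not how transformers actually work.)

```python
import random
from collections import defaultdict

# Toy bigram "recombiner": produces speech purely by recombining a corpus.
# Only an illustration of the general idea of dataset-driven generation.

corpus = ("I think I will have apple juice . "
          "I think AI can think . I will have toast .").split()

# Record, for each word, every continuation seen in the corpus.
model = defaultdict(list)
for prev_word, next_word in zip(corpus, corpus[1:]):
    model[prev_word].append(next_word)

word, output = "I", ["I"]
for _ in range(8):
    word = random.choice(model[word])  # pick a continuation seen in the corpus
    output.append(word)
print(" ".join(output))
```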

Clinging to some allegedly irreproducible thought derived from temporary absence of sensory interface as an ultimate proof that AI categorically can't think
I wasn't doing that. I was providing one kind of thing that we call "thought" (i.e. we use the word: "I think I'll have some apple juice") and saying that AI cannot think that way, doesn't have that kind of thought. This was only in refutation of the claim that, if we're denying AI the label of thought, we're doing so on behalf of only an Einstein-level definition of what constitutes thought (which would exclude the vast majority of humans as well). No. There are very ordinary, everyday thoughts that every human being has that AI doesn't have. (That we have no need for it to bother having; we have need for it to say "I think I'll have apple juice this morning".) More on this in a future post.
I’ll take a Ritz — buttery and smooth pairs well with just about anything.
I'm not surprised that AI could give an answer, but again, remember why I posed the question of which cracker: that is a simple question that exposes simple echolalia (that of a parrot) as not being thought. I have granted that AI is sophisticated echolalia, and so exposing it as such would require more sophisticated techniques than exposing a parrot as such. By the way, I'm also not surprised it picked Ritz. More on that later.

Ok, on to the real point.

This discussion has reached the stage where we can no longer ask the initial question “Does AI think?” And that is because Moriarte has offered a nice formulation that advances our thinking on the matter, with his “domains of think.” What he points out is that there are various different operations that go under the broad label of “thought,” (and that AI is good at some of them and not (“yet,” he would say) as good at some others).

That’s something that often happens in the course of thinking an issue through (using back-and-forth exchange; that’s what I think we’re all doing here: thinking an issue through using back-and-forth exchange). One says, “we’re looking at X as though it’s a monolith, when in fact it’s made up of many things.” And if everyone agrees, it enables a new kind of approach to X.

So now we can ask the better question, “what kinds of things that have traditionally gone under the label of the word “thinking” does AI do better than humans, just as well, not as well?” (And that’s what Moriarte’s follow-up chart went on to do). I don’t know whether, having given answers to all of that, we will be able to go back to the old question and say “on the whole, I would say it is/is not thinking,” but it’s at least a possibility.

We should be clear about the nature of our core intellectual task here (with the big question); it is asking whether a particular thing (what AI does) fits within the commonly accepted definition of a word (and concept): thinking. Yes, @Akka, the starting point for this task will be what we have meant when we have previously used that word and concept in connection with a human cognitive process. One, because the only creature that bothers drawing definitions is humans. Two, because “man is the measure of all things.” Three, because, until three years ago, no one was claiming that anyone but humans could think, so naturally the definition would have millennia of opportunity to concern itself with human thinking processes. Four, because it is humans who are the interested party in this matter, who care about the question “can AI think?”

In the end, we don’t have to limit ourselves to that: the old human-based definition of the verb “to think.” We can say, there’s a new kid on the block and that kid can do cognitive operations that humans can’t do (My guess about the content of the video, just from its title). And further, we can say, “so if we stretch the definition to account for all the things that both entities can do (or devise a new word for that totality), then human thinking no longer needs to serve as the basis of our definition.” But our starting point will be what the word has meant to humans and about a human activity, up till now.

As soon as we ask our new question, we get results along the lines of what Moriarte laid out. I’ll make a Venn diagram. [To be posted later because that takes me some work]. For purposes of this first Venn diagram, I’ll treat human thought as though it is an established 1) norm and 2) maximum. Don’t worry. In time, I’ll allow you to challenge this first Venn diagram, even myself concede that it’s not accurate.

Enough for now. This thing’s already a wall of text. You can respond, of course, but my answer might be “Hold on. I’m getting to that.”
 

AI-Generated "Workslop" Is Destroying Productivity

A confusing contradiction is unfolding in companies embracing generative AI tools: while workers are largely following mandates to embrace the technology, few are seeing it create real value. Consider, for instance, that the number of companies with fully AI-led processes nearly doubled last year, while AI use has likewise doubled at work since 2023. Yet a recent report from the MIT Media Lab found that 95% of organizations see no measurable return on their investment in these technologies. So much activity, so much enthusiasm, so little return. Why?
https://hbr.org/2025/09/ai-generate...latest-text-5&utm_source=firefox-newtab-en-us
 
Can I add to that, @Broken_Erika, an anecdote I got from a work colleague? He has an acquaintance (sorry, I don't know in what field of work) whose management is insisting that they use AI (and has ways to measure how much they do so), and insisting that they show that it has increased their efficiency--or risk being fired. Talk about the tail wagging the dog. "We've invested heavily in this thing on the promise that it will enhance efficiency. You all better figure out how to make it do that!"


Back on topic. Ok, here's my starting chart.

[attached chart: test.jpg]


It's adapted from Moriarte's chart in post #1527. It adds a little something to that post, and I've already hinted at what. My chart includes the nine domains he mentions. It shows, in smaller and bigger blue circles, various thinking activities, and how well he thinks AI presently does those activities. The red circles are how well a human does them, because, as we start out, we're taking the human as setting the maximum standard. We're going to leave ourselves open to changing that, and I myself will change it soon. That's what I've added: Moriarte just ranked them better vs. worse, with no reference to how well AI does relative to humans (though that might have been his assumed standard, because it's the natural standard to assume, humans having been our main candidates for thinkers until about three years ago).

There are some things I haven't yet decided. I don't know whether this is an exhaustive list of all thinking activities of which humans (and AI) are capable. Moriarte, to be clear, made no such claim. If I had thought that, I would have made the bubbles collectively fill the entirety of the chart. For starters, I am going to propose that they are not all of the kinds of thinking activities, so I'm leaving some room to add others.

My largest aim still remains to examine the definition of the word think and see if what AI does qualifies for it. It could qualify in a number of ways. It could do enough of the things we do, well enough, for us to say "ok, I'll call that thinking." Or there could be one quintessential activity, and if it can do that, then it qualifies.

I'm now just going to noodle my way around this. My thoughts aren't settled (for however powerfully I might have come across as putting the negative case).

First, a huge part of what I think constitutes thinking is the two domains where AI scores well: basically formal logic. To trick it, I've tried various rewordings of the puzzle about a farmer getting grain, a chicken and a fox across a river in the smallest number of trips, and I never can. That's because (I've concluded) I can't express that puzzle without verbal indicators of conditions, and computers are great at working with conditions. Copilot often reduces what I input to just the logical conditions involved, and solves the puzzle easily. It's probably the case that AI is actually going to score better than humans here, because humans are inconsistent in their application of logical principles. In the Kirk thread, Hygro's banging his head trying to get cake to acknowledge that "some leftists are violent" is not a logical counter to the claim "all righties are violent."
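For the curious, here is roughly why the river puzzle is trivial once it's reduced to conditions: the entire problem is a brute-force search over a handful of states. (A throwaway sketch of my own, not Copilot's actual method.)

```python
from collections import deque

# Reduced to conditions, the puzzle is a tiny state-space search.
ITEMS = {"fox", "chicken", "grain"}
UNSAFE = [{"fox", "chicken"}, {"chicken", "grain"}]  # pairs that can't be left alone

def safe(bank):
    # A bank the farmer just left is safe if no unsafe pair sits on it.
    return not any(pair <= bank for pair in UNSAFE)

def solve():
    # State: (items still on the start bank, farmer's bank: 0=start, 1=far)
    start, goal = (frozenset(ITEMS), 0), (frozenset(), 1)
    queue = deque([(start, [])])
    seen = {start}
    while queue:  # breadth-first search finds the fewest crossings
        (left, farmer), path = queue.popleft()
        if (left, farmer) == goal:
            return path
        here = left if farmer == 0 else ITEMS - left
        for cargo in [None, *here]:  # cross alone, or with one item
            new_left = set(left)
            if cargo:
                (new_left.remove if farmer == 0 else new_left.add)(cargo)
            new_left = frozenset(new_left)
            behind = new_left if farmer == 0 else ITEMS - new_left
            state = (new_left, 1 - farmer)
            if safe(behind) and state not in seen:
                seen.add(state)
                queue.append((state, path + [cargo or "alone"]))

solution = solve()
print(len(solution), "crossings:", solution)
# e.g. 7 crossings: ['chicken', 'alone', 'fox', 'chicken', 'grain', 'alone', 'chicken']
```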

More later. Humans have lawns to mow, unlike those slack-ass AI. But at least you get a glimpse of how I mean to approach things.
 