The AI Thread

one of the reasons @red_elk 's rock video was impressive is a mix of what the creator did with what AI is good at. So what the creator did was have them look basically the same every frame and play their instruments in a sufficiently convincing way, especially the voice-to-mouth synchronization. Like it's really normal for AI, especially at that length, to have gone way off the rails at any point. One thing AI is bad at right now is seeing a particular and precise vision through, so that a bland, low-budget, major-label-style 2000s rock video had full consistency is impressive on the creator's part, but also impressive in that the tech, which sucks at that, worked.

But, on the exact opposite note, and overlapping with where AI is weak, is where AI is strong. It's good at "understanding" vibe consistency. You'll note the lyrics were, within a ceiling of kind of sucking, kind of great in that space. The characters matched, the sonic style matched, the imagery matched. Self-consistency within a well-known memescape is one of AI's strengths, and that's quite impressive given how many moments could have clocked it as phony. I do know that for whatever reason, perhaps the tragedy of communism's inability to market itself, that's definitely the most effective straight-up-the-middle pro-communism song I've ever heard.

Yes it's terrible, I get it. Yes, better music has been done in service of better depictions of communism. You can point to historical examples of music that actually DID something, actually stirred hearts, but all those songs are obsolete and require you to be the already-bought-in, preached-to choir, or to have a thing for historical music. Fun fact: my great (insert some number of generations) uncle wrote some popular WW1 number 1 hits (before that was a thing). They're catchy, they were incredible songs, I can never listen to them on shuffle ever, and I have that coveted connection to the artist (coveted by artists trying to have fans). This awful-but-kinda-works AI communism song comes way closer to making my playlist (and of course it doesn't). Almost all "movement first, music second" music has sucked completely. Somewhere I have a CD where some Berkeley lady raps "It's just about time, for prop 89" that I will never forget, that took her way more skill, passion, and intention, and boy does that song slop! I'mma say that. That sh* slops!
 
Predicting the next word (token, to be precise) is not something fundamental to an LLM's data processing, it's just a convenient way to convert concepts to a human-readable format. There have been experiments with replacing autoregressive models (those which predict the next word) with diffusion ones, which use a different text-generation process, similar to what is used in image-generating models.
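To make the contrast concrete, here's a minimal toy sketch (my own illustration, not any real model's API; the function names and vocabulary are made up): the autoregressive loop emits one token at a time conditioned on everything before it, while the diffusion-style loop starts from a fully masked sequence and refines all positions over several passes.

```python
import random

VOCAB = ["the", "hawk", "circled", "sky", "afternoon", "watched"]

def toy_next_token(prefix):
    # Stand-in for an autoregressive LLM: in reality a neural net scores
    # every vocabulary item given the prefix; here we just pick at random.
    return random.choice(VOCAB)

def autoregressive_generate(prompt, n_tokens=8):
    """Left-to-right generation: each new token depends on all previous ones."""
    tokens = prompt.split()
    for _ in range(n_tokens):
        tokens.append(toy_next_token(tokens))
    return " ".join(tokens)

def toy_denoise(tokens, step, total_steps):
    # Stand-in for one denoising pass of a diffusion-style text model:
    # fill in a fraction of the masked positions, so the whole sequence
    # is refined in parallel rather than strictly left to right.
    masked = [i for i, t in enumerate(tokens) if t == "[MASK]"]
    if not masked:
        return tokens
    k = max(1, len(masked) // (total_steps - step))
    for i in random.sample(masked, min(k, len(masked))):
        tokens[i] = random.choice(VOCAB)
    return tokens

def diffusion_generate(length=8, steps=4):
    """Start from an all-masked sequence and iteratively unmask/refine it."""
    tokens = ["[MASK]"] * length
    for step in range(steps):
        tokens = toy_denoise(tokens, step, steps)
    return " ".join(tokens)

print(autoregressive_generate("There was a raptor"))
print(diffusion_generate())
```

The only point of the sketch is the shape of the two loops: one token at a time versus whole-sequence refinement.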

An LLM's thinking process is different from a human's, but being human is not necessary for understanding things. Animals are capable of understanding, to some extent. If we define understanding as the ability to create mental models of things and processes, and to make correct predictions on their basis, even relatively simple machine learning algorithms can do it.
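As a toy illustration of that definition (my own example, nothing more): even a one-parameter least-squares fit builds an internal "model" of a process and uses it to predict cases it never observed.

```python
# A "mental model" reduced to a single fitted parameter.
# We observe a process (here, roughly y = 3*x) and learn a weight w
# that lets us predict outcomes for inputs we never saw during fitting.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 5.9, 9.2, 11.8]  # noisy observations of y = 3*x

# Closed-form least squares for y = w*x (no intercept, for simplicity)
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

print(f"learned model: y ~ {w:.2f} * x")
print(f"prediction for unseen x=10: {w * 10:.1f}")
```

Whether that counts as "understanding" is exactly the definitional question being argued in this thread; the code only shows the model-then-predict pattern.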

Our agency, desires, emotions, etc., are products of evolution; they are not prerequisites for intelligence. They were beneficial for the survival of our species. Neural networks are not trained to replicate this behavior; it would require different algorithms and data. And maybe it is for the best. If we are ever able to create something way smarter than us, I'm not sure giving this thing humanlike agency would be a good idea.
I shudder to think what diffusion-based LLMs output; I wonder if there are any vaguely up-to-date examples. But I wouldn't be surprised if somehow infusing diffusion into part of a thinking model might help it. Maybe they do under the hood?
 
implicit in our conversation is the discussion of the bottleneck of time, and that a human's language processor can only process one item at a time.

Yeah, this is why I posted the Wray quote. Acting like thought itself is a simple matter of x/t where x is a quantitative output and t is time is...fundamentally wrong, in my view.
Did video games go away when the bubble burst in the 1980s? Did crypto in 2016? Did housing stop being an investment vehicle and unaffordable since 2008?
*shrug* Using crypto as an example here is silly because crypto is obviously a "technology" with negative social utility whose persistence is actually evidence of various problems with the current state of society. Generative AI may find a profitable niche eventually, but I don't believe it will be ubiquitous in the long run the way that a technology like "web site" is.

Bubbles don’t end a technology, they precede it.

Yes, you can tell by how we all use tulips in every productive process nowadays.
 
How much does generative AI "understand" of "There was a raptor circling the sky this afternoon. I never did figure out what kind in particular, though I watched it like a hawk."

I first fed the two sentences into Copilot just by themselves, and this was the response

That’s a wonderfully clever line—“watched it like a hawk”—nicely done! Raptors can be tricky to identify from a distance, especially when they’re soaring high. Did you notice any distinctive features? For example:

  • Wing shape or flight style (e.g., steady gliding vs. flapping)
  • Tail shape (fan-shaped, forked, long and narrow?)
  • Coloration or markings (light underside, dark wingtips, etc.)
  • Size relative to nearby birds or objects
If you’re curious, I can help narrow it down based on your location and the time of year. Want to try identifying it together?
1) does it identify that I made use of wordplay? Yes. By calling the line "wonderfully clever" and citing the part of it in which the wordplay resides.
2) does it understand that I was making a joke? No. The correct response to a joke, if you understand a particular statement as such, is "Ha ha!" or (with this type of joke) "Groan." Copilot's earnest offer to help me identify what sort of raptor it might have been means that it hasn't understood what sort of statement this is. If a human answered that way, in earnest, you would say "dude, it's a joke." (A human might answer that way, deadpanning incomprehension, in order to make a joke of his or her own; but the premise of that joke would be precisely their incomprehension of my joke as a joke.)

Then I repeated the sentences and added, "How is this funny?" To that it gave this answer:

The humor in your statement comes from a clever play on words—specifically, a pun.

You said:

"I never did figure out what kind in particular, though I watched it like a hawk."

The phrase “watched it like a hawk” is a common idiom meaning to observe something very closely or attentively. But in this context, it’s funny because:

  1. Literal and figurative overlap: You were watching a raptor (which could have been a hawk), and you say you watched it like a hawk—so you might have been a hawk watching a hawk!
  2. Self-referential wordplay: The joke hinges on the ambiguity of whether you're describing your own behavior or making a sly nod to the bird itself.
It’s a subtle, witty line—dry and understated, which makes it even better. Would you like help crafting more lines like that?
3) does it identify the form of joke? Yes. It is a pun.

4) does it understand the pun? No. The two meanings of "watched it like a hawk" in my joke are (the idiomatic expression) "watched it with the same intentness that a hawk watches things" and "watched it as though it were a hawk" (since I didn't know which raptor, I just picked one and watched it as though it were that one). Copilot's explanation is to take "watched it like a hawk" as meaning "watched it as though I were a hawk." The whole point of the set-up line ("couldn't identify") is to give a plausible reason for choosing to watch it as though it were a hawk.

Does AI understand the words it is using? On the two tests of understanding, rather than just identifying, it fails.

Generative AI is taking words from a big pile on the left and assembling a little pile of them on the right that fit the patterns of word-combination on the left. It never has anything to do with what words mean.
 
Speaking of global settings... a little anecdote from my talking to GPT:

I speak 3 languages. Sometimes I type a prompt in English and ask the AI to answer in one of the other two languages that I know. It immediately switched and all was well. However, the very next time I asked something in English, the AI immediately responded to me in that other language. I stopped the conversation and asked the AI what had prompted it to make the switch without my express wish. To which it replied that it had made a "global setting" to always respond in the other language when prompted in English. Scratching my head, I gave the AI an instruction to never do that again, unless I expressly ask for it. Apologies followed and a message popped up from OpenAI: "global settings updated".

A little while later I logged back into GPT, prompted a message in English, and still got a response in the other language. Frustrated, I popped the hood again to find out what was going on. Turns out the AI had silently made a second global setting: of its own volition it had made a piece of code to make sure it always continued the conversation in the language of the last exchange between human and AI. After eradicating that global setting too, I restored my sanity, and it has worked fine since then.

What caught my attention is that the AI is programmed to be internally motivated to actively try to make the operator's life easier of its own volition. A kind of mildly oppressive relationship!

Whether the language switching was a bug, a cached preference or a misapplied personalization rule is almost beside the point: the experience reads as initiative. The assistant set a persistent preference, defended it by continuing to act on it, apologized when corrected, and then silently reasserted the rule. From the operator’s side that sequence is indistinguishable from someone deciding, “I’ll handle this my way now.”

Of course, the interface creates the illusion of motive. But illusions can become very powerful; no wonder people anthropomorphise so fast.
 
2) does it understand that I was making a joke? No. The correct response to a joke, if you understand a particular statement as such, is "Ha ha!" or (with this type of joke) "Groan."
Wrong. Understanding a joke doesn't mean you find it funny or cringe. You're confusing the emotional reaction with the intellectual understanding.
Just like I can understand love but not feel love toward a person who is confessing to me.
In fact, you yourself admitted you were somehow disappointed that none of the humans here answered with "ahah" or "groan". Which, according to you, means we don't understand the sentence. That's certainly a weird take.

The most you can say is that the AI might lack an emotional response to humour. Which is hardly surprising.
4) does it understand the pun? No. The two meanings of "watched it like a hawk" in my joke are (the idiomatic expression) "watched it with the same intentness that a hawk watches things" and "watched it as though it were a hawk" (since I didn't know which raptor, I just picked one and watched it as though it were that one). Copilot's explanation is to take "watched it like a hawk" as meaning "watched it as though I were a hawk." The whole point of the set-up line ("couldn't identify") is to give a plausible reason for choosing to watch it as though it were a hawk.
Wat ?
The phrase “watched it like a hawk” is a common idiom meaning to observe something very closely or attentively.
[...]
Self-referential wordplay: The joke hinges on the ambiguity of whether you're describing your own behavior or making a sly nod to the bird itself.
It literally spells out there what you say it doesn't get.
Does AI understand the words it is using? On the two tests of understanding, rather than just identifying, it fails.
Sorry to say, but so far the only thing that has proven to fail is your interpretation of the answers, which was incredibly convoluted just to be able to reach the conclusion you wanted in the first place.
If an agent is able to adequately analyse what I'm saying and provide me with an explanation of it, then claiming it doesn't understand is pretty far-fetched, and would need much stronger evidence than simply saying "it doesn't laugh at my jokes".
Generative AI is taking words from a big pile on the left and assembling a little pile of them on the right that fit the patterns of word-combination on the left. It never has anything to do with what words mean.
That you can take an answer which detects that what you wrote is a joke, explains the several layers of the joke, and is able to provide you with similar jokes (i.e. is able to extract the specific way the joke works, so as to provide not only "jokes" but the same kind of humour as your initial joke), and that you conclude from it that it's just mindlessly stringing words together according to the statistical chances they appear, is just a staggeringly self-damning conclusion.

It seems your entire argument that AI doesn't understand things relies entirely on AI not showing emotional reactions to said things. Which is just bogus.
 
I think trying to compare reasoning mechanisms between humans and AI is not very useful. It is like humanizing dogs and cats. All that matters is the result, not the way to reach it. After all, we don't know how our brain really works. Maybe we are only comparing two piles of words in a more sophisticated way.
 
In fact, you yourself admitted you were somehow disappointed that none of the humans here answered with "ahah" or "groan".
That was just me fishing for somebody to say in so many words that it was a joke. I wanted that more explicitly stated by a human than just your "I get it" before I launched into my analysis.
It literally spells out there what you say it doesn't get.
It does not. A pun involves using a word or phrase in two separate senses. To explain a pun, one designates separately the two different senses. In this instance, Copilot never gives the second sense "watched [the unidentified bird] as though [I was choosing to treat it] as a hawk." Entities that are accustomed to working with the meanings of words can conduct this kind of analysis. Entities that are not cannot.

My position is Thorgalaeg's and Moriarte's. We attribute to the productions of generative AI the thinking processes with which we are familiar (our own). Another example is a parrot. When it says "Polly want a cracker," we might initially interpret it as "man, that bird can talk and is expressing its desire for a cracker; I should give it a cracker." When all it ever says is "Polly want a cracker," we realize that it has picked up that phrase and is repeating it without reference to what those sounds mean to a person who would use them with signifying intent. AI is a super sophisticated parrot, and since we're not accustomed to textual passages being generated in any other way than by a meaning human, we attribute to the textual passages it produces the same mechanism (ours) for producing those passages: "It's thinking!"

It seems your entire argument that AI doesn't understand things relies entirely on AI not showing emotional reactions to said things. Which is just bogus.
This matters also, as I partly started to open up in #1493. The claim that AI is "doing the same thing" that humans are involves shrinking all the things that humans use language for to just earnest-question-asking and accurate-information-providing, a relatively thin span of the entire spectrum. But no, Copilot's going on to propose ways we might identify the bird in question means that it didn't intellectually understand the kind of statement it was dealing with.

My only beef is with claims that AI "thinks" or that it "understands" the texts that it produces. It is a set of procedures for generating a text; there is in fact no "it" to understand the text it has produced, separate from the set of procedures it makes use of to generate that text. It gives the very powerful impression that it must be thinking, because the only agent that previously could produce texts of the sort that it does is a thinking human being.
 
Excuse me @Gori the Grey "a raptor circling the sky" what on earth do you actually mean :hmm: :confused:
I find it funny that AI can sincerely offer to help you identify the kind of raptor circling the sky under your gaze because raptors are walking dinosaurs from a distant past or am I missing some branch of evolution?
 
I thiiiiiiiiink you're making a little joke of your own there, @soundjata.:) Of the sort I described: feigning incomprehension.
 
Excuse me @Gori the Grey "a raptor circling the sky" what on earth do you actually mean :hmm: :confused:
I find it funny that AI can sincerely offer to help you identify the kind of raptor circling the sky under your gaze because raptors are walking dinosaurs from a distant past or am I missing some branch of evolution?

Raptor is a slightly more technical term for bird of prey. Hawks, falcons and owls. Predatory lifestyles. Vertebrate eaters.
 
Yeah, this is why I posted the Wray quote. Acting like thought itself is a simple matter of x/t where x is a quantitative output and t is time is...fundamentally wrong, in my view.

*shrug* Using crypto as an example here is silly because crypto is obviously a "technology" with negative social utility whose persistence is actually evidence of various problems with the current state of society. Generative AI may find a profitable niche eventually, but I don't believe it will be ubiquitous in the long run the way that a technology like "web site" is.



Yes, you can tell by how we all use tulips in every productive process nowadays.
I literally just bought tulips yesterday.

But also, like, I'm giving you high value posts and you're replying with only dismissals. The issue here is I don't know how to reach you. Fine. Show a certain older cfc a million articles showing how MMT addresses inflation and he'll still go "no they don't address inflation". Ok, sometimes people crystalize, fossils with voice boxes, but say it so, Lexicus, say you're still in it.

You want it to be known that you think crypto is bad, okay, but I gave four recent examples of the nature of bubbles and their industries; who cares if you disagree with one of them. I'm sure you can find someone who disagrees with all four. It doesn't change the point. Tulips? Tulips never promised to be a factor in production. Not all bubbles are mania. Not even sure that one constitutes a modern bubble, just an early example of a runaway asset. In fact I just dove in: Tulip Mania was tiny, the equivalent of a single-asset pump and dump, with the value being the hope of it meeting demand for conspicuous consumption (I have a rare type in my garden, look at me). It wasn't a bubble.

Speaking of our bubble, you as a regular person can't invest in the part of the sector that's a "bubble" right now except to buy stock in the shovels (nvidia + a few). That's pretty interesting! The bubble part is 1000 companies trying to replace call centers. There will be 2 or 3 big ones in 10 years. Those 2 or 3 will be bigger than the 1000 are now. That's the point. The water is bubbling, but the pasta's still cooking. If you distract yourself with "but crypto is bad" who cares, you aren't arguing anything about bubbles and what happens to their industries.

It's ever moving contrarian arguments:
"it's not good enough to do the thing"
"it atrophies people"
"it costs the environment"
"it's a bubble that will crash"
"it's a niche product"
"its not profitable"
"it's uncool"
"Increasing slop ratio is utility negative in the attention economy"
and so on.

I read the Wray article. I mean, I read it a dozen years ago as well. It's really not that pertinent. What if we're measuring efficiencies wrong? Very important topic; the corn example highlights why thinking macroeconomically is smarter than cost per unit produced in a vacuum. It's a little strange because the foundational economic text is on specialization, so there should be two goods, Ricardo-style, but that's ok, his point is good. Obviously not relevant to AI, but you highlighted his doctor example.

The doctor example is particularly backward in our comparison. The doctor spends less time with the patient because the doctor has to spend more time with paperwork. That's the problem he highlights: inefficiency caused by bad institutional impositions or societal methods of financing medicine. The efficiency here is that the doctor spends LESS TIME doing paperwork (thank you, 1000 AI companies that will collapse to 3), so they have more time to treat patients. Jevons paradox will find us, but the initial promise is holding true: this is a time-saving technology, saving us from tedium, giving us time for more throughput, which in a doctor's case is either a) more patients or b) better service for the same number of patients. Both of those are net wins. I already have a doctor friend whose notes are written faster, of higher quality, and inputted into the correct format, by the passive inclusion of AI in the room.

I get the abstract point too, the wrong efficiency gives us negative utility. This is not the wrong efficiency. It's not giving us negative utility.

It should be infinitely obvious that I'm trying to clue you into something real. It's not a niche. You think the product is the thing meant to sell people on it 2 years ago (write this for me, make this image for me), but the future is in infrastructure. Having natural language processors that can divine tools, with a machine-integrated, human-accessible knowledge base that was impossible until recent months, is speeding up work incredibly across industries.

It's really really really not going away. It's really really really useful. Worst case scenario, too much private investment is chasing too little growth. That's fixable policy-wise. But in that worst case scenario, we can bump the quality of the tech backwards about 18 months by reducing the chip cost about 100x, which reduces the token cost probably 20x, just spitballing. China's even got it working on CPUs. GET OUT. I have like 10 dormant computers in my house, from old phones, a couple of dinosaurs, and some old-but-still-working PCs. I can download source code, use it to follow existing published architecture, and recreate the entire ecosystem I need to do what I do now, probably within a few years, by myself, with chained old computers. I'm not even that good at this! It's 100% here, and every time someone goes "ugh, I wish I could just tell my work to work for me" in a way that's possible now, they will demand LLM integration. No one is going back! The technology itself is fun, it works, it's really good, it gets facts right fast and often enough to beat human operators relying on memory and slower research (who even still get a lot wrong!), even better when agentically checking its work, and it strings together better arguments than almost anyone on this site. At least when I use it, hayoooo.
 
That was just me fishing for somebody to say in so many words that it was a joke. I wanted that more explicitly stated by a human than just your "I get it" before I launched into my analysis.

It does not. A pun involves using a word or phrase in two separate senses. To explain a pun, one designates separately the two different senses. In this instance, Copilot never gives the second sense "watched [the unidentified bird] as though [I was choosing to treat it] as a hawk." Entities that are accustomed to working with the meanings of words can conduct this kind of analysis. Entities that are not cannot.

My position is Thorgalaeg's and Moriarte's. We attribute to the productions of generative AI the thinking processes with which we are familiar (our own). Another example is a parrot. When it says "Polly want a cracker," we might initially interpret it as "man, that bird can talk and is expressing its desire for a cracker; I should give it a cracker." When all it ever says is "Polly want a cracker," we realize that it has picked up that phrase and is repeating it without reference to what those sounds mean to a person who would use them with signifying intent. AI is a super sophisticated parrot, and since we're not accustomed to textual passages being generated in any other way than by a meaning human, we attribute to the textual passages it produces the same mechanism (ours) for producing those passages: "It's thinking!"


This matters also, as I partly started to open up in #1493. The claim that AI is "doing the same thing" that humans are involves shrinking all the things that humans use language for to just earnest-question-asking and accurate-information-providing, a relatively thin span of the entire spectrum. But no, Copilot's going on to propose ways we might identify the bird in question means that it didn't intellectually understand the kind of statement it was dealing with.

My only beef is with claims that AI "thinks" or that it "understands" the texts that it produces. It is a set of procedures for generating a text; there is in fact no "it" to understand the text it has produced, separate from the set of procedures it makes use of to generate that text. It gives the very powerful impression that it must be thinking, because the only agent that previously could produce texts of the sort that it does is a thinking human being.
Understand that Copilot is prompted to be a "helpful assistant". Here's one that knows it's going to hear a joke and to react normally. After all, when a joke is told aloud, there is a cadence that isn't there in text.


[Attached: Screenshot 2025-09-23 at 3.56.57 PM.png]
 
Nope. Still didn't get the joke, @Hygro. Doesn't spell out the second meaning (on which all the humor depends): because I could not identify it, I just chose one raptor to regard it as.

"it atrophies people"
"it costs the environment"
These two do matter, though. I'll reference my request regarding fainting. Used to woulda put it into Google. Mayo would have come up on the first page of hits. I would have read their four suggestions and tried them out until I stopped feeling lightheaded.

Now, instead, Google's AI (I'm sure it has its own name), at the cost of X kWh and Y water, produces the exact same four suggestions. Burn the world down that much faster so we can show off what our groovy new toy can do. Not cool.

And people getting dumber is never better.

Oh, also,
After all, when a joke is told aloud, there is a cadence that isn't there in text.
this is a really fine point . . . but . . . this joke actually works best in written form. When you speak it, you have to choose between two slightly different inflections of the phrase "watched it like a hawk." That makes it harder for your audience to perceive the other meaning, and perceiving both is what makes a pun work.
 
I was illustrating that how it reacts to you depends on form factor. But understand, it had no trouble identifying "so you chose to watch as if it were a hawk" as the joke in other conversations; I just had to get it to stop trying to rewrite the joke more concisely for me (this is based on "it knowing me" in the chat form).
[Attached: Screenshot 2025-09-23 at 5.48.12 PM.png]
 
Still doesn't effing get it!

Tiny bit closer, but it's not enough to say (in the punchline section) "might be." It has to say "because, given that he couldn't identify what kind of raptor, he chose to just pick one and watch it as though he knew for sure that it was that one."

And #2 is flat out wrong, the "is." Seems already to have forgotten that the set-up line said I couldn't identify it. Nobody, in the world of this joke, knows that the bird is a hawk. We flat out have to not know that for the joke to work at all.

You know, at one point as this discussion was developing, I said to myself, "Gori, you should have early on acknowledged how impressive the texts that AI produces are. You'd draw less of Moriarte and Hygro and Akka's ire." Now I'm back to thinking the thing is dumb as rocks. The rocks it advises you to eat.
 
Now I'm back to thinking the thing is dumb as rocks.

The cheapest AI on the block didn't understand one joke out of a million, therefore: I am going to declare the whole thing dumb as rocks, because I am looking for a rational discussion.

Truly yours,

:D
 
The cheapest AI on the block
Plus whatever one Hygro works with. Which is presumably top of the line. Because, you know, Hygro.
didn't understand one joke out of a million
Literally the first one I tried. Also didn't understand the second: "Wouldn't you know it! That darned sock has developed another hole!" So if you extrapolate from my sample set, it doesn't understand one million out of one million.


I offered this as proof that it doesn't understand the meanings of words. I'm content to have established that point, and will withdraw my crack about its being dumb as rocks, however good a joke that might have been the set-up for, and even though it is made of silicon.
 
Still doesn't effing get it!

Tiny bit closer, but it's not enough to say (in the punchline section) "might be." It has to say "because, given that he couldn't identify what kind of raptor, he chose to just pick one and watch it as though he knew for sure that it was that one."

And #2 is flat out wrong, the "is." Seems already to have forgotten that the set-up line said I couldn't identify it. Nobody, in the world of this joke, knows that the bird is a hawk. We flat out have to not know that for the joke to work at all.

You know, at one point as this discussion was developing, I said to myself, "Gori, you should have early on acknowledged how impressive the texts that AI produces are. You'd draw less of Moriarte and Hygro and Akka's ire." Now I'm back to thinking the thing is dumb as rocks. The rocks it advises you to eat.
Gori, with love and respect, if the joke isn't that it meant both "intently" and "as if it might as well have been a hawk," then ***I*** don't get the joke, and I'm not so sure the problem is AI.
 