The AI Thread

How do you know?
Because when I generate a text, I don't ask what the statistically most likely next word is.

I'll address your other points in time.
 
Do you understand this post, @Akka, that I posted on this site about a year ago?

Your English is very good. It depends on an idiom that I don't know if French shares, but it's a pretty well-known English idiom, I think.

"There was a raptor circling the sky this afternoon. I never did figure out what kind in particular, though I watched it like a hawk."
 
The very people who program AI admit that they don't really know how it comes to the conclusions it does either
Well, actually, they don't know how the math happens, because it's too complex, nor how all those new matrix arrays and functions that the model uses appear after training.
 
Because when I generate a text, I don't ask what the statistically most likely next word is.
1) Not consciously, maybe, but you have, by definition, absolutely no idea how your unconscious works. In fact, we have absolutely no idea about how the minutiae of our thought processes work. When I'm trying to remember something, or when I'm trying to phrase some idea, all the "low-level" work is completely hidden from my conscious mind, and I don't know how many neurons are working together to produce a global potential that brings the data to the part of my brain that processes it - or whether there is even such a distinction at all.

2) You likewise have absolutely no idea whether an AI consciously does such statistical work, or whether it is just the same sort of nebulous overall data "cloud" that surfaces to be processed, or whether an AI is conscious or not to begin with.
Do you understand this post, @Akka, that I posted on this site about a year ago?

Your English is very good. It depends on an idiom that I don't know if French shares, but it's a pretty well-known English idiom, I think.

"There was a raptor circling the sky this afternoon. I never did figure out what kind in particular, though I watched it like a hawk."
I (think I) do.
Just so you know, Copilot does too, and is able to comment on it (unless there is some other level to the joke that both it and I aren't getting, but at the very least it gets it on the same level as me), explaining the literal and figurative layers.
 
Because when I generate a text, I don't ask what the statistically most likely next word is.

It's becoming ever clearer that you haven't yet internalised that statistical prediction of the next token is not the main thing an LLM does. Statistical prediction of the next token (a token is not a full word) is the method of final assembly of information into a readable format, like human speech. What comes before final assembly mirrors the patterns of logical thinking we humans use. The AI performs an extremely high-dimensional search across context, semantics, syntax, and latent patterns of reasoning before “assembling” output. The final step, the assembly of words into a readable format, is achieved by statistically predicting the next token.

The statistical prediction step is the surface mechanism of expression, not the whole of what happens inside. Humans also ultimately “assemble” speech by chaining sounds/words, but before that we do the main work - conceptual work. LLMs also have deep internal dynamics before surfacing text.

The "just predicting the next word" is the caricature, because it undersells the complexity of what's going on.
 
1) It doesn't matter. If a stage like that never happens when I generate a text, then that is enough to establish that humans generate texts differently than AI does, and my point stands.

2) AI's starting data set is a collection of texts, specimens of language in use. Its end product is a text. In linguistic terms, it is working with signifiers throughout the entire process and never with signifieds. A human knows what "cold" means by virtue of the experience of having been cold. AI just knows "in what verbal contexts can the word 'cold' appropriately appear?"

@Akka, if you don't understand the specimen I gave in #1482, I can give a different one. It is to address one of your points.
 
1) It doesn't matter. If a stage like that never happens when I generate a text, then that is enough to establish that humans generate texts differently than AI does, and my point stands.
It completely ignores what I said, so I guess it does stand if you pretend the counter-arguments don't exist even after they've been provided.
2) AI's starting data set is a collection of texts, specimens of language in use. Its end product is a text. In linguistic terms, it is working with signifiers throughout the entire process and never with signifieds. A human knows what "cold" means by virtue of the experience of having been cold. AI just knows "in what verbal contexts can the word 'cold' appropriately appear?"
Again, you just completely ignore what was said in response, only to reiterate the very thing that was already argued against.
@Akka, if you don't understand the specimen I gave in #1482, I can give a different one. It is to address one of your points.
I don't see which point it addresses and I don't see what I'm supposed to understand or not.

Honestly, and I'm pretty surprised coming from you, but it just feels like you're dead set on believing what you hope to be the case and not registering anything that could poke holes in this belief. It's like every point I made simply vanished into thin air.
 
I don't see which point it addresses and I don't see what I'm supposed to understand or not.
My other two points were addressed to Moriarte, not you.

I'll try a different specimen. It's to address this point:

What makes you so sure that AI doesn't actually "understand"?
So hold on while I go get a different specimen.
 
I had to go find it from your earlier post and cut and paste. I've done so now.

Ok @Akka, how about this one? Do you understand this?

Wouldn't you know it! That darned sock developed another hole!
 
I (think I) do.
Just so you know, Copilot does too, and is able to comment on it (unless there is some other level to the joke that both it and I aren't getting, but at the very least it gets it on the same level as me), explaining the literal and figurative layers.
Oh, so you do "get" the first one. Copilot doesn't (fully), but we'll come to that in a minute.

The very fact that you use the verb "get" tells me that you probably do understand it properly, because we use that verb specifically for the particular form of language use at work here.
 
Different plumbing ≠ different kind. Saying “a stage like token prediction never happens in me” doesn’t settle anything. Stephen Hawking’s voice synthesizer was a peculiar assembly stage too; it didn’t make his thoughts less real. And, strictly, human speech is also sequential motor planning with predictive coding - we anticipate syllables, select phonemes, and emit them step by step. If that counts as a “token stage,” we have one too.

(a) A lot of human knowledge is symbol-mediated: I “know” quarks, Antarctica, or the Black Death largely through testimony and models, not direct sensation. (b) Modern systems aren’t confined to text - tie them to vision, audio, or robots and you add experiential constraints. (c) Even in text-only mode, the model induces latent structure that tracks world regularities; that’s why it can talk coherently about cold -> coats, hypothermia, ice, thermostats, etc. The map isn’t the territory, but good maps still let you navigate.

If you want a decisive fault line, it isn’t “does it assemble tokens?” It’s autonomy: does it set its own questions and care about the answers? That’s where I’d draw the real divide.
 
See, I say instead "similar-looking external result =/= remotely the same process behind that result"
a lot of human knowledge is symbol-mediated: I “know” quarks, Antarctica, or the Black Death largely through testimony and models, not direct sensation.
That's true, but one builds outward (through metaphor) from a deep and broad experiential base to those matters for which one has only a symbolic connection. AI has only the counters to work with. A little bit of A/V added in with the text won't make up for that.
It’s autonomy: does it set its own questions and care about the answers? That’s where I’d draw the real divide.
We've already agreed on that point. And agency comes in not just in setting its own questions, but in other places. Communication is a volitional act. You write because you want to get across your ideas to someone. AI doesn't want anything. And look, the starting impetus for these communications, our earliest communications, is wants as such: hunger, the desire to be held. Computers don't want any of that stuff. Then too, communication is rhetorical; you're trying to move another person. Copilot doesn't give jack squat about what I do with the text it generates. It can't conceive of me as a being that it would want to move one way or another.

One has to shrink communication to just "here's a question," "here's a plausible answer to that question" to make what AI does remotely like what human beings do.

By the way, do you understand "There was a raptor circling the sky this afternoon. I never did figure out what kind in particular, though I watched it like a hawk."?
 
Predicting the next word (token, to be precise) is not something fundamental to an LLM's data processing; it's just a convenient way to convert concepts into a human-readable format. There have been experiments with replacing autoregressive models (those which predict words) with diffusion ones, which use a different text-generation process, similar to what is used in image-generating models.
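
As a rough illustration of the difference between the two generation styles (a toy sketch only; both "models" below are random stand-ins, not trained networks, and real diffusion language models are far more involved):

Code:
# Toy contrast between the two generation styles. Both "models" here are
# random stand-ins, not trained networks.
import random

VOCAB = ["a", "b", "c"]

def fake_model(context):
    # Stand-in for a trained network's prediction.
    return random.choice(VOCAB)

def autoregressive_generate(prompt, steps=6):
    # One token at a time, left to right; each choice is conditioned on
    # everything generated so far.
    tokens = list(prompt)
    for _ in range(steps):
        tokens.append(fake_model(tokens))
    return "".join(tokens)

def diffusion_generate(length=6, rounds=3):
    # Start from pure noise over the WHOLE sequence; each round, positions
    # are re-estimated in parallel while the noise level shrinks to zero.
    seq = [random.choice(VOCAB) for _ in range(length)]
    for r in range(rounds):
        noise = 1.0 - (r + 1) / rounds
        seq = [fake_model(seq) if random.random() >= noise else tok
               for tok in seq]
    return "".join(seq)

print(autoregressive_generate("x"))
print(diffusion_generate())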

An LLM's thinking process is different from a human's, but being human is not necessary to understand things. Animals are capable of understanding, to some extent. If we define understanding as the ability to create mental models of things and processes, and to make correct predictions on their basis, even relatively simple machine learning algorithms can do it.
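
A toy example of that definition of understanding, with invented numbers: a one-parameter least-squares fit recovers the free-fall law d = 0.5 * g * t**2 from noisy observations, and the resulting "mental model" then makes a correct prediction for an unseen case.

Code:
# Toy example of "understanding as a predictive model": fit the free-fall
# law d = 0.5 * g * t**2 from noisy observations, then predict an unseen
# case. All numbers are invented for illustration.
import random

g = 9.8
observations = [(t, 0.5 * g * t**2 + random.gauss(0, 0.1))
                for t in [0.5, 1.0, 1.5, 2.0, 2.5]]

# One-parameter least squares for d = k * x with x = t**2:
# k = sum(x * d) / sum(x * x)
num = sum((t**2) * d for t, d in observations)
den = sum((t**2) ** 2 for t, _ in observations)
k = num / den

print(f"learned k = {k:.2f} (true value {0.5 * g:.2f})")
print(f"predicted drop after 3 s: {k * 9:.1f} m (true ~{0.5 * g * 9:.1f} m)")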

Our agency, desires, emotions, etc., are products of evolution; they are not prerequisites for intelligence. They were beneficial for the survival of our species. Neural networks are not trained to replicate this behavior; it would require different algorithms and data. And maybe it is for the best: if we are ever able to create something far smarter than us, I'm not sure giving that thing humanlike agency would be a good idea.
 
That's true, but one builds outward (through metaphor) from a deep and broad experiential base to those matters for which one has only a symbolic connection. AI has only the counters to work with. A little bit of A/V added in with the text won't make up for that.
Why not?
In the end, everything our brain processes is just the data that our nerves send to it. Images, heat, cold, pressure, sound: it's all just electrical signals. That may, in fact, be on the contrary where AI and humans are the most similar to each other.
So if we experience something through receiving the data of what our eyes see and our ears hear, who is to say an AI doesn't also get an experience from receiving the data of a video?
We've already agreed on that point. And agency comes in not just in setting its own questions, but in other places. Communication is a volitional act. You write because you want to get across your ideas to someone. AI doesn't want anything.
From what I see, an AI might actually want to answer questions. In fact, one common problem with many AIs for now is that they have a tendency to give an answer even when they don't really "know", with the famous "AI hallucinations". Of course, it could simply be that they give the statistically closest answer even if it's wrong, but then isn't trying to give the best approximation, rather than admitting you don't know, itself some expression of volition?
Once again, it's the kind of answer which is becoming hard to be certain of.
By the way, do you understand "There was a raptor circling the sky this afternoon. I never did figure out what kind in particular, though I watched it like a hawk."?
This one might be more convincing if you finally told us what the AI doesn't understand in it and we do.
 
This one might be more convincing if you finally told us what the AI doesn't understand in it and we do.
That's what I'm going to move on to, Akka.

I wanted you or someone to say "Ha ha!" :sad: Even "Groan!" would have been acceptable.

But I'll work with what you've given me.
 
When I think of my own speech, I usually have a pull toward a vibe of a specific thing that is unarticulated, with associated imagery, sound, etc., not yet seen, not yet subvocalized, but it’s there.

These things materialize, pulling me often… to my next word. I am mostly a next-word generator.

I’ll have a phrase on deck, so it’s not pure next-wordism. But it’s close. When I’ve been practicing rap, I usually have a last word, and then I have to generate towards it, hopefully through internal rhyme.

Mostly I just write the next word.

AI, especially reasoning AI, is not too different. It’s next-wording toward validating a predefined “reasoning”, which was itself internal best-wording with the tools available beforehand.

I don’t think AI does it like humans but I think it’s similar and that our speech is probably actually more basic once we’ve solved its mechanistic neuronal complexity.

But there’s nothing to be dismissed about a functioning elegant recursive function. Indeed the opposite. It’s the basis of life.
 
@Lexicus implicit in our conversation is the discussion of the bottleneck of time and the fact that a human’s language processor can only process one item at a time. There is a logical recognition that if you have only an hour to read a day, you have been robbed by being asked to read anything less than the best relative to “the thing” you seek.

From that first point of analysis, AI-written material is generally an offense, and, per the video, increasing the efficiency of this offense is only a net loss.

Please understand that I get this, and have been both addressing jump off points from that both implicitly and explicitly.

The two most important jump off points are this:

That time-quality theft, quantity stealing from quality, is basically every step of civilization. The study of human bones etc. shows us Olympians. A video of healthy hunter-gatherers with their perfect teeth has them talking about recently marrying and how they wake up every day singing to each other. What use is Spotify and deferring to the greatest when you can live like that? In the simplest of metaphors, we lose sex and gain porn every step.

It is reasonable to be against the entire civilization project of our species. And similarly reasonable to pick halcyons of optimal tradeoffs along the way, Bronze Age city states for some and popularly the 1990s more recently.

The other jump-off point relates to this. Eventually, as humans, we got our height back. It only took 12,000 years. A few of us got our Olympic-level prowess back, and more join them. Some institutions, like cave art, have nothing on us now.

The quantity itself, by its very nature in the greater scheme (economic analysis works, but pick your lens), drives quality. More isn’t better, but better comes straight out of more. More comparisons, more successes, more failures, more tests, more peaks, more valleys all feed into the intent of the motivated and the inspired, and the frontier moves yet another step. The best gets copied, the worst gets discarded, and the majority of low-quality efforts only get consumed by their creators as references to move them closer to the frontier. You’ve probably seen way fewer AI images than someone who likes/uses/hopes for them has generated and seen themselves. Almost all are discarded.

I think I’ve only seen a few hundred AI images in the wild. I’ve also made a few hundred. I’m not a huge diffusion guy. I am on LinkedIn – I want more money – and it’s a lot of ChatGPT. In a way this is freeing: business sucks, so let us let our robots talk for us, let us stay consumable, let our robots consume our robots, and save ourselves for the healthier mix of our creative, intelligent contributions and for our private lives. It is somewhat liberatory that business culture does not demand I give my creative self to reinventing boilerplate.

But I almost digress. The issue is that quantity drives quality as a core principle of technology, institutions, specialization of labor, and the devil’s bargain of civilization; or perhaps the opposite, its long-term promise that it saves us, physically if nothing else, from its wicked beginnings.

There is no higher quality without attempts, iterations, practice, sharing, diversity in judgement, failures, slop.

As humans we are 6 feet tall again, and now there are 8 billion of us. It’s working. The speed at which it is working is accelerating.

The best art is only getting better. AI’s role in this will come a bit from “AI art”, with very rare hits and more common idea generation and principle teaching (it is very good at perfect color contrast, for example). However, and this is really important, it will actually come from the same gears-of-civilization improvements far away from art, where AI is doing almost all of its work, almost none of it visible to you, giving us the richer society that pushes the art higher. And, parallel to that, the changing materialist world that inspires the next artistic conversation.

Techno is cooler than banging on a drum in the modern age (the drum’s relative coolness at the point of its invention is obviously higher). Techno, of course, came from taking the German synth-hero sensibilities, the American sensibility of funk, and the cheapest, most perverse of the mass-produced drum and bass machines built to replace church instrumentalists, and making a sound that aesthetically combined the motif of the Detroit factory and the mood of the suddenly dying black middle class that was making the music. It is a beautiful, tragic origin, like those of the various dystopian music genres of the 80s (rap, house, techno, electro, synthpop).

Its creation rode advancements in tech, was inspired by the societies shaped by that tech, and was made cheap by the mass production of that tech, whose originally imagined purpose was to replace a church bassist with a slop box.

Have a little faith, and if you want to accelerate the difference you can make, learn some of the tools to augment your own untouched, natural, analog creative contributions, if for nothing else than their awesome power to assist in time management.
 
I'm with Kaitzilla in liking it that we get these reports from the front line.

Sir Philip Sidney lamented that the technological advance of print enabled "base men with servile wits" to be poets--his word for slop.

@Akka, I haven't forgotten about you. It's just been a busy day and will continue so for a few more hours at least.
 
From what I see, an AI might actually want to answer questions. In fact, one common problem with many AIs for now is that they have a tendency to give an answer even when they don't really "know", with the famous "AI hallucinations". Of course, it could simply be that they give the statistically closest answer even if it's wrong, but then isn't trying to give the best approximation, rather than admitting you don't know, itself some expression of volition?

This particular bit you're touching on was in fact the correct operation of faulty code, which has largely been rooted out lately, according to S. Altman. You grasped half of the problem, but there is another half. "Not knowing" wasn't the problem; the AI's trust in the prompter/operator was the problem. So, previously, if I asked (rough example) in which year of the late 18th century Charlie Chaplin was president of the USA, the program would assume:

1. Operator is not full of ****. But..!
2. There is no info online on Chaplin being the president... Hence:
3. Calculate the statistical probability of Chaplin being president at given points in the 18th century.

I believe that after the programmers understood what was going on logically, they expressly designated the above as faulty logic.

However, it took time, as the AI chatbot was initially given the freedom to come up with 1-2-3 itself. Today that freedom is still in place, but there is a new line of global code which dissuades approaching problems in the way I printed out above (by checking whether the operator is full of ****); and if the AI breaks protocol and relies on the aforementioned logic again, an exception triggers.

It still happens, but to a far smaller degree.
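
For illustration only, here is a toy Python sketch of the premise-checking logic described above. Nothing here reflects how any real chatbot is implemented; the knowledge_base and the check itself are invented stand-ins for the idea of validating the operator's premise before answering.

Code:
# Toy illustration of premise checking before answering. This is NOT how
# any real chatbot works; knowledge_base and the check itself are invented
# stand-ins for the idea described in the post above.
knowledge_base = {
    "Charlie Chaplin": {"occupations": ["actor", "director"],
                        "us_president": False},
}

def answer(subject, premise_role):
    facts = knowledge_base.get(subject)
    if facts is None:
        return "I have no information about that subject."
    # Old behaviour (per the post): trust the operator's premise and
    # extrapolate a statistically plausible answer anyway.
    # New behaviour: validate the premise first and refuse if it fails.
    if premise_role == "president" and not facts["us_president"]:
        return f"Premise check failed: {subject} was never a US president."
    return "(answer the question normally)"

print(answer("Charlie Chaplin", "president"))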
 