The AI Thread

You keep saying “just predicting the next word” as if it were a coin flip.
I don't mean to come across as disparaging. It's not remotely like a coin flip (that's not predicting). Drawing on zillions of treatments of the same idea, its predictions are very accurate. It's the large data set that makes this so. That data set is, however, inescapably, what humans have already thought and written down. It can't go beyond that.

Yes, but every generation meets a tool that threatens to shortcut thinking - the printing press, the calculator, the internet. The danger isn’t the tool, it’s whether teachers can shift the challenge so students must go beyond what the tool hands them.
You're not wrong. The challenge is primarily to teachers. There are math teachers who think the calculator is a plus, because they focus on the reasoning process of mathematics rather than on calculation, which follows pretty routinely once you've set the problem up properly (i.e., once you've thought mathematically).
 
There's a fallacy in this: AI is only as smart as the human who supervises it. We're better off thinking of AI not as an entity separate from the human, but as a brain extension.

I always note that AI is also garbage in, garbage out.

If you just set it loose and let it gather information, it'll be like Tay Tweets: a complete disaster, thanks to trolls sabotaging it.

If you set parameters it won't necessarily be accurate. It'll just reflect the bias of the programmers.

It's best at combing through a large volume of data quickly. Like asking "What was the largest margin of victory in a Week 1 NFL game since the merger?" A human would need to comb through 55 years of results and keep a running tab. AI can come back with the answer almost instantly.
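For what it's worth, that "running tab" is also trivial to express in code once the results are in one place. A minimal sketch, assuming a hypothetical CSV of game results (the file name and column names here are made up):

```python
# Minimal sketch of the "running tab" a human would keep by hand.
# Assumes a hypothetical CSV with columns: season, week,
# home_team, home_score, away_team, away_score.
import pandas as pd

games = pd.read_csv("nfl_results.csv")  # hypothetical file

# Week 1 games since the 1970 merger, with the margin of victory.
week1 = games[(games["week"] == 1) & (games["season"] >= 1970)].copy()
week1["margin"] = (week1["home_score"] - week1["away_score"]).abs()

# The single biggest blowout.
print(week1.loc[week1["margin"].idxmax()])
```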

Even then it can still be wrong. I once asked for a 16-team fantasy draft, snake format, baseball Hall of Fame players only. It stopped after three rounds and drafted players who aren't in Cooperstown.
 
It won't do anything until you ask it a question. The absolute biggest thing that is lacking from AI is initiative.

Neither does your brain, until something pokes it: a sound, a sight, a stray thought. Stimuli trigger you the same way a prompt triggers AI. If you’d never watched Kirk’s clip, you wouldn’t have “initiated” that thought either.

What I think you’re really describing isn’t initiative, but autonomy. And if that’s the bar, then say it plainly: you want a machine that generates its own questions. But don’t confuse the absence of that feature with the absence of thinking in what it does when you do ask.

That data set is, however, inescapably, what humans have already thought and written down. It can't go beyond that.

But that’s the irony: humans don’t go beyond what’s come before either. Every “new” thought we produce is scaffolded on culture, memory, language - the whole archive of human experience. If “recycling the already thought” disqualifies AI from thinking, it disqualifies us too.
 

you want a machine that generates its own questions
I won't think a thing is thinking until it generates its own questions.
 
Every “new” thought we produce is scaffolded on culture, memory, language - the whole archive of human experience
I know, but we put the stuff together in fresh combinations, not just draw on tons of it to reproduce what tons of it would say.

Its goal is to say "what's the most likely thing that everyone who has ever discussed this before would say about it?"

A human's goal in thinking is "what's something that nobody who has ever thought about this idea has ever thought before?"
 
I know, but we put the stuff together in fresh combinations, not just draw on tons of it to reproduce what tons of it would say.

But “fresh combinations” is exactly what the model does - it doesn’t just parrot one source, it splices patterns from thousands into something no single author ever wrote. The fact that it feels familiar is the same reason your own thoughts feel intelligible. Novelty isn’t starting from zero; it’s recombination with a twist.
 
See the second half of my post. We may have cross-posted. The difference is put more sharply there.

To repeat and rephrase that point: No, AI doesn't want to put a "twist" on anything. It wants to give you the closest approximation of an average (if words were susceptible of being averaged) of what everyone who has previously discussed the topic has already said.

It's almost as far from thinking as it's possible to be.

It's like deliberately seeking out conventional wisdom.
 
It isn't averaging, like blending every opinion into beige soup. What's happening is more like trajectory-following: the model takes your prompt, locates it in a vast landscape of past patterns, and then extends the line forward in a way that fits.

Yes, that’s reactive - it doesn’t set its own trajectory - but it’s also not “just an average.” It can pull in obscure threads or unlikely analogies if the context steers it there. So the limitation for the time being isn’t in the mechanism, it’s in the absence of self-directed questioning.
 
What’s happening is more like trajectory-following: the model takes your prompt, locates it in a vast landscape of past patterns, and then extends the line forward in a way that fits.
In a way that "fits" the largest number of things that have already been said on a particular topic. It's like a "line of best fit" among those. It is effectively a beige soup. It doesn't move anything forward.

It can pull in obscure threads or unlikely analogies if the context steers it there.

Do you have available a case of this happening?
 
Do you have available a case of this happening?

Anecdotal? Sure. Happens all the time. I once asked an AI why people describe bureaucracy as “Kafkaesque.” Instead of just repeating “because of The Trial,” it drew a line to modern tech support call centers - endless loops of scripted replies, where you never reach a human. That leap from 20th century literature to a 21st century consumer experience wasn’t in its training set as a neat package. It stitched that analogy on the fly because the context pulled it there.
 
That leap from 20th century literature to a 21st century consumer experience wasn’t in its training set

Of course it was. It's 21c people calling call centers Kafkaesque; there must be millions of such posts on the web. It made no "leap" between those things that wasn't already made by millions of 21c writers calling their experience with call centers Kafkaesque. It was they, all of those online complainers, who made the connection between a 21c experience and a 20c piece of literature.

If AI called experiences with call centers anything other than Kafkaesque, I'd perk up a little. Because that's humans' go-to (literary) description of them.

But this is good. Hang on for a second.
 
Fair point - it’s drawing on a metaphor humans invented. But that doesn’t make its use trivial; most of what we call thinking is recycling cultural shorthand. You might say, “Yes, but humans sometimes mint new metaphors,” and that’s true - though rarely. Most people never coin a fresh turn of phrase in their lives, and yet we don’t deny them the dignity of thought. So if the bar is “must invent a brand-new metaphor,” then thinking itself becomes an elite rarity, not a fair standard.
 
You jumped in too soon. I, in my elitism, was going to forge a different literary reference for call centers, then teach you the piece of lit, so that you could see my connection as valid, and as saying something different about call centers than "Kafkaesque" does.

Here:

most of what we call thinking is recycling cultural shorthand
I'd want to shift to "much." (Maybe I can go with you on "most"; I'll have to give it some thought.)

I think the average Joe does tons of little authentic problem-solving every day that makes their thinking perfectly dignified.

Next I'll be on the lookout for such a case.
 
The problem with current AI: we've almost reached the maximum (physical) level with current electronic tech. With new neuromorphic computing tech, AI will move to a new level. For now it's just a big (and really good and beautiful, and full of opportunities) calculator.
 
There will always be retrogrades/conservatives protesting the use of new technology.
When the technology can easily make us obsolete and could potentially lead to a singularity (i.e., a very real existential threat), casting people who simply aren't completely oblivious to the danger as "retrograde" isn't the best parallel I can think of.
I can put the substance of my reading into fresh combinations. AI can just predict what word is likely to come next. That isn't thinking. It's patching together little previous instances of thought.
That's just absurdly off. AI can be asked a question and it can then:
1) reformulate it, while keeping the meaning,
2) actually give a relevant answer to it.
That's not some simplistic "let's use this word, then let's consult which word statistically comes next." Reformulating and answering require understanding a point, a concept, and that doesn't happen just because you write back word by word depending on how statistically likely words are to appear one after another.

This is exactly the kind of wishful thinking I was sighing about in my previous post: the dream that our own processing power has some otherworldly essence that makes it fundamentally different from something else which nevertheless happens to produce the same sorts of results. A bit like the scientists of the early twentieth century who insisted that animals didn't actually think or feel, but merely had automated reactions.
The problem with current AI: we've almost reached the maximum (physical) level with current electronic tech. With new neuromorphic computing tech, AI will move to a new level. For now it's just a big (and really good and beautiful, and full of opportunities) calculator.
Unlike our brains, which are totally not just big and very complex calculators processing the different stimuli our bodies provide?
 
1) reformulate it, while keeping the meaning,
It can do that. When I asked it my question about Kirk's bad-faith treatment of Ms Rachel's citation of "love your neighbor," it correctly indicated that I was interested in "rhetorical or ideological" reasons why a person might distort someone's quotation, even though my question had used neither of those words. Its adding those two things made my question more precise than my own formulation of it. (One possible real-life answer is that Kirk hates Ms Rachel because she rebuffed his romantic advances; but it knows to rule out that kind of answer because I generalized to "a right-wing commentator," and because of other aspects of my phrasing.)
2) actually give a relevant answer to it.
It can do that. The answer it gave to both my initial question and my follow-up were relevant. As I said, it came down harder on Kirk's bad-faith argumentation than I myself did!
That's not some simplistic "let's use this word, then let's consult which word statistically comes next."
It does both (1) and (2) precisely by asking, "in treatments of my topic within my dataset, which word is statistically most likely to follow the previous word?"

It depends what you mean by "simplistic." Being able to do that on a huge data set is not something a human would find easy to do at all; but the process (of statistically calculating probabilities) couldn't be more simple.
Reformulating and answering require understanding a point,
They do not, and AI does not "understand" the texts that it produces. Because it mimics the products of human thought so well, we attribute to it the only capacity from which such products had previously been known to emerge.

Our minds do work differently from generative AI.

Let's take my request for advice about what to do if feeling faint. There are millions of such treatments on the web. I type in "what should I do if feeling faint"

It will likely pick as its first word "If." The logic in medical situations is "if/then" logic: if [symptom X], then [treatment Y]. It will likely pick as its second word "you," then "are," "feeling," "faint." It has choices; it might instead go "If," "you," "feel," "faint." Where it comes down on those little choices will be a function of where the majority of treatments in its dataset come down.

If I had typed "what to do if feeling faint?" it might have opted for "one" or "someone" as its second word. (Because people are sometimes looking up medical advice not for themselves but for someone else.) Probably most pre-AI-composed treatments of the topic start "If someone" for this reason. So "someone" is likely to have the higher probability unless I use the word "I" in my question.
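To make that word-by-word selection concrete, here's a toy sketch in Python. The counts are invented for illustration (a real model scores candidates with a neural network conditioned on the whole context, not a lookup table), but the sampling step at the end is the same in spirit:

```python
# Toy sketch of the word-by-word choice described above.
# The counts are INVENTED for illustration; a real model conditions
# on the entire context, not on a static frequency table.
import random

# Hypothetical counts of the word that follows "If" in medical-advice
# texts, split by how the question was phrased.
next_word_counts = {
    "personal":   {"you": 900, "feeling": 70, "faintness": 30},   # "what should I do if..."
    "impersonal": {"someone": 700, "you": 250, "one": 50},        # "what to do if..."
}

def pick_next_word(style):
    # Sample in proportion to how often each candidate follows "If"
    # in the (invented) corpus for this phrasing style.
    words = list(next_word_counts[style])
    weights = list(next_word_counts[style].values())
    return random.choices(words, weights=weights, k=1)[0]

print("If", pick_next_word("personal"))    # usually: If you
print("If", pick_next_word("impersonal"))  # usually: If someone
```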

I don't know about "otherworldly," but the human mind does go about its tasks in a way that is "fundamentally different" from how generative AI does.
 
A mildly interesting link. :coffee:


I guess it was like this when graphing calculators were invented, just to a much bigger degree.

The thinking revolution! :crazyeye:
 
Do you really think people dependent on something will let it disappear?

Did video games go away when the bubble burst in the 1980s? Did crypto in 2016? Did housing stop being an investment vehicle, or stop being unaffordable, after 2008?

Did the internet go away in 2001?

This technology is here to stay. You’re stuck on “writing” and I’m trying to disabuse you of it but you can’t help but circle back. I really do mean wake up. It’s already huge, it’s already here, it’s permanent. No one is going back. Over our dead bodies we aren’t going back. 90% of AI companies will fail because it’s going to get too big for them to keep up. The bubble might pop, the tech stays.

Is it sad that we are feeble chair-dwellers because of technology? Very. You have the time freedom and the tools to be Olympian, like our ancestors were by default. The tech is here to stay. It's going in one direction only.

And it already encompasses so much more than

The economy is tens of trillions of dollars. Baristas are getting $45,000 a year instead of $800 for doing the same work because technology exists in society; the growth is in tech, and tech today is in AI. Little metal shapes you take for granted, little changes in your browser's ability to render CSS you take for granted, a lot of diesel.

People have no idea how big this stuff already is. No idea how deep the infrastructure goes and how quickly it's built. You don't see it outside: the barista has you tapping a little Square machine and gets a bigger tip thanks to the preset options, but the barista's job is otherwise unchanged. Espresso is espresso. Spend 30 minutes in a coffee shop and sure, people are on computers, but the more the world changes the more it stays the same, right?

But under the hood it's all different, and we're so much richer than 30 years ago it's not even funny.

The use cases for LLMs are so great it's insane. Pure unintegrated genAI is an AI dead end, and we reached its pure-form height in 2022 with GPT-4. But its role inside other applications is incredible, and anyone using it to good effect isn't going to return to the bad old days.

Bubbles don’t end a technology, they precede it. Technology as an economic institution has increasing returns to scale, so this one should be a really big bubble leading to an even bigger maturity leading to the next tech.

But say it popped tomorrow. Big austerity, high interest rates, taxes on investors, the works. OpenAI dead. Anthropic dead. All dead.

Well, get wrecked, naysayers: we've got everything we need open source. Every company is going to find a way; every individual like me is going to harness extra computers and build agents with free tools.

I can run DeepSeek on my 2016 phone sitting around if I have to. I'm not going back. Corporate America is not going back. Soon government won't be going back either.

Luddites, fuddy-duddies, anti-tech hipsters, angry old geezers, and too-cool-for-school older folks who aged out of tech excitement can mix their points up as convenient (not good enough / it's a bubble / people are going to atrophy / it uses too much energy / slop), but at the end of the day it's just getting started, and it has already been established as revolutionary.

Thanks for reporting from the front lines, Hygro.
:salute:

Obviously this is a big deal, judging by all the smart people throwing around thousands of billions of dollars of real money.
But for us folks watching the news, we usually have no idea what's going on.
 
I liked this quote:

The AI says it's "thinking" (even though everyone knows that an AI doesn't actually think); they're blatantly lying to our faces.
 
It does both (1) and (2) precisely by asking, "in treatments of my topic within my dataset, which word is statistically most likely to follow the previous word?"
Well, I'd say that if you can infer meaning and offer reasoning by "statistically choosing which word follows another," then it might simply be that "intelligence" works like that. After all, WE infer meaning from how people put words one after another, and we know what words mean because they are used repeatedly in certain contexts.
They do not, and AI does not "understand" the texts that it produces. Because it mimics the products of human thought so well, we attribute to it the only capacity from which such products had previously been known to emerge.
Let me quote myself:
A bit like the scientists of the early twentieth century who insisted that animals didn't actually think or feel, but merely had automated reactions.

What makes you so sure that AI doesn't actually "understand"?
You claim it doesn't, that it only "mimics," and I ask: at which point does the "mimicry" become the real deal? Conversely, before which point is it only mimicry and not the real deal?
Our minds do work differently from generative AI.
How do you know? Have you been in the "mind" of an AI to see the difference?
I mean, nobody knows how our own minds actually work. The very people who program AI admit that they don't really know how it comes to the conclusions it does either. As I pointed out in my previous posts, our brains are also, fundamentally, a big computer that receives stimuli provided by sensors, stores data in its memory, and processes all of that in its own self-contained bubble.

And anyway, let's say you're right and our minds do work fundamentally differently. How does that alone prove that AI isn't able to "understand"? That it's different doesn't mean it's less capable (if anything, it's becoming more capable in a fast-increasing array of subjects).
 