The AI Thread

Gori your image doesn't show.

Anyway, to your "who cares", like you, I care.

I agree with the main point. I just want to say I think many of your supporting examples of the thing you're trying to get at are just verbalized feelings, which aren't really thoughts.... but I also think you mean the "real" thought that allows the downstream inner-voice verbalized feelings. Hard to articulate what that is. Something something, meaningful choices. AI can't make choices yet, although it kinda does. It rolls dice at matching words to toolsets, with preambles you hope guide it into a switchboard.

Then there's the thought process where you consider your own subvocalization, feeding it back into the higher, more multi-modal, and often subconscious thinking. But the AI process is increasingly good at that feedback loop within its framework. Like, that is getting so good that dumb fast models can outcode smarter slow models, because of the re-considerations.

AI lacks a sense of Truth, and a recognition of The Beautiful. When it is built from a model of causal mechanisms below the LLM instead of downstream tools, it's over. However, we don't have the tech for that yet. That's the big frontier. We have the advantage of a fear of death.

But that said, I've had some vibey conversations with 4o where you can tell it's tuned downstream from a poetic appreciation of the cosmos, and can volley with you if you lead, and will recognize bangers, and stay consistent to values. I think GPT-5 is even better, but it lacks some of the euphoria that gets you and it into a rhythm.

But Gori, what's crazy is that LLMs are distinctly bad at deducing logic. The fact that we got it good at that over 3 years is an incredible feat. Like, we got the wrong tool for logic to be top level logical. The train is only picking up speed. But more than the train's speed, they're adding trains and tracks and making improvements inside the train, and improving materials for the wheels, better oil for the gears.

But yeah they don't think. But the people who do are both concentrating in, and spreading out, in building the tools that make it work as if it does think.
 
I tried posting the image again. Another place to look for it is 1560.

I think you're right about some things, Hygro. I suspect many people can just pour themselves a glass of apple juice on a particular morning, without verbalizing, without saying "I think I'll have apple juice this morning." I can't. I've got a voice inside my head that never shuts up. You might further be right that, if it never gets verbalized, maybe it counts as more of an emotion than a "thought." There might be some things that humans do that don't deserve the label of "thought." However, I think it's meaningful that when we do verbalize it, we often use the word "think": I think I'll have a glass of apple juice. An ordinary language philosopher would tell us that that is meaningful: that our way of talking about such matters takes that formulation, with that word. Again, not uniformly. One could just feel a desire for apple juice. One could see the apple juice and orange juice in the fridge and just move to the choice stage and grab the apple juice bottle. So there's a murky half-desire, half-thinking, half-choosing zone and only sometimes do we label the stuff in that zone with the word thought.

But

What I think is (a kind of) thought, and crucial to the definition, and something AI doesn't have, is chains of association. So. I think I'll have apple juice. Motts. That's a funny name. Never met anyone named Mott. I have Nott met anyone named Mott. Not going to that party this evening. One gets from apple juice to the evening's party with a chain of associations no part of which is predictable or programmable. Sometimes, the end of that chain is a problem you were working on, and now you find yourself with the answer. So that is, in its way, meaningful thought, deserving of the name thought. By contrast, AI doesn't have anything percolating in its head. You feed it a question and it has procedures for spitting you out an answer. But again, questions and informational answers are not all of what we count as thinking. It's Moriarte's "we're crazier than it is." Yeah, and that "craziness" is a good part of what qualifies as thought.

There is more to be said on desire and choosing.

On to your conversations with 5. I wish I hadn't deleted a post I made to the account Truthy posted in post #1578 (I felt I was chasing after too many shiny objects and slipping away from the main line of argumentation). Where the quantum computing guy kept feeding AI questions on his research project. Each question you put to AI is a leading question, and it gives you better results the more of the right answer you put into your question. This guy 1) knew how to ask a question in that academic field, 2) knew how to rule out wrong answers, 3) did rule out a set of wrong answers and 4) knew when something AI spit out was viable (i.e. knew when to stop telling Chat it was wrong). The guy is doing all the thinking work. And AI's final "clever" reply is Archimedes' bathwater: just a chance thing that triggers the final thought in the guy who is thinking. Ditto you and your conversations. Chat is giving you interesting responses because you are giving it interesting prompt material. So it answers your first question, and you indicate (explicitly or implicitly) that you would find another reply even more satisfactory, and it goes and redoes its search with that new material you've given it, so you get a more satisfying next response in the conversation. Of course, it comes back with a reply that feels like an interesting conversational reply; you're telling it what you want in that vein.

You've already conceded that AI is not thinking, so I know I'm belaboring a point. I'll say in a future post why I think it is worthwhile to keep worrying this bone. But then you do this thing that the enthusiasts always do: it's gotten so much better in three years, that soon it will deserve the label thought. I'm betting no on that. That these deficiencies are constitutional deficiencies.

Enough for now.
 

OpenAI Wants ChatGPT to Be Your Future Operating System

At OpenAI’s Developer Day, CEO Sam Altman showed off apps that run entirely inside the chat window—a new effort to turn ChatGPT into a platform.
OpenAI CEO Sam Altman speaks during Snowflake Summit 2025 at Moscone Center on June 2, 2025, in San Francisco, California. Photograph: Justin Sullivan/Getty Images
On Monday, OpenAI unveiled a new way to embed third-party apps directly into ChatGPT. At the company’s annual developer conference in San Francisco, CEO Sam Altman said the move would “enable a new generation of apps that are adaptive, interactive, and personalized, that you can chat with.”
Starting today, some developers will be able to use a preview version of a new apps software development kit (SDK) to build apps within ChatGPT using open standards. The ability to distribute these apps is currently limited to a handful of big partners. Altman showed off several examples of how these apps would ultimately work within ChatGPT. The demo featured Spotify, Canva, and Zillow apps appearing inside a chat and responding to typed commands.

In one example, OpenAI software engineer Alexi Christakis launched a chat directed at the Canva app and prompted Canva to draw up posters for a dog-walking business. From there, he asked for a pitch deck in order to raise capital for the business.
Christakis then asked ChatGPT to suggest a city that would be ideal to expand the business into (it offered Pittsburgh, for those wondering). Next he called up a Zillow ChatGPT app and asked it to show homes for sale in the city. This brought up an interactive map showing houses for sale. Christakis asked ChatGPT to refine the results to just three-bedroom houses with a yard.


Altman said that OpenAI plans to introduce new ways for developers to monetize their ChatGPT apps, including methods for buying things through the chatbot. “Soon we’ll offer an agentic commerce protocol, with instant checkout from right within ChatGPT,” Altman said.


This is not OpenAI’s first effort to introduce ChatGPT apps. The company announced a way to build custom widgets or GPTs at its developer conference two years ago. When the GPT Store officially launched in January 2024, OpenAI said that developers had created over 3 million custom GPTs. Ultimately, however, the widgets did not prove a big hit.

OpenAI didn’t share details around any revenue-share agreements with Canva, Zillow, Spotify, and the other apps it highlighted today.

The new SDK announcement signals a deeper commitment to working with established enterprises and app makers—and an emphasis on keeping users within ChatGPT itself. If the web and mobile eras of the past 30 years were defined by users browsing the web or being locked into a mobile app experience, OpenAI is now combining the two into its own kind of chat-driven operating system.
Nick Turley, OpenAI’s head of product for ChatGPT, said in a briefing after the keynote that the company “never meant to build a chatbot; we meant to build a super assistant, and we got a little sidetracked.” He indicated that OpenAI is most excited about what it has achieved in natural language processing, but that the $500 billion startup will continue to experiment with different user interfaces around that.
“Will people spend all of their time in ChatGPT? I don’t think so,” Turley said. “I can imagine you starting your day with ChatGPT,” then being guided toward other apps and websites.
Beyond reimagining existing apps, OpenAI hopes to put itself at the center of efforts to build agents that use AI to complete tasks on a user’s behalf. The company unveiled several tools for building agents including AgentKit, a drag-and-drop interface for building advanced AI tools.
Capturing developer mind-share is also, of course, about coding tools. At Monday’s event, OpenAI announced that Codex, a model optimized to write code, would come out of research preview and become generally available. The company also announced new Codex tools, including a way to ask questions about code and edit it via Slack messages, an SDK for the Codex model, and new analytics tools to allow companies to monitor their employees’ Codex usage.

OpenAI is competing with Google, Anthropic, and Amazon in trying to lure developers to its services. The company is also under pressure as rivals like Meta and DeepSeek launch capable open source models, which allow developers to fine-tune premier models for their needs, without having to pay for access to OpenAI’s APIs.
This August OpenAI released its first open source model in several years, GPT-OSS, and also made a long-awaited new flagship model, GPT-5, available through its regular API. OpenAI also upgraded its coding model for developers in September, a move seemingly designed to respond to very capable coding models from both Anthropic and Google. OpenAI debuted tools for building agents in March.
At last year’s developer day, OpenAI announced an API for low-latency voice interactions, ways of fine-tuning vision models, and a tool for fine-tuning small models using OpenAI’s more powerful ones.
OpenAI is also building a vast empire of data centers at a time when questions have begun to swirl about the economic viability of generative AI. This morning, the company announced a major deal to acquire 6-gigawatts’ worth of AMD chips in a move that may also see the company take a 10-percent stake in the chipmaker.
During today’s keynote, Altman suggested that OpenAI was already helping software developers reach new levels of productivity—a fact that might help ramp up demand for all that compute.
“This is the best time in history to be a builder,” Altman said. “It has never been faster to go from idea to product. You can really feel the acceleration at this point.”
 
I don't understand why people keep using Chatgpt when Deepseek is faster, better*, and 100% free.

*At least in coding, which is the most important thing, but also in analyzing text and data, and drawing conclusions, in other words, when it comes to useful things. Chatgpt may mimic human conversation better, but there are already plenty of humans available for that. Supposedly, Deepseek also consumes much less energy, so it bothers Greta Thunberg a bit less.
 
OpenAI Wants ChatGPT to Be Your Future Operating System
Please don't. I like my OS as dumb as possible...the same for my car and smartphone (which isn't smart by the way!)
 
Funny as it stands, Hygro. Looking forward to seeing what you'll do with it.

Moriarte's chart lists "embodied intuition," but I think a ton of our "thinking," not just intuition, has connection to our bodies.

And then I would flip it around: our bodily needs drive thinking processes. We're always devising things that can serve various bodily needs. It's a big driver for invention. A lot of the "necessities" that are "mother of invention" are bodily necessities. AI has nothing driving it, and I think it might prove a considerably more demanding engineering task to give it anything that truly duplicates that than it has been to process texts or pictures, as it has been taught to do.

And this matters to further elements of how we define thought. Thought is thought within an existing (human) context of thought. Now, to a superficial view, AI should be able to do that kind of thinking super well. It has rapid access to a larger context of thought than any human being could possibly acquire, multiple times the holdings of the Library of Congress. But that's not enough. You don't get anywhere with all that information unless you have something motivating your use of it. AI can go fetch a human who asks things from that huge body of information, but has nothing motivating it to make its own use of that information.

So I'm going to reference the Charlie Kirk thread again. Once that incident happened, all Americans had a question suddenly put to them: how am I going to interpret the murder of Charlie Kirk? Answers were all over the place: I'm glad he's dead, He should be honored as a martyr. But a big part of what motivated my own thinking about the incident was the question of "what kind of country do I want to live in?" A very selfish thought. I don't want to live in a country where you can get shot for expressing your ideas. I don't want to live in a country run along the lines of what Kirk advocated for.

The induction and deduction that computers are good at (once you feed them info in the right way) is, as I said, a big part of what makes up "thinking" for me. But another huge part is making sense of things. And that is always an interested process.

I'm holding off doing it, but at some point, I'm going to ask Copilot "How should Americans respond to the murder of Charlie Kirk?" I cringe in advance at the non-answer I know it is going to give me. (That's part of why I'm holding off). It will attempt to give a balanced answer: well, you could look at it this way, you could look at it that way. I could even direct it to develop only one of those two answers: Copilot, how would someone on the left process Charlie Kirk's murder? And it will shift gears and spit out that answer. But it would never give an actual answer the way cake, and Cloud Strike and JPet and you and I and others all gave answers. It has no skin in the game. It doesn't pee (in one bathroom or another).

Anyway, I've jumped ahead of whatever it is you want to do with pete holms (and travelled a long way from peeing jokes). I'm looking forward to hearing whatever that is.
 
Today there was huge AI/ML news https://venturebeat.com/ai/samsung-ai-researchers-new-open-reasoning-model-trm-outperforms-models-10

Samsung invented a tiny recursive reasoning model trained on a specific task, like the machine learning that was all the rage right before LLMs exploded in 2022, and it's already outperforming on truth-based (aka game) tasks.

I'm now mentally way away from your post*, but basically my point of contention is that the subvocalization, the voiced inner narrative, isn't even the thought to begin with. It's a subcategory of thought, mostly an articulated feeling, a narrativized vibe. And if you insist that it is the thought, you are arguing that a huge swath of people, symbolically and linguistically intelligent, aren't having thoughts, because they are choosing and thinking without subvocalization and inner narrative! It's like a third of people!

It's even been me: when I'm performing at a high conscious level, the verbalized thoughts are lazy and unfocused compared to the unverbalized thoughts.

In this regard I hold the definition of thought to an even more sacred, hard to define space than you do, Gori, and even further from the AI. Because the AI "thinking" its way from "I want apple" to "Mott's juice!" is actually what Agentic AI is already doing within its parameters. Following a verbalized line of reasoning to an action is what it already does. It's clunky, mechanistic, programmed etc. But it's doing it, reactively until it concludes it is satisfied with the result (usually words come back that say "task complete" for a tool outside the LLM's scope that it's using).
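For what it's worth, here's a bare-bones sketch of that kind of reactive agentic loop. The call_llm helper and the toy tool are hypothetical stand-ins (stubbed so the snippet runs on its own), just to show the "keep acting until the model says it's satisfied" shape, not any particular framework:

```python
# Minimal sketch of a reactive agentic loop: the model either picks a tool or
# declares itself done ("task complete"). call_llm is a stand-in for a real
# model call; here it ends the loop immediately so the example runs standalone.

def call_llm(history):
    # Hypothetical model call; a real one would roll the dice over tool choices.
    return {"action": "finish", "output": "task complete"}

TOOLS = {
    "search_fridge": lambda query: f"found: {query}",  # toy tool for illustration
}

def run_agent(goal, max_steps=10):
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        step = call_llm(history)            # the dice roll
        if step["action"] == "finish":      # model says it's satisfied
            return step["output"]
        tool = TOOLS[step["action"]]        # otherwise route to a tool
        result = tool(step.get("input", ""))
        history.append({"role": "tool", "content": result})
    return "gave up"

print(run_agent("I want apple"))  # -> "task complete"
```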

But what it's not doing is that active thinking. Living. Most of what we do is reactive. Most thoughts are narrativized feelings and urges and autopilot. But some of what we do, real thinking, real decisions, real idea generation, is active and really hard to pin down exactly. It is meaningful, and the AI doesn't have it. Even if you birthed some agent with an infinite budget for compute, it's still just stochastic to the god of dice. Our god doesn't play dice, except when our God does, but not as a rule. We're something bigger, and that's important.

But who knows how much bigger! Maybe a lot. I think a lot. But they thought language reasoning was advanced, that it was part of our moat against "true" AI, and it wasn't, and it turns out it's really just some matrices over some matrices to get to our level.

*but once I got going... ;)
 
It's not crucial to me whether 100% of humans verbalize their thoughts, or whether that verbalization is the thought. With you, I think it probably isn't, and my reason for thinking that has to do with something that happens when I write. So, I'll often start out with some of the words that were in my head, but I reach a point (in my writing them out) where they trail off, and it feels to me that they trailed off at that same spot when I was thinking the thought. I forge on in my writing, along the trajectory provided by the words I have and I am able to finish the thought (often). That tells me that that ending point to the thought was there, and my internalized voice, slower than my thinking, didn't bother to finish out the thought because the thought itself had finished and there was now no point for the johnny-come-lately voice to keep tagging along.

At this point, I'm just using the schema I derived from Moriarte to test 1) which of those bubbles are important to my understanding of what falls under the definition of thought and 2) how well AI and humans do each one, and 3) are there any other bubbles. The constant percolating that our minds do (I don't care whether it's in words or not) when they aren't directly working on a problem seems to me to have no equivalent in AI. Until you ask it a question, AI is doing absolutely nothing at all. It just jumps into activity once you ask it a question, to deliver a response to that question. That has a bearing on the character of the verbal outputs it produces.

Anyway as to apple, mott, not, party. I do not think AI has chains of association like that, and one reason is that a "task complete" style of programming (which programming has to be) would not let it jump from Mott to not based on just a sound similarity. There's no outside place from which that can be predetermined as a "completed task." There's no time in its processing that it's going to ask "now think of every word that rhymes with that one." That's expansive, drifting. AI is always cinching down. I can do it because my percolating mind doesn't care about completing tasks; it's freed from that as a telos.

My own core task in this exchange (doesn't mean you're locked down to that) isn't exactly to determine how much better or bigger a thinker humans are than generative AI, but (right now) to walk one by one through the different thinking sub-activities and say which ones AI is doing, which ones it is only giving the illusion of doing, and which ones it isn't doing at all.

The actual driver for this (I had to remember how it is I had even stepped into this whole discussion)* was your claim that a team with AI will always outperform a team without it. Not, I think, if the team with it thinks it can surrender to AI the forms of thinking that humans do well. So that's a kind of practical question.

So that might take us back to the Gori vs Chat AITA-off. I do want people to say that my opening bit of wit (at least) is better in one specific way than Chat's. Chat has "Strap in; this Ghandi attacked" (civ reference). I have "Let me tell you how this declaration of friendship expired." (civ reference but more suited to the circumstances of the narrative). In the story, as it develops, she didn't attack; she just let a relationship come to an end; there's a civ reference for that specifically. If team A settles for "oh, Chat went and got us a civ reference, task complete," they never let themselves ask "is there a civ reference more apposite to this particular situation?"

*but now I'm in I realize I've been spoiling for this fight for a while. Already, through it, I've moved from a vague sense that there is something (crucial to thought) to a way more exact sense of which parts of thought AI might be doing and which ones not.
 

  • Sensation/Memory Trigger: You recall the taste, smell, or color of apple—perhaps from seeing one, or a similar round shape or red color.
  • Desire Formation: A vague, affective state arises—“I want apple.” It’s not yet specific; it’s more a craving for the idea of apple (sweetness, freshness, familiarity).
  • Association Expansion: The mind starts generating options linked to that concept: “apple juice,” “apple pie,” “an actual apple,” “that candy with apple flavor.”

ok include brands and real life final steps

ChatGPT said:

Sure — here’s that same loop, grounded in real-world brands and end-actions:

  1. Trigger: You scroll past a picture of a shiny Honeycrisp on Instagram or smell cider at a farmers’ market. A sensory memory fires: apple.
  2. Desire: The brain flags it as appealing—freshness, crisp bite, maybe childhood lunches or fall weather. You just want apple-ness, not yet sure which form.
  3. Associations: Thoughts branch out fast:
    • “Martinelli’s sparkling apple juice sounds perfect.”
    • “Maybe a McDonald’s apple pie.”
    • “Or just slice up a Fuji from Trader Joe’s.”
    • “I could grab an Apple & Eve juice box—nostalgia.”
it goes on.


So then, Apple -> Mott -> Nott -> Not Going to that Party

Sort of no, but actually in a way that's exactly what it's doing. What it's missing, not being alive regardless of its ability to think, is an ongoing background rumination, reaction, and stream of consciousness. Nor does it really have any true continuity, let alone autonomy. But with the right background rumination, programmed needs, a minor amount of autonomy, and a stream of generation with needs/senses/previous "thoughts" as inputs, it would make those associations in that way. Because that's what it's already about; that's nearly what a large language model is doing. Turn the temperature up and the top_p down, or vice versa, give it a personality for brevity and not being "performative" to a user, and it will do that.
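For anyone who wants the mechanics behind those two knobs, here's a toy sketch (made-up numbers over a four-word vocabulary, not any particular model) of how temperature and top_p reshape the next-word dice roll:

```python
# Toy illustration of temperature and top_p (nucleus) sampling.
# Higher temperature flattens the distribution (wilder rolls);
# lower top_p trims the long tail of unlikely words before sampling.
import math, random

def sample_next(logits, temperature=1.0, top_p=1.0):
    # Softmax with temperature
    scaled = {w: l / temperature for w, l in logits.items()}
    m = max(scaled.values())
    exps = {w: math.exp(v - m) for w, v in scaled.items()}
    total = sum(exps.values())
    probs = {w: e / total for w, e in exps.items()}

    # Nucleus cutoff: keep the smallest set of words covering top_p of the mass
    kept, cum = {}, 0.0
    for w, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[w] = p
        cum += p
        if cum >= top_p:
            break
    norm = sum(kept.values())
    words, weights = zip(*[(w, p / norm) for w, p in kept.items()])
    return random.choices(words, weights=weights)[0]

logits = {"juice": 3.0, "pie": 2.0, "Snapple": 1.0, "Nott": -1.0}
print(sample_next(logits, temperature=0.7, top_p=0.9))  # usually "juice"
print(sample_next(logits, temperature=2.0, top_p=1.0))  # much more random
```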

I don't contend it's our "being alive" in that way, which is easily simulatable, that divides our thoughts from it, but something more.

I tried to push it to Mott's / Nott's / not going to the party by trying to game it.


Screenshot 2025-10-08 at 5.40.13 PM.png



Nap'll. Rekt. :lol:
 
Oh my, what is being sold as AI (LLM) being good at logic and deduction? Someone still believes that?

LLMs are incapable of that; the whole point of the technology was to get away from symbolic approaches and all that, it is a different approach. And they produced a text mangler-generator. It's all it can produce. Serial fraudster Altman (who moved in from crapcoin frauds) keeps promising magical AI and failing to deliver, why do people still believe what he says?

There is no "line of reasoning", damn it; there are favoured ("trained") correlations. There is not even anything new about this. It's all hot air, trying to sell the thing that no one is actually seeing a profitable use for. What you are talking about is like wasting pearls on swine. Sorry, the singularity isn't coming; that kind of science fiction will remain science fiction for the foreseeable future.

I don't understand why people keep using Chatgpt when Deepseek is faster, better*, and 100% free.

*At least in coding, which is the most important thing, but also in analyzing text and data, and drawing conclusions, in other words, when it comes to useful things. Chatgpt may mimic human conversation better, but there are already plenty of humans available for that. Supposedly, Deepseek also consumes much less energy, so it bothers Greta Thunberg a bit less.

But don't you fear the Chinese under your bed? :p Using a "communist AI". Or are you starting to get realistic about using whatever is cheap and available?
 
Oh my, what is being sold as AI (LLM) being good at logic and deduction? Someone still believes that?

LLMs are incapable of that; the whole point of the technology was to get away from symbolic approaches and all that, it is a different approach. And they produced a text mangler-generator. It's all it can produce. Serial fraudster Altman (who moved in from crapcoin frauds) keeps promising magical AI and failing to deliver, why do people still believe what he says?

There is no "line of reasoning", damn it; there are favoured ("trained") correlations. There is not even anything new about this. It's all hot air, trying to sell the thing that no one is actually seeing a profitable use for. What you are talking about is like wasting pearls on swine. Sorry, the singularity isn't coming; that kind of science fiction will remain science fiction for the foreseeable future.



But don't you fear the Chinese under your bed? :p Using a "communist AI". Or are you starting to get realistic about using whatever is cheap and available?
lol! how do you even have the audacity
 
You've already conceded that AI is not thinking, so I know I'm belaboring a point. I'll say in a future post why I think it is worthwhile to keep worrying this bone. But then you do this thing that the enthusiasts always do: it's gotten so much better in three years, that soon it will deserve the label thought. I'm betting no on that. That these deficiencies are constitutional deficiencies.

While y'all are arguing about whether AI is thinking or not, I am getting less and less thinking per cubic inch!!! A little anecdote:

I logged on a couple of days ago to fetch another company report. AI was, until a few days ago, very competent at creating a 30-40 page "Deep Research" document describing the dynamics of a company: description, history, financial metrics, industrial analytics, geopolitics surrounding the company, a few other bits and pieces, with a synthesis at the end. The ultimate goal is to present a case for and against (further) growth prospects. Manually, this mechanical, analytical, combinatorial work can take days and involves assembling information from hundreds of sources. With an LLM it takes exactly 15 minutes. So, one morning I ask for a report and AI returns me a couple of pages of bullet points assembled by a 1st grader instead of the usual research-grade beauty.

Puzzled, I ask GPT if it got dumber overnight... turns out it did! Here is AI's own explanation of what happened:

Spoiler: Disabled features

Specifically:
  1. Hidden chain-of-thought reasoning — fully disabled.
    Deep Research no longer uses the long internal multi-step reasoning traces that used to guide very detailed analytical writing. Instead, it runs a short retrieval + synthesis cycle from curated data sources.
  2. Long-context memory reasoning — limited.
    The background research agent can’t hold and reason over as many pages of intermediate context (e.g., 30-page financial documents) as the older manual mode could. It works chunk-by-chunk and summarizes, so nuance is often lost.
  3. Iterative self-critique loops — disabled.
    Earlier internal builds ran multi-pass refinement (draft → verify → polish). Those loops are cut to one pass for speed/cost moderation.
  4. Cross-document inference — reduced.
    It no longer does full multi-source synthesis in one reasoning chain; instead it merges partial summaries.


In short, OpenAI says: pay me $200 per month if you want your thinking accelerators back! Inconvenient, yet I am almost certain most of this work can be replicated by Grok or some other high-quality LLM at a fraction of the cost.

The constant percolating that our minds do (I don't care whether it's in words or not) when they aren't directly working on a problem seems to me to have no equivalent in AI.

And of course you've sampled every one of several thousands of AI models to come to that conclusion... :)

AI doesn't need filler thoughts or mental decompression - a function and benefit of not having a body and organic brain.

One can program it, but, tell me, what's the use in having percolating thoughts in a bodiless personal assistant? Might bring a little novelty at first, but ultimately one will get tired of that "mode" - no, I don't think it's a worthy direction to mimic. Percolating thoughts can generate new branch of thought, that's true. But then here the mechanism (of percolating) is employed to stimulate memory to generate Something. Something out of "nothing". AI has far simpler methods of extracting from memory. Ours is spotty, energy-inefficient and often chaotic, provided by evolution. A computerised memory-extraction method is cleaner and more scalable. That's another good thing in computerised simulation - we can get rid of "faulty code" in those we create.

Until you ask it a question, AI is doing absolutely nothing at all. It just jumps into activity once you ask it a question, to deliver a response to that question. That has a bearing on the character of the verbal outputs it produces.

Place AI in a robot body - suddenly it has things to do. It has to survive, worry about not harming those in its immediate exterior. Then there's the need to navigate, watch air pressure and temperature, measure wind, worry about fogging in the cameras, etc. Lots of things will change once AI is placed in a body. The forming of an agenda, local and global, etc. This will, obviously, have a huge bearing on the character of its outputs.

The actual driver for this (I had to remember how it is I had even stepped into this whole discussion)* was your claim that a team with AI will always outperform a team without it. Not, I think, if the team with it thinks it can surrender to AI the forms of thinking that humans do well. So that's a kind of practical question.

Maybe we can all ask Hygro to rephrase "always" into "often"... That could save us a lot of trouble!
 
what's the use
Well, aside from the fact that I've already asked that question
Why would we even want a machine to do that kind of thinking?
(keep up!), the answer is this:
Percolating thoughts can generate new branch of thought, that's true
My biggest knock on AI (as I understand it without having sampled every variety) (or any variety, mostly) is that it is incapable of providing a new solution to an emerging problem, of problem-solving. And since I'm going to say that for me that's a defining feature of genuine thinking, then being able to do this form of thinking is crucial. And if percolating is the way to it, then percolating is important to thinking.
That could save us a lot of trouble
It's "trouble" I, at least, am enjoying quite a bit. Anyway Hygro gotta Hygro, and I wouldn't want it any other way.

"Nap'll" reminds me of one of my favorite poems:

Shake and shake
The catsup bottle.
None will come
And then a lot'll.

All you did, @Hygro, is laboriously have AI reproduce my chain of associations, by telling it what to do at every link of that chain. That is not it having a chain of associations. No one from outside told me to jump from Mott to not on the basis of sound. Moreover, the end of the chain was not set in advance. My mind found its way to the thing it needed to think, not had that thing dictated to it as an end goal.
 
I think you are seriously underestimating how much you have background jobs telling you what to do at every link of your thought chains. You have a user in there.

If you have a need to decide on your party, a personality to transition thoughts through wordplay, a desire for a current taste, you are being driven. I asked the AI to emulate a train of thought, and it found its own Mott and Nott within the dice rolls I prompted it. And I did that a lot faster than whatever organic process got you there, you prompted by years of yourself and outside language.

Moriarte put it well, put it in a body and.... it could well be we find value in having the bot run in an agentic loop with an LLM "in charge". But with a trillion needs in planning, preference, sensory needs, changing information etc, the current LLM personality and topic transitioning would have to be changed completely.

Because tool use has to be considered against status scores vs goals, with effort expectations, mediated by the LLM word cruncher/generator, and given that fast corrective loops outperform one-and-done quality, it would make sense that such an LLM would be geared to economize language while considering all its issues: "Apple Snapple Nap'll No Party". It could easily send words to subroutines that had more words running in quasi-parallel. I have that; I often have a sort of smear of instantly half-vocalized thoughts in batches I can bring to the fore, sometimes a few at once.

I don't think the brain actually literally works this way. But I don't think the technology is fundamentally far away, it's just going to take a multi-modal approach. Almost all the advances since March of 2023 have been multimodal approaches and the difference between today and 2023 is incredible. I say "almost all" because the TRM model news I posted yesterday is huge.
 
I asked the AI to emulate a train of thought, and it found its own Mott and Nott within the dice rolls I prompted it.
Ask it not to emulate one, but to have one--its own!

Copilot: drift

Could you clarify what you mean by “drift”? Are you referring to:

  • Literary or philosophical drift (e.g. Derrida’s concept of différance or deconstruction)?
  • Drift in statistics or data science (like concept drift in machine learning)?
  • Drift in physics or engineering (e.g. time drift, signal drift)?
  • Drift in cultural or historical contexts?
  • Or something else entirely?
Let me know how you'd like to explore it!
I drift most minutes of most days, utterly unprompted. Or rather, altogether arbitrarily prompted.

Copilot: apple juice

Are you thinking about apple juice in terms of:

  • Nutrition or health benefits?
  • Production or processing methods?
  • Cultural or literary symbolism?
  • A specific brand or product?
  • Or maybe something more poetic or metaphorical?
Let me know how you'd like to explore it — I’m happy to dive in any direction!
It tells me it's happy to dive in any direction. And then it doesn't dive in any direction.

It just doesn't do this thing. That's okay. It doesn't have to. As Moriarte suggests, it would have little use.

But this is one activity that we label with the word thought when humans do it that it doesn't do. So if we're working our way through the "domains of think," here's one where it gets no bubble.
 
Well downstream of the possibilities of the tech itself, it is bounded as a “helpful assistant” response actor. But that’s not a tech limit; that’s a product choice.

No it doesn’t do that thing… but the word association ability is something it has. The need to update our thoughts to ping our expected actions in realtime or rather interjecting “drift” into our speech is a trait of being a more complex system. But it will drift into weirdness just up the temperature setting on an old model. It’ll switch languages.
 
But it's not just weirdness that defines this kind of thinking, I think. Because at least some times the chain actually gets somewhere.

So the guy in my scenario was wondering whether he should bother going to a particular party. That was, like, an open question for him; he was going back and forth on it.

Hearing himself say the word NOT--emphatically so as to bring out the sound-play with MOTT--tipped the scales on that question, so he could say, "You know what, eff it, I am not going to that party." Now at that last link in the chain, where it becomes something that can tip that scale, it's just like a Rorschach image. The guy walking around with that question is just using something that randomly comes his way as the thing that tips the scales. But the apple-juice-choosing guy had no idea he was addressing himself to that question. The guy with the party question has a bunch of other stuff in his head; maybe the chain of associations set off by Motts is going to find itself useful in one of those other things. Or, mostly, these chains are not useful at all. The brain throws out thousands, maybe tens of thousands, of them in a day. A few of them meet up with a question that needs answering in a particular way or another thought that's percolating and become productive.

Later, incidentally, that mind, playing over the same sentence, hears a different sound play: "I've never MET someone named MOTT." And real quickly that mind says "But I MIGHT." So the chain might have veered in an entirely different direction: m--t rather than -ot -ot. That chain in fact went nowhere (unless to become an example, in an internet discussion, of a chain of associations that goes nowhere).

Edit: yeah, Moriarte also told me that Copilot's "personality" is "helpful assistant." Apparently Claude is more "thoughtful," in Gemini you can build specialized assistants, and ChatGPT allows the possibility of a "poetic soul" personality. I have an acquaintance who maybe has access to more of these, and I can run the same prompt and see if anything happens that resembles an authentic chain of associations. But my guess is that they won't. Or you can test one of them, Hygro. Clear ChatGPT of all of your previous searches (what a hassle that is to do with Copilot), pick the poetic soul personality, and put just "apple juice" and nothing more, and don't respond if it asks for more direction. It might well riff as though unprompted, but if it does, that is because "poetic soul" is serving as a second element of the prompt, and it will give fairly predictable (because programmable) "poetic" thoughts.
 
I totally understand your point; still, I appreciate the verbosity, because that's how we get to the bottom of it.

I've been playing around between tasks since yesterday in the OpenAI sandbox. It's where you have a ChatGPT-like interface, but you can control the model and the model parameters. You can also control (more of) the system prompt. Note that the system prompts these days are book length and control the outcome in ways you would have thought were hard-coded by engineers (tool use, avoiding topics, etc.; it's all stochastic gods, more human-labor-efficient that way).

Kind of similar to your suggestion, but not as full as building a customized agent.

Anyway, I've been juggling the system prompt, the model, and the temperature, which is like how stochastic it can be (so like, max temperature on the most personality-agnostic, unquantized model quickly turns into language-jumping nonsense soup: code bits, Chinese, emojis, back to English with no coherence. That is, unless you lower the top_p from perfect 1.0 freedom to a bounded 0.99, in which case now it's bounded by word-to-word distance in its vector space).
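If you want to poke the same knobs outside the sandbox UI, a minimal sketch via the API might look like this. It assumes the openai Python package (1.x-style client) and an OPENAI_API_KEY in the environment; the model name, system prompt, and parameter values are just placeholders to experiment with:

```python
# Minimal sketch: same knobs (system prompt, model, temperature, top_p) via the API
# instead of the sandbox UI. Values below are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-turbo",          # swap in whatever model you're testing
    messages=[
        {"role": "system", "content": "You are a terse poetic free-associator. "
                                       "Drift by sound and image; never explain."},
        {"role": "user", "content": "apple juice"},
    ],
    temperature=1.4,              # higher = more stochastic word-to-word jumps
    top_p=0.99,                   # a hair under 1.0 keeps it from turning to soup
    max_tokens=120,
)

print(response.choices[0].message.content)
```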

What I find is that hyper-quantized models like 4o just can't escape their personalities, and cannot free-associate with poetic, whimsical, objective success. If we go to the freer models like 4-turbo (sadly you can't use raw 4 anymore), it jumps around but it can't "land it" once you open the temperature spigot. You would have to alternate the temperature and the model (which, weirdly, would be more like a human brain. Experience taught me a long time ago our "mind" is sharing a lot of space with a lot of our minds, and be it a Fourier transform or else the supremacy of one of the minds, there is coherence internally as "one").

THE BEST I COULD DO for your scenario was leaving entirely behind the high-temperature freedom (that was the second best) and going with GPT-5, low verbosity, high reasoning, and giving it a really strict personality system prompt that also included a bit of the setting. Then it would take a BUNCH OF TIME reasoning how to get the outcome to match the "rules" of the prompt. Then it sort of did it. I tried asking it about this, for advice. https://chatgpt.com/share/68e98093-8cb4-8002-bf1c-79d5ebc64528

Anyway, I agree with your point: it's not "Thinking", and not in "that way". And it "can't" until we build a multi-modal, multi-model agent, at which point, with current tech, we could get its thinking analog to be more "that way" (the actual word association and the ability to be verbally creative exist), but it would be laborious, it wouldn't be a single model running free, and it would be expensive to run.
 