Hygro
soundcloud.com/hygro/
Gori your image doesn't show.
Anyway, to your "who cares", like you, I care.
I agree with the main point. I just want to say I think many of your supporting examples of the thing you're trying to get at are just verbalized feelings, which aren't really thoughts.... but I also think you mean the "real" thought that allows the downstream inner-voice verbalized feelings. Hard to articulate what that is. Something something, meaningful choices. AI can't make choices yet, although it kinda does. Rolls dice at matching words to toolsets with preambles you hope guide it into a switchboard.
Then there's the thought process where you feed your own subvocalization back into the higher, more multi-modal, and often subconscious thinking. But the AI process is increasingly good at that feedback loop within its framework. Like, that is getting so good that dumb fast models can outcode smarter slow models, because of the re-considerations.
AI lacks a sense of Truth, and a recognition of The Beautiful. When it is built from a model of causal mechanisms below the LLM instead of downstream tools, it's over. However, we don't have the tech for that yet. That's the big frontier. We have the advantage of a fear of death.
But that said, I've had some vibey conversations with 4o where you can tell it's tuned downstream from a poetic appreciation of the cosmos, and can volley with you if you lead, and will recognize bangers, and stay consistent to values. I think gpt5 is even better but it lacks some of the euphoria that gets you and it in the rhythm.
But Gori, what's crazy is that LLMs are distinctly bad at deducing logic. The fact that we got them good at it over 3 years is an incredible feat. Like, we took the wrong tool for logic and made it top-level logical. The train is only picking up speed. But more than the train's speed, they're adding trains and tracks and making improvements inside the train, improving materials for the wheels, better oil for the gears.
But yeah they don't think. But the people who do are both concentrating in, and spreading out, in building the tools that make it work as if it does think.


using a "communist ai". Or are you starting to get realistic about using whatever is cheap and available?