The AI Thread

Your "as if it might as well [have] been a hawk" is closer to expressing the second meaning than anything AI has said about it, so, allowing for the fact that you are (naturally) not committed to the same level of precision in spelling out the joke as I am, yes, you do get it.

Do you want to move on to "darned sock"?

It'll be tomorrow. Only stayed up this late for Kimmel.

All love and respect right back, my man.
 
That was just me fishing for somebody to say in so many words that it was a joke. I wanted that more explicitly stated by a human than just your "I get it" before I launched into my analysis.
Everyone, including the AI, understands it's a joke.
It does not. A pun involves using a word or phrase in two separate senses. To explain a pun, one designates separately the two different senses. In this instance, Copilot never gives the second sense "watched [the unidentified bird] as though [I was choosing to treat it] as a hawk." Entities that are accustomed to working with the meanings of words can conduct this kind of analysis. Entities that are not cannot.
It. Literally. Does. That.
Self-referential wordplay: The joke hinges on the ambiguity of whether you're describing your own behavior or making a sly nod to the bird itself.
"a sly nod to the bird itself" => "I'm watching it as it is an hawk"
You are the one who seems to have trouble getting meaning. According to your own criteria, then, you are not able to think at all.

As an aside: lots of people are dense enough not to get a lot of jokes. That you're so desperately latching onto the smallest hint of an AI not spelling out the most minute details in a joke as absolute evidence that it doesn't understand anything just means you're applying standards to it that you wouldn't apply to humans.
My position is Thorgalaeg's and Moriarte's. We attribute to the productions of generative AI the thinking processes with which we are familiar (our own).
No, that doesn't seem to be their position, which seems to be closer to mine actually.
Your position is that, as AI hasn't experienced the world like humans and doesn't have emotional reactions like humans, it doesn't understand the concepts it expresses.
Mine is that thinking can take different forms, and we don't even really know how we or an AI think in the end, and we can't compare the two from within, so checking the end result is the only way to actually gauge thinking.

Basically, you're arguing from a position that for an AI to be able to understand, it needs to "feel", while I'm arguing from a position that it needs to "think".
Another example is a parrot. When it says "Polly want a cracker," we might initially interpret it as "man, that bird can talk and is expressing its desire for a cracker; I should give it a cracker." When all it ever says is "Polly want a cracker," we realize that it has picked up that phrase and is repeating it without reference to what those sounds mean to a person who would use them with signifying intent. AI is a super sophisticated parrot, and since we're not accustomed to textual passages being generated in any other way than by a meaning human, we attribute to the textual passages it produces the same mechanism (ours) for producing those passages: "It's thinking!"
Except you can ask an AI to explain what it means when it communicates with you, and it can answer, which immediately throws the whole comparison out of the window.

I understand the underlying point: that just because it gives the illusion doesn't mean there is a real deal behind it. But my retort is precisely this: by asking about the answers, you can narrow down the supposed illusion, and if it can navigate all these questions with consistent and coherent answers, then what argument remains that it's still an illusion?

Speaking of questions, I'm going to ask again a pretty important one that you seem to have ignored:
You claim it doesn't, and that it only "mimics", and I ask: at which point does the "mimicry" become the real deal? Conversely, before which point is it only mimicry and not the real deal?
Because so far, you haven't actually been able to argue that "it doesn't think"; you've only argued that "it doesn't think the exact same way with the exact same reactions as a human". Which basically means you will only consider something able to think if it's human, which ends up in circular reasoning where only humans, by definition, can think.
 
sly nod to the bird itself.
"a sly nod to the bird itself" => "I'm watching it as it is an hawk"
"sly nod to the bird itself" =/= "I am watching it as if it is a hawk" The first one just requires the observation that I have used an idiom involving the word "hawk" in discussion of a circumstance in which a hawk might have been involved. The second one involves knowing the meanings of words.
lots of people are dense enough not to get a lot of jokes.
Right, and when they don't, we say they don't understand them. (That was why it was so important to me, as we launched into this discussion, to get you to say that you understood this joke.) And that--that AI does not understand jokes that depend on the meaning of words--was the very point this case was adduced to evidence. Here we can identify pretty precisely what it is that's blocking AI from understanding this joke: it doesn't work with signifieds, just signifiers.
"it doesn't think the exact same way with the exact same reactions as a human"

Mine is that thinking can take different forms
To that extent, it's just a semantic quibble between us. But the very proposition I'm arguing against is the claim that it does operate just like humans. The shorthand for that (when people are making the claim that I regard as overblown) is usually for people to say it "thinks." I think the more accurate phrase would be that it "compiles texts" (because I think the word "think" tends to cover lots more activities than just compiling texts). But whatever, the core point would be to say that it does that activity differently from how humans do.


at which point does the "mimicry" become the real deal?
Sorry, I had meant to specifically ask if we could set that broad philosophical question aside for a time, while we treated more focused matters.
 
Plus whatever one Hygro works with. Which is presumably top of the line. Because, you know, Hygro.

Pretty sure Hygro uses the $20 worth of GPT, same as me. There's also a $200 version, which allows for a ton more compute on the user's side. Also, there are a "surface" version and a "deep think" version. Hygro didn't use deep think, from what I saw on the screenshot. The latter would yield a far more comprehensive answer. And then there are specialised models for writing and entertaining ideas (probably the one best suited for the task), coding, drawing, etc.

For what it’s worth I am enjoying this conversation. Hopefully it continues in the same spirit.
 
"sly nod to the bird itself" =/= "I am watching it as if it is a hawk" The first one just requires the observation that I have used an idiom involving the word "hawk" in discussion of a circumstance in which a hawk might have been involved. The second one involves knowing the meanings of words.
That's... just flatly wrong; the sentence can (and does) totally mean that. You're deliberately choosing to interpret it in a different way, but that's just your choice, not what the words mean - which is deeply ironic considering your entire argument is that the AI doesn't get the "meaning", and for the second time I'm pointing out that YOU are the one missing it.

Just to put this case to rest, because it's hair-pullingly annoying:

[attached screenshot: ai.jpg]


Emphasis mine.
Right, and when they don't, we say they don't understand them. (That was why it was so important to me, as we launched into this discussion, to get you to say that you understood this joke.) And that--that AI does not understand jokes that depend on the meaning of words--was the very point this case was adduced to evidence. Here we can identify pretty precisely what it is that's blocking AI from understanding this joke: it doesn't work with signifieds, just signifiers.
This is just maddening.
So you recognize that many humans don't get such jokes.
You argue that not getting such a joke is definitive proof that AI can't think/understand.
I specifically pointed out that, according to this reasoning, many humans can't think/understand. You seem to have simply blocked/ignored this. Care to actually take it into account?
To that extent, it's just a semantic quibble between us. But the very proposition I'm arguing against is the claim that it does operate just like humans.
I'm going to repeat what I said from the beginning, because it seems we haven't advanced a single step since then:

How do you know? Have you been in the "mind" of an AI to see the difference?
I mean, nobody knows how our own minds actually work. The very people who program AI admit that they don't really know how it comes to the conclusions it does either. As I pointed out in my previous posts, our brains are also, fundamentally, a big computer that receives stimuli provided by sensors, stores data in its memory, and processes all that in its own self-contained bubble.

And anyway, let's say you're right and our minds work fundamentally differently. How does that alone prove that AI isn't able to "understand"? That it's different doesn't mean it's less capable (if anything, it's becoming more capable in a fast-increasing array of subjects).

The shorthand for that (when people are making the claim that I regard as overblown) is usually for people to say it "thinks." I think the more accurate phrase would be that it "compiles texts" (because I think the word "think" tends to cover lots more activities than just compiling texts). But whatever, the core point would be to say that it does that activity differently from how humans do.
What does it mean to "think", then? Because you seem to just dance around, playing on semantics here.
Sorry, I had meant to specifically ask if we could set that broad philosophical question aside for a time, while we treated more focused matters.
When your entire point is that the AI doesn't understand meanings, and your whole argument is about nitpicking its answers (in ways that would prove many humans also aren't able to understand meanings), I don't see how establishing the actual criteria you use to determine who can and who can't understand meanings could be set aside. That's basically the entire basis of the debate.
 
For what it’s worth I am enjoying this conversation. Hopefully it continues in the same spirit.
Me too! What was hardest for me was when you implied that my "eat rocks" joke meant that I wasn't interested in rational discussion.

I am. I just like throwing jokes in the mix when I'm having one.

You know. Because I'm human and we humans enjoy jokes. It's one of the things we do with language.

On to you in a bit, Akka.
 
To that extent, it's just a semantic quibble between us. But the very proposition I'm arguing against is the claim that it does operate just like humans. The shorthand for that (when people are making the claim that I regard as overblown) is usually for people to say it "thinks." I think the more accurate phrase would be that it "compiles texts" (because I think the word "think" tends to cover lots more activities than just compiling texts). But whatever, the core point would be to say that it does that activity differently from how humans do.

But it doesn't just compile texts. It actually thinks in real time, and the train of thought can be tracked live. I believe DeepSeek is the most open AI in that regard (and free). So you ask a question, open the think tab, and start reading. The AI talks to itself after receiving the prompt. It starts with what it thinks is the notion at hand. Then, as the thought progresses, the AI finally stumbles upon evidence contradicting its initial understanding. It says "wait, I have to correct my thought sequence to account for new evidence". Then it proceeds to check for all the logical fallacies that could occur along the way, and keeps thinking and thinking. It does inductive & deductive logic and, seemingly, many other forms of reasoning.

AI synthesises new words from separate domains, sometimes in English, but more often in other languages (for some reason). More often than not they sound good, which blew my mind, frankly.

Ok, my attempt to categorise AI strengths and weaknesses in particular domains of think:

Strongest: induction (pattern learning) and deduction (formal rules).
Developing: abduction (proposing plausible explanations), analogy (mapping patterns between domains), heuristics (reinforcement learning with exploration rules).
Weakest: dialectics (contradiction, synthesis of opposites), reflection, embodied intuition, creativity.

If we were to zero in on creativity, I would say that your general argument that recombination is not true human imagination is, at least in part, valid. In my observation, humans can jump further between distant conceptual domains than AI. We are more crazy!

Furthermore, while AI recombines ingredients masterfully, the human mind generates new conceptual frameworks that can redefine what "ingredients" even are.

So, on the basis that so many facets of thinking actually exist and mirror our own, the sum of those facets places humans and AI in a comparable ballpark. Humans are stronger in some domains, AI in others. All that before the agency discussion kicks in.
 
Furthermore, while AI recombines ingredients masterfully, the human mind generates new conceptual frameworks that can redefine what "ingredients" even are.
Perfect! There is zero substantive disagreement between us, then.

It is this that is the highest level of thinking, and it's why I want to deny AI that term. If we say that AI "thinks," riding along with that word are all the kinds of things humans have been able to do with their minds. But if AI in fact can't do very well or do at all some of the things that we mean by "think," then we've misrepresented its capacities by using that word. We need to distinguish (as you've done very well here) its specific capacities and limitations.

It's also funny, by the way, that you mention "embodied intuition." In order to try to substantiate my claim, I had a hunch that puns would be the way to do so. (I have another hunch that I'm working on, so that we can shift gears from my hawk pun that maddens Akka so much). I'm not conscious of the source of that hunch being my embodiedness, but it was an intuition. (And I do think bodies are going to turn out to be crucial to full-fledged thinking).
 
the sentence can (and does) totally mean that.
It can mean that; it does not necessarily mean that. It's a question of specificity. When somebody gives an answer in a vague way, you often drill down to specifics--even if, at that level of generalization, the statement is accurate--to see if they really have an understanding of the thing in question. Here, as soon as you ask it to spell out with specificity what the second meaning is, it fails. To spell out what makes this funny, you have to say, "even though you couldn't determine which type of raptor it was, you picked one that it might have been and watched it as though you had determined what type it was." The AI statements you cite above do not say that, don't say it fully, don't say it with the full level of specificity that would let us know it was understanding the meanings of the words.
I specifically pointed out that, according to this reasoning, many humans can't think/understand.
In the case of the pun, we are using it as a particular kind of statement to expose a general limitation in how AI "minds" work: that they are not working with the meanings of words. That humans work with the meanings of words is not a point in question. So if a particular human doesn't get this joke, it doesn't serve as an illustration of a trait that is generally true of humans.
I mean, nobody knows how our own minds actually work.
But we know some ways they don't work. When I give you an answer to a question, I don't do it by going and searching millions of internet statements on that topic.
 
It is this that is the highest level of thinking, and it's why I want to deny AI that term. If we say that AI "thinks," riding along with that word are all the kinds of things humans have been able to do with their minds. But if AI in fact can't do very well or do at all some of the things that we mean by "think," then we've misrepresented its capacities by using that word. We need to distinguish (as you've done very well here) its specific capacities and limitations.

A philosopher questioning the nature of truth rates higher in my book than creative genius. I place reflective thinking above creativity in your pyramid. The facets aren't static, either - they undergo constant evolution in the hands of programmers and cognitive scientists. It wouldn't be fair to say that some of the facets are missing entirely from AI. Represented in different proportions compared to the human mind, maybe. But all facets are present, because human thinking was the template for AI's creation.
 
A philosopher questioning the nature of truth rates higher in my book than creative genius.
Well, now, them's fightin' words.

I'd love to see a case where you think AI "generated a new conceptual framework that redefined what “ingredients” even are."

A little more on embodiedness. One thing AI can't do is write metered verse. To do so requires feeling words (in English how much stress they have, basically, though it's not quite as simple as that). Even at the level of the signifier, AI works mostly with the written signifier, and not the spoken one.
 
I'd love to see a case where you think AI "generated a new conceptual framework that redefined what “ingredients” even are."

Would you hug AI and welcome it to the human family if I did?

What you're describing is cutting edge, beyond the horizon. Whether we want AI to start conceptualising such frameworks is a prerequisite question we need to answer. And if we let it, how many of us will be able to understand the answer? Creating a "black box of reality" is appealing to some, but a no-no for others.

If AI becomes a “framework generator,” it could be our most powerful ally. Or the point at which human agency in knowledge creation slips away.
 
Would you hug AI and welcome it to the human family if I did?
I can't hug AI. It doesn't have a body.

I would refine my sense of what I regard as its core limitation--a constitutional limitation, as I understand it: its inability to innovate.

More to say at some point on the question of "would we be able to follow it?" and "should we let it do this?"
 
It can mean that; it does not necessarily mean that. It's a question of specificity. When somebody gives an answer in a vague way, you often drill down to specifics--even if, at that level of generalization, the statement is accurate--to see if they really have an understanding of the thing in question. Here, as soon as you ask it to spell out with specificity what the second meaning is, it fails. To spell out what makes this funny, you have to say, "even though you couldn't determine which type of raptor it was, you picked one that it might have been and watched it as though you had determined what type it was." The AI statements you cite above do not say that, don't say it fully, don't say it with the full level of specificity that would let us know it was understanding the meanings of the words.
Frankly, that's just grasping at straws here, and it's becoming painful. As I said several times already, if we were to apply such absurdly specific requirements to people, most of mankind (probably all of mankind, actually) wouldn't pass the test. Which makes for a very, very, very bad test.
In the case of the pun, we are using it as a particular kind of statement to expose a general limitation in how AI "minds" work: that they are not working with the meanings of words. That humans work with the meanings of words is not a point in question. So if a particular human doesn't get this joke, it doesn't serve as an illustration of a trait that is generally true of humans.
=>
Because so far, you haven't actually been able to argue that "it doesn't think"; you've only argued that "it doesn't think the exact same way with the exact same reactions as a human". Which basically means you will only consider something able to think if it's human, which ends up in circular reasoning where only humans, by definition, can think.
But we know some ways they don't work. When I give you an answer to a question, I don't do it by going and searching millions of internet statements on that topic.
=>
1) Not consciously, maybe, but you have, by definition, absolutely no idea how your unconscious works. In fact, we have absolutely no idea about how the minutiae of our thought process work. When I'm trying to remember something, or when I'm trying to phrase some idea, all the "low-level" work is completely hidden from my conscious mind, and I don't know how many neurons are working together to produce a global potential that brings the data to the part of my brain that processes it - or whether there is even such a distinction at all.

2) You likewise have absolutely no idea whether an AI consciously does such statistical work, or whether it is just the same sort of nebulous overall data "cloud" that surfaces to be processed, or whether an AI is even conscious to begin with.


I think I'm going to give up; it's just going in circles here, hence how I can basically copy-paste previous answers that were never actually addressed.
 
if we were to apply such absurdly specific requirements to people, most of mankind (probably all of mankind, actually) wouldn't pass the test.
I actually thought about adding a stage where I asked you to spell out the second meaning that makes up the pun. I now regret that I didn't do that. I feel certain you would have been able to do so.
you haven't actually been able to argue that "it doesn't think"; you've only argued that "it doesn't think the exact same way with the exact same reactions as a human".
But see my exchanges with Moriarte. This is coming down to a definition of what the word "think" should mean, and I've given my reasons for wanting to define that word in reference to the highest levels of intellection that humans can manifest.
absolutely no idea how your unconscious works
But I know one way that even my unconscious doesn't work, and that is consulting millions of preexisting texts on a subject. Plus, I know some ways it does: hunches, e.g., embodied thinking (sometimes you get the rhythm of a poem first, and only later fit words to that rhythm.)
 
domains of think
I've been reflecting on this phrase, and I think that (in addition to being a very useful formulation for me) it is a perfect example of a thing that human minds can do, and that I have yet to see evidence that AI minds can do. In three words, it opens up the possibility of a "new conceptual framework" (and then your chart afterwards fills that out).

Here's what I mean. It does several things. 1) It takes a word--the key word that we have under consideration, think--and it nudges it from the singular form in which we tend to conceive of it to a plural form. This it does through the use of the plural in "domains." Is that in itself ground-breaking? Of course not. That could be achieved, in a particular conversation, by a person saying "well, there are actually a lot of different activities lumped under the verb to think," and the interlocutor saying, "well, yes, of course." But still it's a useful move, because the fact that "think" sounds like it is one thing is the very thing that needs to be resisted in this case. 2) More pertinent is the metaphor in the word "domains." Domains are territories, so they're the kind of concrete thing with which our minds operate more easily. This is what almost every metaphor does: takes something abstract and connects it to a tangible, concrete item--because our (embodied) minds have day-to-day experience working with concrete items. So now we can picture (at least vaguely) the different kinds of thinking our mind can do as different nations on a map, say, and our minds are getting prepared for the chart, which will also lay things out in that fashion.

What I think is really crucial to the effectiveness of this phrase is 3) the ungrammaticality of "domains of think." The "proper" way to say it would be to say "domains of thinking." Or one could understand there as being some omitted material "domains of [what we mean when we use the word to] think." Doesn't matter. Either way the ungrammaticality makes us perk up. But this particular ungrammaticality has a function, which is in effect to call attention to the fact that people generally use the word think as though it names a single action. So it jolts us out of our conventional way of considering the topic in question. It also shifts the precedence--I will say--from the singular conception to the plural one, because it's the singular that is the "mistake."

I don't know if you've used it before or invented it for this discussion, but regardless it has the effect of making certain concepts more easily thinkable, and maybe even really-thinkable-for-the-first-time. It has that full effect because you gave yourself the permission to think/talk outside of the (grammatical) conventions that limit (though they also enable) that thinking/talking.

I think it does one thing more, in addition to all of that, but I'll stop here, and say that I don't think AI could have floated the phrase 'domains of think,' let itself have that permission, to think outside of the guidelines that enable-but-also-confine its operations.
 
I read the Wray article. I mean, I read it a dozen years ago as well. It's really not that pertinent. What if we're measuring efficiencies wrong? Very important topic: the corn example highlights why thinking macroeconomically is smarter than cost per unit produced in a vacuum. It's a little strange because the foundational economic text is on specialization (there should be two goods, Ricardo-style), but that's ok; his point is good. Obviously not relevant to AI, but you highlighted his doctor example.

The doctor example is particularly backward in our comparison. The doctor spends less time with the patient because the doctor has to spend more time on paperwork. That's the problem he highlights: inefficiency caused by bad institutional impositions or societal methods of financing medicine. The efficiency here is that the doctor spends LESS TIME doing paperwork (thank you, 1000 AI companies that will collapse to 3), so they have more time to treat patients. Jevons' paradox will find us, but the initial promise is bearing true: this is a time-saving technology, saving us from tedium, giving us time for more throughput, which in a doctor's case is either a) more patients or b) better service for the same number of patients. Both of those are net wins. I already have a doctor friend whose notes are written faster, at higher quality, and input into the correct format by the passive inclusion of AI in the room.

My reading of Wray's argument is significantly different than yours appears to be. He mentions paperwork, but also specifically mentions doctors being under pressure to increase throughput.
You appear to agree with our hypothetical doctor's boss that increasing throughput is desirable, that it makes sense to apply a quantitative measure of efficiency to a doctor's work, and that the real problem is doctors doing paperwork rather than seeing patients.

The analogy with writing is apropos. In making messianic promises about the technology's transformative potential, the LLM prophets are (mostly) inappropriately applying engineering terms to humanistic processes they don't understand.
 
Furthermore, technical efficiency is a sexy cover for financial efficiency...
 
My reading of Wray's argument is significantly different than yours appears to be. He mentions paperwork, but also specifically mentions doctors being under pressure to increase throughput.
You appear to agree with our hypothetical doctor's boss that increasing throughput is desirable, that it makes sense to apply a quantitative measure of efficiency to a doctor's work, and that the real problem is doctors doing paperwork rather than seeing patients.
Now, I said more or better throughput. If a doctor is stuck spending 30% of the day in the office dealing with insurance paperwork that an AI can automate easily and safely, you free them from that; they still have another 20% of overhead they can't save, so their patient-facing time goes from 50% to 80%. Both outcomes are good: either more patients, or more time with the same patients, and both cost the doctor equally in their workday (i.e., "free", economically speaking).
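A quick toy sketch of that arithmetic, using the illustrative percentages above (nothing here is real data):

# minimal sketch of the workday split described above (illustrative numbers only)
insurance_paperwork = 0.30  # share of the day an AI could plausibly automate
fixed_overhead = 0.20       # share that can't be saved either way
patient_time = 1.0 - insurance_paperwork - fixed_overhead   # 0.50

# automating the insurance paperwork frees that whole share for patients
patient_time_after = patient_time + insurance_paperwork     # 0.80
print(f"patient-facing time: {patient_time:.0%} -> {patient_time_after:.0%}")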

Wray also says this:
This is also happening on university campuses, of course. Professors reduce their office hours—or skip them entirely—and send students to the much cheaper teaching assistants as the efficiency fairies work to preserve more time for faculty to spend doing all the paperwork required by a burgeoning administrator staff that has nothing better to do than to create new paperwork requirements.

But this is also where AI is a great use case for freeing time to go back to office hours. A ChatGPT window will do a lot, but the real sauce is an agentic system that takes the paperwork's who/what/where/when/why and does it all for you; you just review and sign off, or make edits, in literally Slack, Discord, WhatsApp, text, etc. And the administrators are going to be doing the same, so it's a closed system. It costs a little money, runs a lot of computers, and frees humans to return to the core product, aka value, aka, in the university's case, the core mission of educating young people's minds to be critical thinkers with advanced tools of thought.

AI is specifically good for the bullsht work killing ADHD people like me. The prof still has to write a grant to a real board. AI can help 50% there, with the boilerplate and time management of that task, but it's not the full use case; the rest requires more critical attention. The prof and team still need to write their own scientific papers. AI can help there, but the real work is left.

But that's the fun stuff! Writing the true nature of the research for the paper is the fun part. That's why you do it!

There's that joke "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes." Agreed. Also if you don't do your own laundry your body atrophies, ADHD symptoms intensify, but that aside. I agree.

But the cool thing is, AI isn't doing art and writing! Take music. 30% of new music uploaded is AI. But it only gets 0.5% of streams, and 70% of those streams are bots helping run criminal streaming fraud. AI isn't replacing art, not yet, not meaningfully. AI isn't replacing writers.

What it replaces is all the fatiguing grind that's stopping us. No, not the laundry. And laundry's a nice break. It's replacing boilerplate, data repetition, all the high-attention, tedious, mindless stuff that keeps creatives from finishing. It's also raising the stakes, so it's not only good; there's some mammon, there's some Jevons' paradox. But a writer can now use AI to organize and automate parts of their paperwork life that were a huge pain before. It's great at taxes! Great at scheduling, great at building spreadsheets, and great at reading them. Great at a lot of things. It's great at math, and it's great at coherent arguments, two things it was terrible at 3 years ago.

But it's not the best at coherent arguments, so it's still fun to be an arguer! I wish more people at CFC used AI behind the scenes to help them perfect their own arguments (but not copy + pasting!! 😤) just to level this place up. It's low-hanging fruit. We've got a good mix of views and personalities, and people are only here "for fun", so it's still gonna be them. Just like I love when people research their own posts, even just a top Google search or Wikipedia. Sources are nice; you don't need to cite them as long as the post is good. And everyone loves when someone had a great college class and for like 2 years they have amazing topical zingers and in-depth arguments; it raises the bar and the fun for basically everyone. No reason not to use AI search and critiques to really drill in before writing whatever. The point is that all tools / experiences are good behind the scenes here; none replace the debate.

You won't see it from the outside, but the use cases are very diverse (just like HTTP) and are strongest away from that initial hype where they showed it could produce art-like images and output a story. Those elements are sideshows; they can't replace artistry. The tools to authentically augment artistry are still in the works, and currently it lacks any ability to precisely see through a specific vision, a critical tech breakthrough that lies outside the current GenAI paradigm.

The analogy with writing is apropos. In making messianic promises about the technology's transformative potential, the LLM prophets are (mostly) inappropriately applying engineering terms to humanistic processes they don't understand.
Everything I've said across many huge posts has told you the industry is trying to apply humanistic, natural-language interfaces to solve engineering and business-process problems. So it's really, like, the opposite (and I know some real nerds who were hyped on AI imagery being valid for consumption, ew): the real thing isn't applying engineering to human stuff, but applying a human-interfacing thing to engineering / business / impersonal stuff.

I'll give you an example: in biotech they do a ton of work that generates a ton of data. Right now, making that data human-usable takes as much time and human effort as generating it. But a complex tool-using multi-agent system cuts that time and human effort down 90% for the cost of 1-2 employees. If the bubble crashes, everyone pays full cost for their own chips, it's never subsidized again, and there are no meaningful updates to the tech from today, then it's reducing human effort 90% for the cost of 10 employees. But in today's real world, where it's 2, you still have to read the stuff once it's applied. It'll shift more resources into science, aka data generation, which uses more of this, but funding remains the same, so instead of doubling research, the math says +25%-33%, and realistically it boosts research "only" 20%. That's incredible.

At a standard 100-employee biotech startup, that means hiring another 30 scientists. DO U LIKE MEDICINE? Yes ✅ It's also just the trend line for capitalism, but for the trend line to continue we need constant breakthrough technologies. This is a breakthrough technology. Breakthrough technologies in my lifetime that have reached *me*: LLMs, the internet, probably weird advances in concrete that I take for granted, mRNA vaccines, GPUs, and I dunno, probably forgetting about 4 others. All of these things are required for the real economy to grow 3% a year (or 8% if we had good governance).

I've noticed a trend about people who are extra-hard skeptics. A few trends, but I really want to highlight that most of the people I know who think AI sucks think it sucks because they think we already have much better technology than we actually do. Almost all of the internal business applications of LLMs, which are its revolutionary power right now, are things people thought we already had and did 20 years ago, especially 10. But we didn't! The things people assumed were in place didn't exist. We lacked theory, we lacked computing power, we lacked putting a million engineers on the job.

Like, people think Google was using semantics like this for search 10 years ago, or Facebook was to find your interests. Nope, mostly just more basic relationships and extensive tagging. People think big tech has been spying on everyone and using ML algorithms to figure out what everyone does versus everyone else, to know everything about you. Nope. Dead people showing as alive, duplicates; Google, with all my data, thought I was a middle-aged woman for a while when I was a young man (they had access to my Facebook URL history and I had no clear reason why). LLMs are already sussing out that I'm Hygro and suggest many correct social media platforms to sniff me out. Using an LLM may cost more in compute than more basic crawlers, but they get it closer to right, more comprehensive, better organized, better asterisked ("this might be someone else with the same name"), and can just iterate on people and save that info. We still aren't being meaningfully tracked like people imagine (people imagine perfect tech and implementation based on the current tech possibilities they read about, like "surely if I can imagine what they made, they already did the thing". Nope).

The reasons this stuff is so big and important and new and exciting are myriad, but it's important to remind the huge portion who think the computer tech from movies is real: one of the biggest reasons it's so big is how little we actually had, and how much this one does that nothing did before.
 
I actually thought about adding a stage where I asked you to spell out the second meaning that makes up the pun. I now regret that I didn't do that. I feel certain you would have been able to do so.

But see my exchanges with Moriarte. This is coming down to a definition of what the word "think" should mean, and I've given my reasons for wanting to define that word in reference to the highest levels of intellection that humans can manifest.

But I know one way that even my unconscious doesn't work, and that is consulting millions of preexisting texts on a subject. Plus, I know some ways it does: hunches, e.g., embodied thinking (sometimes you get the rhythm of a poem first, and only later fit words to that rhythm.)
Yes, I agree, AI doesn't think.

But I also agree with Akka regarding your joke. The AI explained it. The AI's explanation was semantically equivalent to your explanation, and to my explanation, which you validated. For whatever reason, you don't like its awkward goober phrasing. I don't think everyone's going to get the joke; I think the pun beats the joke by a mile, and the AI was correct to land there first, but it still lands on the joke-joke second and explains it.

I have friends who would get that joke and explain it the same lame way, correctly, after they laughed, in a way that somehow would completely ruin my trust that they actually got it, like that human connection between us through the joke. AI is a goober like that.

Your complaint is that it fails to say it got the joke in a way that validates the reason, and the connection, for which one would make such a joke. To spark alive the universe and your moment together. But also, like, thank God. AI is folding your joke laundry; you are now free to share it with people and not worry about AI. Is that pointing toward the Utopia we're promised?
 