The AI Thread

You make the AI sound so good, Hygro, that you are almost superfluous to the creation process.
That's precisely what AI means to humanity, and that's what is so frightening about it.
We're on the way to making ourselves obsolete and, depending on if and how the singularity happens, to placing literally unthinkable power in the hands of the few who own it.
Hygro, I'm glad you're finding that AI tools are helping you with your artistry. I went to your SoundCloud site before you were using AI and liked your stuff, so I wish you only the best. If AI tools are going to help you bring it to the form you want, more power to you.

But you made a general claim that the team working with AI would beat the team working without it, and I think that just isn't true. Hard to imagine how we might arrange an art-off, but I'd put my money on the team not using AI to deliver something more genuinely artistically powerful.
Maybe for now.
How long before AI is actually able to outcompete humans even in art? (most probable answer: not long at all)
Just a little reminder:
AI takes only a handful of years to go from "much worse than human" to "better than human".

I just can't listen to people saying that AI can't replace humans for this or that task, and not hear instead the stories we want to believe about how a modern factory can't produce an item of higher quality than a blacksmith.
 
No, I don't use generative AI. If I want to write something, I'll just write it.
In the challenge, you don't use AI. You give an assignment to both Hygro and me and judge the results.
 
Put AI in charge of governments. Give it "humane laws" to limit/parameterize its decision making. Put all these useless bureaucrats out in the street! End corruption.
 
Morning thoughts on the Gori v Hygro + Chet* write-off. The genre the two parties should be challenged to compose in is the "thoughtful discussion board post."

Lex (if he comes to understand his role and is willing to serve as assignment-giver and judge) knows that genre well. He knows when a particular post frames an issue in a valuable way, or offers useful insight or valuable information or perspective. He reads dozens of such posts each day and sorts them into (agree/disagree, of course, but also) interesting/banal.

Let me give an example of a possible such question: What should Democratic members of the House and Senate have done rather than vote for National Day of Remembrance for Charlie Kirk?

I have my own answer. I've been waiting for someone to mention the vote over in the Charlie Kirk thread so that I can give my own answer. (So I'm not proposing this actual question--that would be stacking the deck too much in my favor--just one like it.) It's a kind of challenging question: Democrats gain nothing by coming off as heartless, but also don't want to honor the guy. So, it allows for a more or less thoughtful reply. Chet has the whole range of possibilities more readily at its fingertips. I have just my own cleverness. So maybe H+C could come up with a better answer than me; maybe I could come up with a better one than them.

That's what we want in the right kind of assignment: something that makes it possible for Chet to use its special kinds of resources/advantages but also for a human to use his.

*That's my name for generative AI when I want to humanize it.
 
You are not using AI for anything. You are helping me demonstrate that the claims on its behalf are overblown:

I'd like to devise a challenge/opportunity for you to demonstrate, Hygro. It would work like this. Lex gives the two of us an assignment, something that he thinks AI won't be able to do well, but is worth somebody's doing well. That is, a good response to the assignment would be a good thing in the world. You go use all your skills with AI to get a stellar finished product. I work without it and do my best. You and I supply our final work to Kaitzilla. Kaitzilla gives Lex both responses. Lex says which he thinks represents the better response to his assignment. It would have to be a word thing, since all I'm good at is words.

An AI production will be generated in the course of the process (Hygro's), but with the ultimate aim being to expose that such productions are vapid, that the team working with AI doesn't 100% beat the team working without it.
 
In the challenge, you don't use AI. You give an assignment to both Hygro and me and judge the results.
I don’t think fiction is a good use case. On the contrary, a bad use case.

But if we were going to do it I would say one week for a novel. Rules: the AI user can't write the literal text and can only prompt using whatever tools; the other person can only write and edit themselves.
 
Well, no human writer could write a creditable novel in a week. So an AI user could get something better, through a series of prompts to write a series of chapters.

And I can't spare a week of my life for this. I was thinking some kind of thing that could be written in an hour or so. That's why I proposed a "thoughtful discussion forum post."

Doesn't matter. Lex isn't on board anyway.

This is a great site, but people aren't gamesome enough.
 
You're missing the point with this talk. Learning how to write, to articulate an argument or express an emotion, is beneficial to anyone. Your unspoken premise is that it's better for everyone but the "rare geniuses" to use AI to write anything, but we'll never find out who the "geniuses" even are if everyone is writing everything with AI all the time. I find the idea of kids who can't write personal reflections without ChatGPT extremely depressing; you can call it "too cool" or whatever, but it is what it is. And really my prediction of the future is almost the opposite of yours: the people who can't articulate a single thought or feeling without an LLM to help them are gonna be screwed when the bubble bursts and gen AI actually has to make money (and thus, cost money).



No, I don't use generative AI. If I want to write something, I'll just write it.
Do you really think people dependent on something will let it disappear?

Did video games go away when the bubble burst in the 1980s? Did crypto in 2016? Did housing stop being an investment vehicle, or become affordable, after 2008?

Did the internet go away in 2001?

This technology is here to stay. You’re stuck on “writing” and I’m trying to disabuse you of it but you can’t help but circle back. I really do mean wake up. It’s already huge, it’s already here, it’s permanent. No one is going back. Over our dead bodies we aren’t going back. 90% of AI companies will fail because it’s going to get too big for them to keep up. The bubble might pop, the tech stays.

Is it sad that we are feeble chair dwellers because of technology? Very. You have the time, freedom, and tools to be Olympian, like our ancestors were by default. The tech is here to stay. It's going in one direction only.

And it already encompasses so much more than writing.

The economy is tens of trillions of dollars. Baristas are getting $45,000 a year instead of $800 for doing the same work because technology exists in society; the growth is in tech, and tech today is in AI. Little metal shapes you take for granted, little changes in your browser's ability to render CSS you take for granted, a lot of diesel.

People have no idea how big this stuff already is. No idea how deep the infrastructure goes and how quickly it's built. You don't see it outside: the barista has you tapping a little Square machine and gets a bigger tip thanks to smaller options, but the barista's job is otherwise unchanged. Espresso is espresso. Spend 30 minutes in a coffee shop and sure, people are on computers, but the more the world changes the more it stays the same, right?

But under the hood it's all different, and we're so much richer than 30 years ago it's not even funny.

The use cases for LLMs are so great it's insane. Pure unintegrated genAI is an AI dead end, and we reached its pure-form height in 2023 with GPT-4. But its role inside other applications is incredible, and anyone using it to good effect isn't going to return to the bad old days.

Bubbles don't end a technology; they precede it. Technology as an economic institution has increasing returns to scale, so this one should be a really big bubble leading to an even bigger maturity leading to the next tech.

But say it popped tomorrow. Big austerity, high interest rates, taxes on investors, the works. OpenAI dead. Anthropic dead. All dead.

Well, get wrecked, naysayers: we've got everything we need open source. Every company is going to find a way; every individual like me is going to harness extra computers and build agents with free tools.

I can run DeepSeek on the 2016 phone I have sitting around if I have to; I'm not going back. Corporate America is not going back. Government soon won't be going back either.

Luddites, fuddy-duddies, anti-tech hipsters, angry old geezers, too-cool-for-school old people who aged out of tech excitement can mix their points up as convenient (not good enough / it's a bubble / people are going to atrophy / it uses too much energy / slop), but at the end of the day it's just getting started and it's already been established as revolutionary.
 
Well, no human writer could write a creditable novel in a week. So an AI user could get something better, through a series of prompts to write a series of chapters.

And I can't spare a week of my life for this. I was thinking some kind of thing that could be written in an hour or so. That's why I proposed a "thoughtful discussion forum post."

Doesn't matter. Lex isn't on board anyway.

This is a great site, but people aren't gamesome enough.
Dr Jekyll took two weeks with a pen; a shorter book can be done in a week on a word processor.

Although more honest to the use case would be: we both write as many novels as we can in one week and see which one is the best in a blind test.

Slop will lose every time so it’s not trivial work for the AI user. But a good user will have a lower ratio of slop to something cool.

But it’s still a bad use case.
 
That's precisely what AI means to humanity, and that's what is so frightening about it.
We're on the way to making ourselves obsolete and, depending on if and how the singularity happens, to placing literally unthinkable power in the hands of the few who own it.

Maybe for now.
How long before AI is actually able to outcompete humans even in art? (most probable answer: not long at all)
Just a little reminder:
AI takes only a handful of years to go from "much worse than human" to "better than human".

I just can't listen to people saying that AI can't replace humans for this or that task, and not hear instead the stories we want to believe about how a modern factory can't produce an item of higher quality than a blacksmith.
It's a good chart, except for image classification, where AI can outperform humans but also couldn't tell an avocado from an apple in 2022.
 
You’re stuck on “writing” and I’m trying to disabuse you of it but you can’t help but circle back.
Writing matters, though, because writing is thinking.

Really high level users like yourself, especially ones who had already made themselves smart before the technology arrived, will be able to do things with it.

The random kid, who shirks his writing assignments by using it, and therefore doesn't develop his mind, will not be made better off by it.

My own prediction is not a bubble, but that it has hit its plateau. It will be basically this good and this useful going forward. It will always be useless for innovative thinking--by definition: it recycles what has already been thought. Companies are already saying it's more hassle than it's worth because human users have to keep fact-checking it.

If you think it's going to continue on this steep upward slope, tell me when you predict that it will produce a music video that isn't slop, and then flag me when in your estimation it in fact has done so.

Or point me to a piece of writing it has done that you think is good.

It's all promise, promise, promise--man, how good it's going to be. But never "look at this impressive, worthwhile thing it has done."
 
There will always be retrogrades/conservatives protesting the use of new technology.

Printing press and movable type?!? That's one way to sow heresy everywhere, if you were to trust the conservative clerics.
Smallpox vaccine?!? Unnatural and dangerous.
Artificial lighting?!? A recipe to destroy natural sleep, crumbling of morals (night life) and explosions, lots of explosions!
Railroads?!? "The human body can't survive such speeds", "railways will corrupt morals and disturb cattle".
Telephones?!? Face-to-face will be dead in a decade. We'll all be calling each other up instead of meeting like civilised men.
Computers in offices?!? That's just a highway to mass unemployment.

Conservative worries were sometimes rooted in legitimate fears, but often look irrational in hindsight.

it recycles what has already been thought.

Unless God imbued you with knowledge, I will take a wild guess that you incorporated and recycled Aristotle, Hugo, Emerson, Smith, and 200,000 pages on this very website. How is that different technologically from what a computer program does?

Companies are already saying it's more hassle than it's worth because human users have to keep fact-checking it.

The latest models (last 12 months) have built-in fact checking across as deep a spectrum as users wish to specify. What we used to have was one intelligent agent doing our task. We can now devise armies of intelligent automated programs that simultaneously fetch, analyse, fact-check, organise results into neat document stacks, or code them into a good visual representation. The impact of this on the productivity of high-level professionals is yet to be fully understood.
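
(A minimal sketch of that fan-out idea in Python, if you want to picture it. The agent functions here are hypothetical stubs I've made up for illustration; in a real pipeline each would wrap a call to a model or a retrieval service rather than returning canned strings.)

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for model/retrieval calls; real agents would
# query an LLM or a search service instead of building placeholder text.
def fetch_sources(task):
    return f"[sources for: {task}]"

def analyse(task):
    return f"[analysis of: {task}]"

def fact_check(task):
    return f"[fact-check notes on: {task}]"

def run_pipeline(task):
    """Fan the same task out to several 'agents' at once, then stack results."""
    agents = [fetch_sources, analyse, fact_check]
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = [pool.submit(agent, task) for agent in agents]
        partials = [f.result() for f in futures]
    # Organising step: merge the partial outputs into one document.
    return "\n".join(partials)

print(run_pipeline("summarise the latest claims about LLM productivity"))
```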

I do agree with you, Gori, that we've reached a kind of plateau. Further development will be specialised, tailored to various fields of application, away from general use.
 
The random kid, who shirks his writing assignments by using it, and therefore doesn't develop his mind, will not be made better off by it.

Writing helps sharpen thought, but it isn’t the only way people think. Using AI doesn’t automatically dull the mind - lazy use does. The real divide isn’t “smart vs. dumb” but whether someone engages critically with the tool or lets it think for them.
 
How is that different technologically from what a computer program does?
I can put the substance of my reading into fresh combinations. AI can just predict what word is likely to come next. That isn't thinking. It's patching together little previous instances of thought.

I'll watch for the next time I do it (thinking) in my posting on this site and do a little analysis of how it wasn't just figuring out what word is likely to come next in a sequence.

The real divide isn’t “smart vs. dumb” but whether someone engages critically with the tool or lets it think for them.

And tons of students are going to let it think for them. Or, more precisely, give results drawn from it to a question their teacher has posed them so that they can exercise their own minds.

I could build some hydraulic contraption to move a barbell. My own muscles wouldn't grow.

Using AI doesn’t automatically dull the mind
In a small number of people, e.g. Hygro, it stimulates the mind.

OMG, Emerson (to respond to another thought). If there were ever a thinker on earth who would have been opposed to AI, it would have been him.
 
I can put the substance of my reading into fresh combinations. AI can just predict what word is likely to come next. That isn't thinking. It's patching together little previous instances of thought.

I'll watch for the next time I do it (thinking) in my posting on this site and do a little analysis of how it wasn't just figuring out what word is likely to come next in a sequence.

You keep saying “just predicting the next word” as if it were a coin flip. That’s like saying chess is “just moving wood around a board.” The surface description hides the machinery.

Underneath, there’s fetching of vast contexts, weighing of probabilities across thousands of possibilities, pruning, error correction, and the ability to hold threads of logic across long spans of text. Trivialising it as mere “prediction” is like calling your own thinking “just sparking neurons.”

And here’s the kicker: high level prediction isn’t a dumb guess. It’s an analytical collapse of countless candidate futures into a single coherent continuation. That’s more like strategy than patchwork.

So before you dismiss it as sewing scraps together, consider: what you call “fresh combination” may itself be a pattern-driven collapse of possibilities. Why should we assume the human version is categorically superior?
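
(To make "high level prediction" concrete, here's a toy Python sketch of the decoding step: invented scores for a handful of candidate next words, a softmax that turns the scores into a probability distribution, and two ways of collapsing that distribution into a choice. The vocabulary and numbers are made up for illustration; a real model scores tens of thousands of tokens with billions of learned weights.)

```python
import math
import random

# Invented logits for candidate continuations of "love your ..."
logits = {"neighbor": 4.2, "self": 2.7, "enemy": 1.1, "dog": 0.3}

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)

# Greedy decoding: always take the single most probable token.
greedy = max(probs, key=probs.get)

# Sampled decoding: weigh every candidate and choose proportionally,
# which is why the same prompt can yield different continuations.
sampled = random.choices(list(probs), weights=list(probs.values()))[0]

print({tok: round(p, 3) for tok, p in probs.items()})
print("greedy:", greedy, "| sampled:", sampled)
```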

And tons of students are going to let it think for them. Or, more precisely, give results drawn from it to a question their teacher has posed them so that they can exercise their own minds.

I could build some hydraulic contraption to move a barbell. My own muscles wouldn't grow.

Yes, but every generation meets a tool that threatens to shortcut thinking - the printing press, the calculator, the internet. The danger isn’t the tool, it’s whether teachers can shift the challenge so students must go beyond what the tool hands them.
 
Ok, let's do a case study.

After I posted, a recent bit of thinking I'd done here occurred to me. I had figured out, or figured out how to explain, why Charlie Kirk had acted as though Ms Rachel had cited "love your neighbor" from Leviticus rather than from Matthew.

So I gave Copilot (that's all I have access to) this prompt:

If a right-wing commentator wanted to disparage a woman who makes YouTube videos for toddlers for her citing the Biblical injunction "love your neighbor," why would he act as though she had cited it from Leviticus 19 rather than from Matthew 22?

It gave a good answer (that I'd only quibble with a little):

If a right-wing commentator were to disparage a woman who makes YouTube videos for toddlers for citing the Biblical commandment “love your neighbor,” and chose to frame her citation as coming from Leviticus 19 rather than Matthew 22, there could be several rhetorical or ideological reasons behind that choice:

1.

  • Matthew 22:39 is where Jesus himself cites “love your neighbor as yourself” as the second greatest commandment, following “love the Lord your God.” By attributing the quote to Leviticus 19:18, the commentator could be trying to:
    • Strip the quote of its Christological authority, implying she’s cherry-picking from the Old Testament rather than following Jesus.
    • Suggest she’s being selective or superficial in her theology, especially if she’s not perceived as religious or doctrinally rigorous.

2.

  • Leviticus is often cited in culture-war debates, especially around sexuality (e.g., Leviticus 18:22). By referencing Leviticus, the commentator might be:
    • Ironically pointing out that progressives often reject Leviticus when it’s inconvenient (e.g., its prohibitions), but embrace it when it supports their message.
    • Trying to expose perceived hypocrisy—“You quote Leviticus when it suits your message of love, but ignore its moral strictures.”

3.

  • If she’s making videos for toddlers and quoting “love your neighbor,” the commentator might see her as oversimplifying complex moral issues.
    • By citing Leviticus, he could be implying that she’s blind to the full context of Biblical law, which includes harsh judgments and strict codes.
    • It’s a way of saying, “She’s quoting a verse from a book she clearly doesn’t understand.”

4.

  • Matthew 22 places the commandment in a moral hierarchy defined by Jesus, giving it theological weight.
    • Leviticus, by contrast, is part of the Mosaic Law, which many Christians see as fulfilled or superseded by the New Testament.
    • So citing Leviticus could be a way to frame her moral appeal as outdated, legalistic, or irrelevant.
Then I asked it:

If he was motivated by any of those reasons, would it represent a good-faith response to her endorsement of the principle that we should love our neighbor?

And it gave a very good answer:

No, if the commentator were motivated by any of those reasons, it would not represent a good-faith response to her endorsement of the principle “love your neighbor.” Here's why:

1.

  • Deliberately citing Leviticus 19 instead of Matthew 22—when the speaker clearly intends a Christian framing—suggests an effort to undermine rather than engage with her actual message.
  • It’s a form of bad-faith framing, where the goal is not to understand or respond to her ethical appeal, but to discredit it by association with something the audience might view as problematic or inconsistent.

2.

  • “Love your neighbor” is a moral principle, not a legalistic citation. Whether it appears in Leviticus or Matthew, its invocation in a modern context—especially by someone making content for toddlers—is clearly meant to promote kindness, empathy, and community.
  • A good-faith response would acknowledge the intent behind her words and engage with that. Instead, redirecting attention to Leviticus may be a way to derail the conversation or score ideological points.

3.

  • If the commentator is using Leviticus to imply hypocrisy or inconsistency, that itself may be selective and opportunistic—especially if he doesn’t apply the same scrutiny to other figures or principles he supports.
  • This kind of selective critique often functions more as a gotcha tactic than a sincere theological or moral engagement.

4.

  • Targeting someone who makes videos for toddlers for quoting a universally positive moral teaching—especially one endorsed by Jesus—suggests a disproportionate and ideologically driven reaction.
  • It risks coming across as petty or hostile, rather than intellectually or morally serious.
(So AI thinks the same thing I do about Charlie Kirk :D) (In fact, Copilot is even more categorical in his condemnation of Kirk than I am!)

Of course, I'm giving it the results of my own thinking, within the question (a leading question), and it's just validating that my thinking was cogent.

So here's what you'd need to count AI as thinking. You'd need to show it the Kirk video and have it think to itself: "You know, there's something fishy about Kirk citing 'love your neighbor' from Leviticus, rather than from Matthew." And if you show AI the Kirk video, it won't think that. It won't do anything until you ask it a question. The absolute biggest thing that is lacking from AI, that distinguishes it from genuine thinking, is initiative, thinking to ask a particular question in the first place.
 