That's precisely what AI means to humanity, and that's what is so frightening about it.
You make the AI sound so good, Hygro, that you are almost superfluous to the creation process.
Maybe for now.
Hygro, I'm glad you're finding that AI tools are helping you with your artistry. I went to your SoundCloud page before you were using AI and liked your stuff, so I wish you only the best. If AI tools are going to help you bring it to the form you want, more power to you.
But you made a general claim that the team working with AI would beat the team working without it, and I think that just isn't true. Hard to imagine how we might arrange an art-off, but I'd put my money on the team not using AI to deliver something more genuinely artistically powerful.
In the challenge, you don't use AI. You give an assignment to both myself and Hygro and judge the results.
No, I don't use generative AI. If I want to write something, I'll just write it.
In the challenge, you don't use AI. You give an assignment to both myself and Hygro and judge the results.
I'd like to devise a challenge/opportunity for you to demonstrate, Hygro. It would work like this. Lex gives the two of us an assignment, something that he thinks AI won't be able to do well, but is worth somebody's doing well. That is, a good response to the assignment would be a good thing in the world. You go use all your skills with AI to get a stellar finished product. I work without it and do my best. You and I supply our final work to Kaitzilla. Kaitzilla gives Lex both responses. Lex says which he thinks represents the better response to his assignment. It would have to be a word thing, since all I'm good at is words.
What could possibly go wrong?
Put AI in charge of governments. Give it "humane laws" to limit and parameterize its decision-making. Put all these useless bureaucrats out in the street! End corruption.
I don’t think fiction is a good use case. On the contrary, a bad use case.
In the challenge, you don't use AI. You give an assignment to both myself and Hygro and judge the results.
Do you really think people dependent on something will let it disappear?
You're missing the point with this talk. Learning how to write, to articulate an argument or express an emotion, is beneficial to anyone. Your unspoken premise is that it's better for everyone but the "rare geniuses" to use AI to write anything, but we'll never find out who the "geniuses" even are if everyone is writing everything with AI all the time. I find the idea of kids who can't write personal reflections without ChatGPT extremely depressing; you can call it "too cool" or whatever, but it is what it is. And really my prediction of the future is almost the opposite of yours: the people who can't articulate a single thought or feeling without an LLM to help them are gonna be screwed when the bubble bursts and gen AI actually has to make money (and thus, cost money).
No, I don't use generative AI. If I want to write something, I'll just write it.
Dr Jekyll was two weeks with a pen; a shorter book can be done in a week on a word processor.
Well, no human writer could write a creditable novel in a week. So an AI user could get something better, through a series of prompts to write a series of chapters.
And I can't spare a week of my life for this. I was thinking some kind of thing that could be written in an hour or so. That's why I proposed a "thoughtful discussion forum post."
Doesn't matter. Lex isn't on board anyway.
This is a great site, but people aren't gamesome enough.
It’s a good chart except for image classification, where AI can outperform but also couldn’t tell an avocado from an apple in 2022.
That's precisely what AI means to humanity, and that's what is so frightening about it.
We're on the way to making ourselves obsolete and, depending on if and how the singularity happens, to putting literally unthinkable power in the hands of the few who own it.
Maybe for now.
How long before AI is actually able to outcompete humans even in art? (Most probable answer: not long at all.)
Just a little reminder:
AI takes only a handful of years to go from "much worse than human" to "better than human".
I just can't listen to people saying that AI can't replace humans at this or that task without hearing instead the stories we wanted to believe about how a modern factory couldn't produce an item of higher quality than a blacksmith.
Writing matters, though, because writing is thinking.
You’re stuck on “writing” and I’m trying to disabuse you of it, but you can’t help but circle back.
it recycles what has already been thought.
Companies are already saying it's more hassle than it's worth because human users have to keep fact-checking it.
The random kid who shirks his writing assignments by using it, and therefore doesn't develop his mind, will not be made better off by it.
I can put the substance of my reading into fresh combinations. AI can just predict what word is likely to come next. That isn't thinking. It's patching together little previous instances of thought.
How is that technologically different from what a computer program does?
The real divide isn’t “smart vs. dumb” but whether someone engages critically with the tool or lets it think for them.
In a small number of people, e.g. Hygro, it stimulates the mind.
Using AI doesn’t automatically dull the mind.
I can put the substance of my reading into fresh combinations. AI can just predict what word is likely to come next. That isn't thinking. It's patching together little previous instances of thought.
I'll watch for the next time I do it (thinking) in my posting on this site and do a little analysis of how it wasn't just figuring out what word is likely to come next in a sequence.
And tons of students are going to let it think for them. Or, more precisely, they'll give results drawn from it in answer to a question their teacher posed precisely so that they would exercise their own minds.
I could build some hydraulic contraption to move a barbell. My own muscles wouldn't grow.
If a right-wing commentator wanted to disparage a woman who makes YouTube videos for toddlers for her citing the Biblical injunction "love your neighbor," why would he act as though she had cited it from Leviticus 19 rather than from Matthew 22?
Then I asked it:
If a right-wing commentator were to disparage a woman who makes YouTube videos for toddlers for citing the Biblical commandment “love your neighbor,” and chose to frame her citation as coming from Leviticus 19 rather than Matthew 22, there could be several rhetorical or ideological reasons behind that choice:
1.
- Matthew 22:39 is where Jesus himself cites “love your neighbor as yourself” as the second greatest commandment, following “love the Lord your God.” By attributing the quote to Leviticus 19:18, the commentator could be trying to:
- Strip the quote of its Christological authority, implying she’s cherry-picking from the Old Testament rather than following Jesus.
- Suggest she’s being selective or superficial in her theology, especially if she’s not perceived as religious or doctrinally rigorous.
2.
- Leviticus is often cited in culture-war debates, especially around sexuality (e.g., Leviticus 18:22). By referencing Leviticus, the commentator might be:
- Ironically pointing out that progressives often reject Leviticus when it’s inconvenient (e.g., its prohibitions), but embrace it when it supports their message.
- Trying to expose perceived hypocrisy—“You quote Leviticus when it suits your message of love, but ignore its moral strictures.”
3.
- If she’s making videos for toddlers and quoting “love your neighbor,” the commentator might see her as oversimplifying complex moral issues.
- By citing Leviticus, he could be implying that she’s blind to the full context of Biblical law, which includes harsh judgments and strict codes.
- It’s a way of saying, “She’s quoting a verse from a book she clearly doesn’t understand.”
4.
- Matthew 22 places the commandment in a moral hierarchy defined by Jesus, giving it theological weight.
- Leviticus, by contrast, is part of the Mosaic Law, which many Christians see as fulfilled or superseded by the New Testament.
- So citing Leviticus could be a way to frame her moral appeal as outdated, legalistic, or irrelevant.
If he was motivated by any of these reasons, would it represent a good-faith response to her endorsement of the principle that we should love our neighbor?
(So AI thinks the same thing I do about Charlie Kirk:
No, if the commentator were motivated by any of those reasons, it would not represent a good-faith response to her endorsement of the principle “love your neighbor.” Here's why:
1.
- Deliberately citing Leviticus 19 instead of Matthew 22—when the speaker clearly intends a Christian framing—suggests an effort to undermine rather than engage with her actual message.
- It’s a form of bad-faith framing, where the goal is not to understand or respond to her ethical appeal, but to discredit it by association with something the audience might view as problematic or inconsistent.
2.
- “Love your neighbor” is a moral principle, not a legalistic citation. Whether it appears in Leviticus or Matthew, its invocation in a modern context—especially by someone making content for toddlers—is clearly meant to promote kindness, empathy, and community.
- A good-faith response would acknowledge the intent behind her words and engage with that. Instead, redirecting attention to Leviticus may be a way to derail the conversation or score ideological points.
3.
- If the commentator is using Leviticus to imply hypocrisy or inconsistency, that itself may be selective and opportunistic—especially if he doesn’t apply the same scrutiny to other figures or principles he supports.
- This kind of selective critique often functions more as a gotcha tactic than a sincere theological or moral engagement.
4.
- Targeting someone who makes videos for toddlers for quoting a universally positive moral teaching—especially one endorsed by Jesus—suggests a disproportionate and ideologically driven reaction.
- It risks coming across as petty or hostile, rather than intellectually or morally serious.
)
(In fact, Copilot is even more categorical in his condemnation of Kirk than I am!)