The AI Thread

Make of that what you will.
 
You make the AI sound so good Hygro, that you are almost superfluous to the creation process.
The more skilled one is, the more effortless one can seem.
 
I had a work colleague tell me her teenage daughter is using AI for psychological advice.

Count me as an old on this one. For some stuff you need a human, not a parrot, no matter how pitch-perfect the parrot can be made to sound.
 
Hygro, I'm glad you're finding that AI tools are helping you with your artistry. I went to your SoundCloud site before you were using AI and liked your stuff, so I wish you only the best. If AI tools are going to help you bring it to the form you want, more power to you.
I appreciate that you checked me out, and I'm glad you liked my stuff. But I think I didn't make something clear, even though it was my thesis:
It helps me with my artistry because it helps me make money and free my time and organize my life.
I've never used generative AI to generate my music. For me and my intentions, that's a terrible use case.

But you made a general claim that the team working with AI would beat the team working without it, and I think that just isn't true. Hard to imagine how we might arrange an art-off, but I'd put my money on the team not using AI to deliver something more genuinely artistically powerful.

I take a different view than you do of Red Elk's video. I think AI has hit its plateau. I think this is what it can give us and the best it ever will give us. Uncanny valley in any depictions of human beings and music that sounds derivative, always derivative. (Again, plenty of pre-AI or non-AI art is derivative too, of course. But AI never, and can't, rise above that. It is literally what it does; it goes and derives.)
Except that was said two years ago, and *most* of the accompanying reasons are now solved. It does leave us the pure-form generative AI plateau, which was obviously, by itself, a dead end to be exploited.

But think about your art-off for a second. Are you thinking of something like: on one side a great painter painting, and on the other side a great painter painting who also has AI? That's trivial; the AI is useless.

But if two sides are going on tour, and one side is stuck crunching a spreadsheet before they can head out while the other side is organized and has done its homework, the second side is already touring, getting feedback and inspiration, and iterating. Of course the second side's art is going to win. And since when have the forerunners of capitalism not enjoyed the best art from their people? The artists might be rebels, but the structure they rebel against is rich for plunder. Rock and roll couldn't be created by Madison Avenue, but the mass distribution of electric guitars that gives rise to a virtuoso receives their push.

Maybe the specific art form makes a difference. It's poetry for me. And so there's no downstream stuff for me to bother having AI help with.

And to this:

Because it's you. You're going to chase that sliver. You would have done so in a world where AI had never been invented, if there were a sliver to chase in any other way.
Right, exactly. It's not there for you to write a better poem; it's not here for me to make music.
 
Ok, by all means use AI tools to free up time for your artistry (because, again, I want to see you thrive, and anything that helps that I'm all for).* But that's not all that RED used AI tools to do to make that video. They did the art part with AI. And that's what got you glowing.

About a video that, as a rock video, is not terribly impressive.

And then it makes your claim essentially equivalent to the claim that a band with a bus can get gigs in a larger radius from their home than a band without a bus. That I wouldn't contest.

*I want you to produce that #1 record (so I can say "I knew him when.")
 
Are we still in a state of denial, or did we reach the Luddite threshold?

In 10 years millions of household and office AI robots will flood the market.

That is when we'll stop hearing AI slop and start hearing revolution.
 
I might get a Roomba.
 
doesn't .. entertain you enough?

No, more like it gets facts wrong and can't string together a coherent argument.

Dude, please accept that I mean this for your benefit. The anti-AI crowd is operating on pure copium, strengthening confirmation bias by looking at the worst possible examples, blissfully unaware of the gap between the dip-your-toes toys (and the news that reports on them) and their reality.

"I tried to get AI to do some exact thing and it failed" okay. Imagine typing a sentence into yahoo and getting nothing good back in 1999. Like, first, use google, second, use the best 2 or 3 words and then know how to browse. It's the same today, except completely different, and you have to look to those differences.

Copium, newb-ness, skill issues, identity-group reaffirmation, and of course that conspiratorial possibility that these tools are being hyped and dismissed based on partisan demos to minimize one side's ability to use them, while selling them on how cool they are for recognizing bad writing, like that's something new.

I am 6 months behind the curve. And I'm an early adopter. I know this because 6 months ago a tool was released for a use case I've had for a year, and I just discovered it a few days ago, thinking I needed to wait. I really don't like consuming AI-announcement media, nor do I enjoy staying current on the vocab metagame.

However, my work demands that either I bang my head on a wall and procrastinate all day hoping to get into a fiery zone of super focus to get anything done, or I use AI on its terms, with its best practices, and I have the best, easiest job in the world and my productivity is sky high. It seemed wild in 2024 when feeding prompts into chat interfaces with book-length context windows could give you working results, and 2024 is now a joke compared to what we have. I bet you've noticed zero difference.

Nowhere do I have to like slop to know how to use it to make high quality outputs.

You can be a part of it or a hater, but the haters have no moral high ground here. Just curmudgeonly aesthetics. Their chorus of disdain is at worst impeding anyone in their tribe... One side is about to show up to the war riding horses against another side driving tanks. The battlefield? Trenched-up plains. But I can traverse mountains better! Sure, but the cities, farms, people, and the whole point of the fight are mostly in the plains.

Like, think deeper. Is the complaint that "in 2 seconds AI doesn't beat great artists and writers, therefore it's useless"? I mean, yeah, true, and thank God. But that's not what this is. This is what 100,000 USD per capita looks like. 10k was cars, and yes, riding horses is way more fun than cars; you're in nature, it's a nice pace, and you can stop and talk.

I'm asking you to consider waking up, and if you're serious, to use the tools to win economically, financially, politically, personally. 30 years ago it was the Internet. We're young enough to know that all the grownups who didn't like the Internet weren't actually cooler just because books are better than GeoCities. There wasn't much on there worth anything to anyone's daily life except email, forums, alt news, and directions until well into the 2000s.

But I read an article that– dawg. The AI journalists are not smarter than you. They are feeding you identity supplication at best. This is the big tech change, and it's up to you to decide if you're a geezer or youthful enough to change.

Waiting for it to be so good it can replace artists and authors and blow your mind, some tiny little fraction of what this is even trying to do in some future, is just waiting for it to beat you, to consume you, and not to empower you at all. But those using it to their benefit, whether work or life, are reducing stress and increasing time and energy for the things that matter.

I did extensive research into generative AI and machine learning more broadly for my job back in late 2023/early 2024. I don't think generative AI is entirely useless, but I do think that basically 95% of the claims being made about its potential are nonsense. You're damn right the AI journalists aren't smarter than me, but the computer scientists are and the cognitive researchers are too. The problem with AI isn't that it can't match great artists, it's that it can't add 2 and 2 and tells you to eat rocks for your health. I think that the widespread adoption of AI is a reflection of an era where the received wisdom is that everything is quantitative and that we can just pretty much ignore the interpretive/qualitative side of reality. It is, ironically, a subspecies of the problem identified by Randall Wray in that wonderful essay: the problem of misapplying the concept of "efficiency" to social processes, or in this case interpretive/qualitative processes like writing, in ways that make no sense. The great example:

Second, a trip to the doctor. I heard on NPR a couple of weeks ago about a study of the typical office visit (which matched quite well my own experience); unfortunately I do not recall the exact statistic but what follows is close. The doctor asks the patient some form of the following: “So, what is wrong?” (or, in my case, my doc always asks “So, what are your concerns?”). The doctor listens for an average of 9 seconds, then intervenes with a prognosis. The amount of time the doctor is willing to listen before intervening has gone down over time, presumably as health insurers have pressured doctors to increase throughput and as they have greatly increased the amount of paperwork required of doctors. In other words, it is in the name of efficiency. The efficiency fairies are at work in the doctor’s office to eliminate all that wasteful time spent in creating a doctor-patient relationship.

Believing that AI is better because it lets you output more writing in the same amount of time is, in my view, a similar fallacy to what Wray describes here.
 
I can't help you if you want to be too cool for this. Those papers are both perfect examples of how incredible the tech is, given that that's where the critiques lie.

I cannot begin to stress to you how unimportant it is that writing a great book requires a skill set outside of asking ChatGPT to generate text for you. It's so deeply not important to focus on that point. But that you are drawn to it should be a loud, bright clue that this entire world of tech is so powerful and significant that the crumbs falling from its plate land in a way that makes it seem like it's trying to be that, too.

If you can't use ChatGPT to further your correct knowledge, it's a skill issue. And easily surmounted, especially for the likes of you. If it makes you lazy and atrophied, well, that's like every single one of civilization's labor savings: sadly, the lost effort must be replaced by deliberate movement. But if you can, then you now have more strength and the surplus. Do you get what I mean? My wife no longer washes clothes by hand. But handwashing clothes kept her healthier! Less neck pain too, with all that movement. Give me a paper showing me why the clothes washer is bad and I will tell you: use that time to exercise. It's the risk of all things, of all engineering solutions, of all efficiencies.

And if "reasoning models" aren't good enough (which, again, is the core of the critique: that it's not good enough), then the reasoning models are still so promising as to encourage hopeful attempts at a magic bullet.

I promise you those Apple researchers are themselves bullish on gen AI as a tech. Late 2023/early 2024 was a decent time: 4o, o1 Preview (if you paid for it!), Claude 3.5... it was getting good, but still stuck in chat and response. It still couldn't really search, had smaller context windows, and had basically no complex tools. MCP wasn't even out yet.

The biggest issue here is, like, the uncanny valley of something good followed by something that sucks; it's a real turnoff. But handling the smell, that's a skill issue too. You figure out how to prompt, how to give context, use the right tool for the right job, and also understand how to choose your methods accordingly.

And understanding this stuff is not immediately obvious. I had a see-the-matrix moment a month or so ago, and I've been paying since the end of 2022. I've been using this stuff daily, extensively, and I understood a series of critiques, all of yours and more, that I felt on a regular basis, and I couldn't phase-invert them until a couple of interviews showed me what this is.

I can't explain it to you! I can explain it to AI-friendly devs easily. I can only tell you it would behoove you to get really good at using this stuff, and you won't care that it sucks at being a rare genius and doesn't replace artistic and intellectual struggle, because that's going to be in the same category as the value it brings you and others.
 
Hygro, what kind of product can AI produce a good version of, where its producing that good thing would be impressive to someone skeptical of its powers?

I'd like to devise a challenge/opportunity for you to demonstrate. It would work like this. Lex gives an assignment, something that he thinks AI won't be able to do well, but is worth somebody's doing well. That is, a good response to the assignment would be a good thing in the world. You go use all your skills with AI to get a stellar finished product. I work without it and do my best. You and I supply our final work to Kaitzilla. Kaitzilla gives Lex both responses. Lex says which he thinks represents the better response to his assignment. It would have to be a word thing, since all I'm good at is words.

There's a time limit (since I assume this is one of the things that makes AI superior), but the time frame is such that a human working alone could produce a product within that time-frame.

You and I find a time when we can both spare that pre-designated amount of time. The ticker starts. If you finish early, you submit early, and that counts in your favor; that shows the labor-saving that AI has provided. (But if Lex says it's an inferior product, then the time saved doesn't count in your favor.)

Could such a challenge be devised? One that shows off what AI brings to the table.

As a starting model, I'm thinking of Aiken Drumm's letter to a contractor who did substandard work in post 660. Probably something a little more demanding than that.

Edit: Edited to answer @Moriarte's concern below
 
Lex gives a prompt, something that he thinks AI won't be able to do well.

There's a fallacy in this: AI is only as smart as the human who supervises it. Better off thinking about AI not as an entity separate from the human, but rather as a brain extension.

If Lex can't give AI a prompt that resonates well with AI, that simply means Lex is not a competent prompt engineer. It doesn't mean that AI is incapable. AI has limitations. A good prompt engineer knows them, and the prompt engineer, amplified by AI, should supply the prompt for your "Turing test." AI in this instance is the engineer's CPU and memory banks.

It is the human who makes products; AI is memory and logic-chain amplification.
 
I wasn't using "prompt" in the sense that people use it in connection to AI. So change to "assignment." Hygro will go turn that into an AI prompt (or actually series of prompts). I'll just start doing the assignment.

I've edited the original post to reflect this.
 
I showed my dad the other day how I took a screenshot of my entire monitor and asked AI to quickly clone "it". I wasn't even writing complete English. It surmised my intent and gave me a working landing page for that brand, for that industry, but much more basic. I then took a screenshot of the pattern of a photo of my shirt and said "redo it but look like this, both generally and its intricacies", but in barely English. Worse than I type here.

It replied something like "I will take this between pattern of woven fabric [something something] and [redo the website]". It did; it looked great; it was nonliteral but it vibed. But when I showed my dad, in a very Lexian way he was like "I hate this corporate marketing, look at this nonsense, beautiful flowing fabric", and I'm like...

... dawg. DAWG. The bullfeathers salesy description was a perfect description of my shirt pattern and why I was inclined to buy it, because it looks great and has those features.

And more important, it worked as a prompt. Then I fed it a picture of a Liquid Death box art and it did it again, fewer words this time changing the verbiage to match the new image's vibe. Exact? No, but internally consistent and looked good as inspired by the image? Hell yes.

But my dad was stuck on the part of the response that scared/offended him: that someone would describe a shirt as flowing and beautiful, even if accurately, in a way that could be similar to advertising.

All I did was give it 3 images and a couple of poorly constructed words, and I got 10 days' worth of prototyping. He didn't get it until I really leaned in, like "I don't think you get what I'm showing you here."

"But isn't it just using the same general underlying good design for the shirt as the website restyle?" he asked. What? Design principles? Incredible that one could dismiss taking a freeballing image as a prompt to redo an entire landing page as "well, of course an AI has good design principles" :lol: The thing that made the AI incredible went unrecognized, and a thing that would make AI incredible at a whole new level that doesn't exist was assumed to exist. Like it was some kind of pro and not a word generator. The word generator that coded the reskin can't even see the image.

People don't really get it until you show them the process or they figure it out. I can't explain it, only that the attention-grabbing "it can explain a joke" or "write a poem" headlines from 2023, which were incredible for demonstrating its novelty, have nothing to do with its value.

And it's like a car: ok, now there's a car, you know it's faster than your horse, ok, cool. But you're like, "it'll never replace the horse in jousting!" Ok, but you can still have fun and race around a track, or even drive around and play loud music with your friends; you just gotta accept it on its terms, not your old terms that the original pitch bridged to meet you at.
 
Hygro, what kind of product can AI produce a good version of? Where its producing that good thing would be impressive to someone skeptical of its powers.
I think Lex is very good at focusing on things AI genuinely won't do well. And that recognition is not without merit. My plea to all those who agree with me politically is that they figure out what AI does well and use it.

I would say the space where Lex thinks AI is very bad, but that I think is very good when used right, is in the service of objective knowledge work. But what kind of project would that be?
 
Ok. I don't need any of the things you have it do done for me. I don't have a landing page. I don't even know what a landing page is.

Your washing machine saves the nudist no labor.

I would say the space that Lex thinks AI is very bad that I think is very good when used right is in the service of objective knowledge work.

I concede that it does that well. This summer, I felt faint. So I Googled what should one do if one feels faint? Google's AI gave me the answer.

I thought to myself, "Sure, there are a million websites out there where, pre-AI, people posted instructions on what to do if you feel faint. So, drawing on all of those, of course it can use its 'what's the predictable next word' algorithm to generate a little write-up." But old Google would have had Mayo's website on the first page; I'd have clicked it and gotten the same info.

At work I missed a meeting. The people there made a bad decision. If I'd been at the meeting, I'd have talked them out of the decision. So I sent around an e-mail. In writing it, I had to be persuasive to six different people, all with different personalities and viewpoints. I can't ask AI, "and make sure you appeal to X who's an a--hole and Y who's still holding a grudge against all of us, and Z who's soft-hearted." Chat's no help for a task like that.

I hope Lex could concoct something a little harder than the fainting thing, a little more along the lines of the workplace e-mail thing.

Dunno: I've just never seen an AI generated thing that has impressed me.
 
As far as I can tell, applying machine learning specifically to coding in a given language would be good and useful, as applying machine learning to various other limited domains has been for decades.

The idea that 90% of human jobs or all jobs except Venture Capitalist are going to be replaced by LLMs, though, is just bananas.
 
You on board for your part in this challenge, Lex?

Once we can figure out a kind of writing task at which AI might shine. But which a human could conceivably do. Gori vs the Machine.
 
great book

You're missing the point with this talk. Learning how to write, to articulate an argument or express an emotion, is beneficial to anyone. Your unspoken premise is that it's better for everyone but the "rare geniuses" to use AI to write anything, but we'll never find out who the "geniuses" even are if everyone is writing everything with AI all the time. I find the idea of kids who can't write personal reflections without ChatGPT extremely depressing; you can call it "too cool" or whatever, but it is what it is. And really, my prediction of the future is almost the opposite of yours: the people who can't articulate a single thought or feeling without an LLM to help them are gonna be screwed when the bubble bursts and gen AI actually has to make money (and thus, cost money).

You on board for your part in this challenge, Lex?

No, I don't use generative AI. If I want to write something, I'll just write it.
 