The Politics of AI Adoption and Use

Gori the Grey

We have a thread for AI generally, and for Thunderfall's AI storybook maker.

But recently, in both of those threads, there has arisen what I think is a distinct issue, one deserving of its own thread.

That concerns the politics of employing AI. @Hygro has advanced the (for me) challenging proposition that to be a committed leftist in the age of generative AI requires learning to use the tools at a high level and then using them to advance progressive causes--not least as a counter to those forces on the right that will almost undoubtedly be doing so for their partisan purposes.

Entangled with this question are aesthetic, ethical and legal questions, and it's fine to address those as well (and, I think, inevitable that we will do so), but I'm most centrally interested in hashing through @Hygro's position (and some posters' resistance to it): that being committed to leftist causes demands adopting this new technology.
 
Some background.

First, here's (what I find to be) @Hygro's most focused articulation of his position:

I think it's very important we get some kind of handle on it politically. In the 2x2 matrix of outcomes, 2 of them are war and one of them is a meaningful further loss of liberty to our new overlords, who are already winning now. To move the outcome to the good one (AI does cool things for us in a democratic and liberating way), or, bad case, have to win wars, we're going to need to be good at this stuff.

And this is also a sharp formulation:

But this tech is here to stay, and to dominate. So if you want it to have a chance of maximizing its good side and minimizing its bad side you have to learn it.

He believes this so strongly as to further believe (his most provocative claim, I think):

Else you have surrendered it to the “other side” and then it’s clear you never meaningfully wanted the better world, the imperfect one to fight for, just the position to have said I told you so in a sinking ship.

His governing metaphor is this:

"Oligarchs own the gunpowder factories!" as an argument for why "the left" should stick to swords and not adopt this new gun thing. Killing should be bespoke!

Which he expands this way:

Ultimately this: right now Uncle Oligarch Samuel Colt is passing out free guns, ammo, and time at the range. We are all already good at the sword. And there are infinite reasons one could articulate why the sword is superior. Get a group of you and you will reinforce each other. You are getting the guns, ammo, and training for free or cheap. Subsidized on his dime, he's losing the money. You are the winner in this equation, here and now. One team is going "yes all of this". The other team is split between those who have genuinely tested these rifles out, and yeah, the next fight is with these, not swords. That split side, the gun users, are begging you, learn the firearms, go forward with an intuition. There's no sticking it to the other side by snubbing Colt's gun giveaway. The fight is won by those who wield the best tech en masse.
I myself have had a number of objections. The simplest one is just this:


I cannot think of a single thing I even want to ask it to do.
That is to say, I'm not entirely closed to Hygro's argument, but, in such tinkering as I have done with generative AI, I haven't been able to get it to do anything I don't think I could do better myself.

I have other objections:

You flatly acknowledge that it produces mediocre results, then cast it as "dominating." Dominating what? The production of mediocre products? Why should anyone want to jump on to that ship? And how is jumping on that ship supposed to benefit the more humane values that the left espouses? . . . It lends itself best to misinformation. There's an AI generated image going around just of late of Trump using a walker. Am I supposed to celebrate because the left is using these tools to those ends? If not, then to what ends? Point me to some sympathy-expanding thing these tools in the right hands have produced. What the hell am I supposed to be doing with these things in order to usher in a better world?

There are other objections:
I'm not exactly a billionaire and I have a day job, but I've got some stuff published, my own research and work, which these things will regurgitate uncredited and uncited, and often incorrectly or without context, if you ask the right questions. It sucks, man! If people are going to use my stuff I want them to know it was me, in case they're interested in more of it! No doubt this feeling is magnified for the many artists, authors, and musicians who try to make a living off their work.

Plagiarism of big famous things is more obvious but for every Tweety Bird or Thomas the Tank Engine presented as original work there's bound to be many many other instances going unnoticed of directly ripping off the work of less prominent creators.
I share this concern and would put it in my own way by remarking on the irony of being called to use creatively a tool that constitutionally shows no respect to creators. This is why I said ethical concerns would enter into the matter.

There are other posters skeptical of Hygro's claim:

Does it come down to being willing to use AI to create memes to fight fascism's AI-generated memes?
I second aelf's question. For what are we left-leaning creatives mastering these tools? Meme vs meme warfare?

Well, that should be enough to give a sense of the discussion so far. I haven't intentionally left out any substantive post, but if I have unintentionally done so, don't hesitate to say as much.

Discuss.
 
You should certainly seek to learn how a tool and a weapon can be used, but I don't see the value in solely understanding the best way to turn the slop handle. Or to create working practices that are completely dependent on access to the slop output that someone else owns and controls completely.

That's just being a chump and falling for the trick.

So I guess my answer is that it depends on whether a genuinely useful genAI will be (or even can be) owned by actual humans or non-evil nations.
 
You should certainly seek to learn how a tool and a weapon can be used, but I don't see the value in solely understanding the best way to turn the slop handle. Or to create working practices that are completely dependent on access to the slop output that someone else owns and controls completely.

That's just being a chump and falling for the trick.

So I guess my answer is that it depends on whether a genuinely useful genAI will be (or even can be) owned by actual humans or non-evil nations.
The default outcome is they win. Whether you or I play or not, they default to winning. So it's not a fair game. That would be capitalism, and while no one's done a good job of opposing it completely, progressivism has, on balance, had the track record worth giving our fight to.

The tech is subsidized right now, and the open source models are always only a step or three behind. So you are effectively being paid to get good, and could later switch at any point to your own or a collective alternative. Yes, Windows beat Linux, so it's not looking great.

But non participation isn’t a way out. Not politically.

Central to my thesis is that using this stuff takes skill. So, like, get good, and don't make slop. Yes, a slop avalanche wins elections (our current admin proves that without AI, just by force of character), but you don't need to aim it that way. Just have a good automated system to keep up with the volume and carve yourself the time and the space to actualize.

In economics, all savings must be spent or they are lost. And if you ever spend anywhere distinctly tech, you will see two things.

One, they struggle for colonization.
Two, life is way healthier lower-tech.

It doesn't come naturally to fill your time savings with better exercise or superior movements; that takes cognitive load, itself a scarce resource. However, this very two-edged sword (to switch weapon metaphors) is one of the first tools that can meaningfully free up cognitive load. It will then be an arms race, but if you participate maybe it won't be domination, and with enough of us, a victory.
 
That's the thing: because there's hype around it right now, defenders lean on argumentum ad populum rather than explaining how regurgitating imitations of existing media is actually useful.
Because what you're saying is a strawman that ignores everything about it being transformative infrastructure. Don't get distracted by the very loud sideshow that is media creation.

Like, you won't "get it" if you keep insisting that the most obtuse and useless side of it is all this is, as if that's what we're talking about.

Try using Cursor.
 
Because what you're saying is a strawman that ignores everything about it being transformative infrastructure. Don't get distracted by the very loud sideshow that is media creation.

Like, you won't "get it" if you keep insisting that the most obtuse and useless side of it is all this is, as if that's what we're talking about.
This conversation literally started with you coming into TF's book thread.
Try using Cursor.
I'll take it from the folks who already have used Enterprise genAI:
But if a corporate AI revolution is underway, it’s not showing up in the data. Up to this point, enterprise AI has been incredibly unprofitable, with a whopping 95 percent of US companies that took up AI reporting that the software has failed to generate any sort of new revenue.
 
That's the thing: because there's hype around it right now, defenders lean on argumentum ad populum rather than explaining how regurgitating imitations of existing media is actually useful.
AI tools are the real deal for helping software developers write code.

Pictures and other things are sort of hype for the moment?


The military likes AI a lot, and they are the practical sort.
Too much data, and "the fastest one lives," are some very real pressures.


Like scanning 1,000 local news stories in their native language and having an English-language summary of them all in 60 seconds.

Or maybe piecing together an enemy naval situation at blazing speed.

Sometimes the military must lie convincingly to win a battle, or keep fighting morale up.
AI can help in some grey areas.


Some people think Nick Fuentes is a big deal even though many of his followers are bots.
 
That's the thing: because there's hype around it right now, defenders lean on argumentum ad populum rather than explaining how regurgitating imitations of existing media is actually useful.
So, when Hygro posted this post


over in the AI thread proper, I wanted to work out a systematic reply, but it was too late at night and my brain wouldn't do the work. That's part of why I guess I want this thread: so that I can work out my response to him piecemeal (and then maybe assemble it into a systematic position).

So I'll start with this post. My stance on AI generally is that it will never do anything truly innovative, simply because of how it operates. It--constitutionally--draws on things that have already been written, "regurgitating imitations of existing media," as you put it. And winning in politics does constantly require innovation. You have to meet some arising situation with messaging that will steer the populace toward your side of an issue.

Trump is bombing small boats that he claims are carrying drugs, then bombing the survivors of that bombing. Democrats need some message that will guide people in the center to think "that's not how I want my nation to act." How much is AI going to be able to contribute to that messaging? Maybe a lot, I don't know. After I post this, I'll go ask Copilot (the only one I have access to). And Hygro can use his mastery on the better ones. But if it can't, that's because this is a new thing, so nothing in its data-set addresses it. (And other reasons; it doesn't really know human beings and how their sympathies are directed.)

How will it compare to what Dems did settle on (humans thinking, as far as I know): that video encouraging soldiers to resist illegal orders (with its implication that these are illegal orders)? I don't know.

Anyway, I'd love to see a case where masterful prompting generated something cleverer than any human could/had so far come up with.
 
Scott Jenson (among other jobs, a former Apple designer from way back in the day when they lapped the tech world in design) recently did a fantastic presentation on UX design at Ubuntu Summit - I would recommend it to anyone with any interest in UX, computer and phone use, humane and open source technology, etc., and no prior knowledge is necessary. But he did do a brief interlude on AI where he talked about how he thinks the current AI situation is a bubble, but that it can be replaced by something better: small-model AI designed more for specific questions or jobs, run entirely locally and ethically trained. I think we should be pushing for stuff like that. We need to get AI - like all things - out of the hands of the uberwealthy. These tools need to be cooperative, ethical, and non-hierarchical. They should be owned collectively. They should all be open source and interoperable.

On a fundamental level one of my biggest problems with modern LLMs is they just genuinely kind of suck. Strip all the politics and ethics and they still can’t do my job, they make bad art, and they frequently hallucinate.

Here is the section where Scott talks about AI:


Anyways, I try to always ask myself if I have become the old person I resented when I was young, who blamed video games and the internet for everything bad in the 90s and 2000s. And I just remember: who is running AI right now? Wealthy people. Who should we all resent? Wealthy people. It would be the same for any new tech. Wealthy people often make terrible video games too.
 
I have to mention (just to get it out of the way) that I have one personal block on adopting these things and that has to do with 1) the time investment to master one and 2) the speed with which they replace one another. Already many years ago, I devoted considerable man-hours to mastering one particular computer program (for typing Greek characters) only to have it go defunct, and me to effectively have lost all of that time. That made me very wary of investing time into mastering any particular computer program. AI now seems like that on steroids. @Thorgalaeg sometimes lists a whole string of programs that he applied, one after another, to get a particular result. That's fine if you've made these things your hobby, as he seems to have done. But I'm not enough interested in anything they can produce to want to invest that kind of time into mastering them.

This isn't a substantive response to Hygro's thesis, but it is a personal experience/disposition that bears on how receptive I am to it.
 
No, you're fine. Don't know what you mean by "any actual creative work" because that could slice at any angle. There's a saying in the music world, pre-AI but the more things change:
...
"I thought using loops was cheating, so I programmed my own using samples. I then thought using samples was cheating, so I recorded real drums. I then thought that programming it was cheating, so I learned to play drums for real. I then thought using bought drums was cheating, so I learned to make my own. I then thought using premade skins was cheating, so I killed a goat and skinned it. I then thought that that was cheating too, so I grew my own goat from a baby goat. I also think that is cheating, but I’m not sure where to go from here. I haven’t made any music lately, what with the goat farming and all."
I have a pretty definite angle on what constitutes actual creative work: creative work that is meant to be enjoyed, and not even partly created to serve a utilitarian function. So stories, as opposed to the design of an app. There's much about the latter that is not the product of creative work.

And you did say that you believe AI generates mediocre stories. I agree.

Yes, asking the antifa side to lose for some aesthetic purity is enabling; let's call it "profa". The smugness is yours. Some moral purity in not dirtying your mind by learning how to get good at this stuff: "ew, I would never touch that, and anyone who does is an untouchable." Well done.

@choxorn and @danjuno, you two are far too young to surrender to technological curmudgeonry.

There's hatred for every new tech, its ugliness, its change of life, its imperfections, and it still wins. Technology, mass produced and distributed, always wins. The best hope we have is to make it ours, and gloriously: the processes, the culture, ideally the ownership. The fight goes in order; you can't own it if you don't use it, and we can't fight to own it if half the coalition wants to destroy it or ignore it.

Don't be fooled by the binary between liking to consume AI art or not; that's not where this tech lies. Yes, it can do final media, and it will get better, but.... This tech lies in the automation of tedious, describable, mechanistic processes. And because it works in the knowledge space, that's true outside engineering too: in the sciences, the social sciences, collecting/parsing/presenting information, research, and definitely in organization and time management — it is a godsend to those of us who are medicated but don't want to just chew prescription amphetamine to live a decent life. If you want to undo capitalism, please. But I'm on the chopping block too. Anyone who has seriously used this stuff isn't going back. Corporate America, from the worker up, has spoken.

Ultimately this: right now Uncle Oligarch Samuel Colt is passing out free guns, ammo, and time at the range. We are all already good at the sword. And there are infinite reasons one could articulate why the sword is superior. Get a group of you and you will reinforce each other. You are getting the guns, ammo, and training for free or cheap. Subsidized on his dime, he's losing the money. You are the winner in this equation, here and now. One team is going "yes all of this". The other team is split between those who have genuinely tested these rifles out, and yeah, the next fight is with these, not swords. That split side, the gun users, are begging you, learn the firearms, go forward with an intuition. There's no sticking it to the other side by snubbing Colt's gun giveaway. The fight is won by those who wield the best tech en masse. That's it. No way around it, only through. This is our world now.
Sorry, but I need something more concrete from this narrative. What does it mean to use AI as guns and ammo? Like I asked, is it a matter of using AI to generate counter-memes?
 
Central to my thesis is that using this stuff takes skill.

Which is a remarkably dodgy-looking foundation to be building anything else on. Companies making AI systems are competing for more users, and so are quite deliberately pushing them to be usable by every idiot out there. The last thing they want is for AIs to take any skill to use.

Whenever I hear the idea of AI prompt writing as some kind of skill or profession, my memory goes back to the dim and distant days of the mid nineties, and my school's first shiny new IT teacher assuring us that using internet search engines was going to be some kind of high-demand profession in the future. There is major pressure to ensure that using any AI is never more skillful or demanding than a Google search.
 
Ok, I'll use this for my next non-systematic observation.

I think prompt-writing is a skill, but that it's not a skill in its own right. Rather, people tend to be able to write better prompts if both 1) they have themselves mastered the field in which they are making a prompt and 2) they develop a skill for feeding AI programs the way they want to be fed. I already said this in my response to the quantum mathematics guy who got it to suggest to him a worthwhile formula. (I'll go dig it up after posting). That's because he knows that field well enough to phrase prompts in a way that can generate valuable results (and rule out crappy ones). Second half of this post:


The one single time I ever wanted to actually use AI for something was for posters for a Shakespeare-themed party. I wanted that image that is my avatar (less the fool's cap) on a party balloon (making use of S's bald head). I wasted 2 or 3 hours trying to refine my prompt and never got anything usable. In the months since, I have thought to myself, "I wonder if I should have said 'proportion the image of S's head to the shape of a balloon' or 'feel free to alter the dimensions of the Shakespeare part of the image.'" You can maybe see what I have in my mind's eye. His forehead would be wider, his chin narrower. That little bit of a balloon below where it is tied would be his ruff collar. Anyway, I wondered whether, if I were actually already skilled in graphic art, I could have used terminology that would have got it to produce good results. But I'm not, so it didn't.

The principle I've derived from this is that in order to get good results out of an AI program, one has to be fully capable of completing the kind of task in question entirely on one's own (and have spent some time experimenting with the AI in question). I worry about this from the educational perspective, because young kids are taking the view that these things can do their thinking for them. This generation can (sometimes) get (somewhat) good results out of AI, but that's because they already know how to Do The Thing; will the next generation ever bother to develop those skills?

The (2) above does take me back to my recent point, though, about being hesitant to invest the time these things require. When, over in the AI thread, I was talking about one element of human thinking (that AI doesn't duplicate) being, essentially, stream of consciousness, @Hygro doodled with what kind of results he could get. And it was cool; he set various values for "heat" and such. But in doing so, he realized (don't quote me, but just take the principle) that Chat 4 was good at producing that kind of result, but those capabilities had been taken out of Chat 5. That would annoy me to no end: to master one of these things and then have the new version remove a capability.
 
Which is a remarkably dodgy-looking foundation to be building anything else on. Companies making AI systems are competing for more users, and so are quite deliberately pushing them to be usable by every idiot out there. The last thing they want is for AIs to take any skill to use.

Whenever I hear the idea of AI prompt writing as some kind of skill or profession, my memory goes back to the dim and distant days of the mid nineties, and my school's first shiny new IT teacher assuring us that using internet search engines was going to be some kind of high-demand profession in the future. There is major pressure to ensure that using any AI is never more skillful or demanding than a Google search.
Fortunately, the tech sounds like it will get better once it slips out of the giga-corps' hands. And hopefully it will be far less political than it is being made out to be in this topic.
 
I'm bringing this over to the thread that I custom built for us to thrash out these matters:

Ultimately this: right now Uncle Oligarch Samuel Colt is passing out free guns, ammo, and time at the range. We are all already good at the sword. And there are infinite reasons one could articulate why the sword is superior. Get a group of you and you will reinforce each other. You are getting the guns, ammo, and training for free or cheap. Subsidized on his dime, he's losing the money. You are the winner in this equation, here and now. One team is going "yes all of this". The other team is split between those who have genuinely tested these rifles out, and yeah, the next fight is with these, not swords. That split side, the gun users, are begging you, learn the firearms, go forward with an intuition. There's no sticking it to the other side by snubbing Colt's gun giveaway. The fight is won by those who wield the best tech en masse. That's it. No way around it, only through. This is our world now.

Unfortunately I have finally figured out what this reminds me of:

‘He drew himself up then and began to declaim, as if he were making a speech long rehearsed. “The Elder Days are gone. The Middle Days are passing. The Younger Days are beginning. The time of the Elves is over, but our time is at hand: the world of Men, which we must rule. But we must have power, power to order all things as we will, for that good which only the Wise can see.
‘ “And listen, Gandalf, my old friend and helper!” he said, coming near and speaking now in a softer voice. “I said we, for we it may be, if you will join with me. A new Power is rising. Against it the old allies and policies will not avail us at all. There is no hope left in Elves or dying Númenor. This then is one choice before you, before us. We may join with that Power. It would be wise, Gandalf. There is hope that way. Its victory is at hand; and there will be rich reward for those that aided it. As the Power grows, its proved friends will also grow; and the Wise, such as you and I, may with patience come at last to direct its courses, to control it. We can bide our time, we can keep our thoughts in our hearts, deploring maybe evils done by the way, but approving the high and ultimate purpose: Knowledge, Rule, Order; all the things that we have so far striven in vain to accomplish, hindered rather than helped by our weak or idle friends. There need not be, there would not be, any real change in our designs, only in our means.”


Gen AI could be likened to the Ring, an artifact of great power - but all that is done with it turns to evil. Really, I think the better metaphor is the Palantír and Denethor, though. Gen AI manifestly does not have the world-shaking power of the One Ring. The Palantíri, on the other hand, can (or could) be useful, but the Palantír that we actually see in the story is controlled by the will of Sauron and shows only what he wants it to show - Denethor of course believes he has outwitted Sauron and grows increasingly contemptuous of others on his "side" who lack the knowledge gained from the Palantír.

He finally ends up trying to do a murder-suicide with his son, which kinda hits home when I read stories like this:

https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis
And let me use this as the occasion for my first head-on response to @Hygro's position.

Since you mostly don't lionize the productions of AI (except for its utility in coding), it has felt to me as though your position is essentially this. In the future, everybody is going to use these tools for everything they do. So it behooves well-meaning people on the left to get good at using them, just so that, in general they will succeed in the new world. And with that success, they can now make various contributions to progressive causes. You once likened it to how many more gigs a band with a bus could take than could a band without a bus. That is, it doesn't make them better musicians; it just helps them in innumerable practical ways, and therefore more people will hear their music.

That seems a kind of indirect contribution that you see these tools making. The most that has come up so far as a direct contribution is that one could use it for counter-memeing. But that doesn't seem to warrant your degree of zeal: we better all get on board or turn in our liberal cards.

Anyway, I just wanted a post where I separated out indirect from direct contributions from AI, because it seems to me you're usually focused on the indirect ones: this thing is here to stay; it's just how things are going to be done in future, etc.
 
That’s a good word.

On the subject itself, I don’t really see a political angle that presents this kind of near existential threat.

It just seems to me like a developing technology, and I think the way we consider technological advances revolutionary is kind of us imposing these things after the fact: I don't think anyone in 1965 could look at where we are in 1995 and pinpoint a specific development that led to a truly revolutionary change, nor could someone in 1995 see what we would be doing today. These things came in, and we adapted to them more gradually than I think we realize we did.

Please excuse the convoluted wording.
 