The AI Thread

Stop and think for a second how much money can be made satisfying the demand for creation of 50% of daily worldwide content.
This is the question. At the moment the answer is significantly less than it costs for the electricity to generate that content. As long as the content it produces is mostly "slop" that most people spend effort, electricity and perhaps even money filtering out, that may well not change. Most of the content it is competing against was either produced for free by people like us here, or for page impressions/ad clicks that are becoming worthless as humans reading the web are so massively outnumbered by bots that it is not worth the electricity to serve much of it.
 
Yeah, I'm with Samson: "content" on the internet is not necessarily representative of "demand." Even before Chat and the others, there was plenty of content on the web for which there was no demand; any damn fool can post any damn thing. Plus, they're likely counting as content each little paragraph that Google's AI produces when you do a search. But they've created the illusion of "demand" by just including it as part of the search engine; they actually make it damnably hard to shut off. Furthermore, the more low-to-medium-quality content (slop) is out there, the more people will crave high-level content, and the more slop will be seen as a problem to be surmounted rather than a measure of anything anyone is demanding.

So, to use your analogy, it would be as if, to order a Burger King meal at all, Burger King included with the meal some stale bread. Most customers throw that away as they walk out of the store, because they didn't want it in the first place and it's unpalatable, but Burger King still tracks and touts the "demand" for its stale bread. So Burger King is in a mad race to build more stale bread factories than McDonald's. The winner in that war is the first one to stop wasting money on stale bread factories.

Now a war of Shakespeare vs Shakespeare . . . man, if I only had comic-booking skills.

(It's actually part of my theory about the guy that after the death of Marlowe, he made himself his own rival, e.g. "can I write a comedy [Midsummer Night's Dream] and a tragedy [RnJ] on the same base story?")
 
Gemini, Google’s AI chatbot, accused me of a crime
It was disconcerting, to say the least, to find myself identified as a criminal defendant at the top of a recent internet search.

I was struggling to recall the name of a defendant in a vandalism case I had written about, so I “googled” it. The name turned out to be more familiar than I would have guessed.

Here is what Google’s chatbot, Gemini, told me and the rest of the internet: “In June 2025, Algernon D’Ammassa was identified as the Las Cruces City Hall window smasher, with surveillance video showing him breaking 11 windows at 3:30 a.m. on June 14. He was arrested for vandalism, which caused significant damage to the building.”

Following up, I asked Gemini whether I had been convicted of this crime. It reported back that the D’Ammassa caper “ended with a determination that he was incompetent to stand trial and he was released.”

Welcome to Google’s “Gemini era,” with the incorporation of generative AI into seemingly every Google product, including its famous search engine.

The days of entering a search string into Google and getting a list of satisfactory results are gone. Google search now features an “AI overview” summarizing search results to spare us the grueling labor of clicking on links and reading articles for ourselves. It has also incorporated an “AI mode” allowing you to chat with the digital parrot.


Despite advance testing, glitches with AI-driven searches drew attention soon after Gemini’s launch in 2024. It was caught presenting information drawn from satirical articles and social media posts as fact, as when it suggested using glue to keep cheese from slipping off pizza. It also reported that dogs played professionally in American sports and that 13 U.S. presidents had earned degrees from the University of Wisconsin-Madison: The actual number is zero. A computer science professor was able to influence Gemini’s summary about him to include multiple Pulitzer Prizes for non-existent books.

Less amusingly, Gemini has misidentified poisonous mushrooms and urged a Michigan college student to die after calling him “a stain on the universe.”

Google, Meta, Microsoft and OpenAI have all faced defamation lawsuits over their AI models fabricating criminal histories and putting them in search results for the world to see. These imitation-intelligence apps (as I still call them) do not read or think, much less act out of malice, for all their storied computational power.

Gemini was obviously grabbing my byline and confusing me with the defendant, in an admirable bit of irony for a crime reporter. The overview included a link to my story, but there’s no telling how many people doing a quick search are going to click on the article and compare it to the chatbot’s overview.

That’s one of the contentious issues with this technology: It diverts traffic from published articles by scraping their content and summarizing it, sometimes haphazardly.

The only way I could find to report the issue and hopefully clear my name was to click a “thumbs-down” icon and protest my innocence in a reply box.

“The vast majority of AI Overviews are factual and we’ve continued to make improvements to both the helpfulness and quality of responses,” a spokesperson for Google told me. “When issues arise — like if our features misinterpret web content or miss some context — we use those examples to improve our systems, and may take action under our policies.”

Google says Gemini is getting better at distinguishing smart aleck content from real information, doing math and avoiding inaccuracies or “hallucinations” in its responses. That sounds good, but there is still a sense that the consumer is testing a powerful and dangerous product, a product that struggles as much as human beings to make sense of an internet that is lousy with AI slop and spam.

Gemini cleared my name a day after I reported my problem, with subsequent results for the same search terms reporting the real name of the defendant and identifying me as a reporter who wrote about the case.

My name seems to have emerged unscathed from this brush with infamy. Yet the experience illustrates the value of reading articles rather than relying on a chatbot’s deduction game, even if Google has shoved the primary sources to the side in favor of its “AI Overview.”

Algernon D’Ammassa is the Albuquerque Journal’s Southern New Mexico correspondent. He can be reached at adammassa@abqjournal.com.
 
When you add AI to our modern internet, the future of both seems really f'ed up.
 
Deepseek is hidden behind the Great Firewall, so it can only do a fraction of the tasks that modern western GPTs can do. Also, FREE doesn't mean unlimited. You hit a context/user limit pretty fast and then wait on a bench for hours - China is limited by compute just as badly as the USA.

It's actually not as limited or censored as you think it is (as long as you don't ask about Tiananmen).

In fact I've run into far, and I mean FAR, worse censorship in American AI. Like it's deliberately stopping me or misdirecting me from answering my questions, even simple ones. Especially when it comes to economics, politics, and mathematics.

The Chinese one clearly is trained off of stuff outside the firewall, they just limit it from saying bad things about China specifically. But it has no problem talking about anything else or about anyone else.

In other words, American AI comes off very HR/corporate propagandy while Deepseek comes off as more nation-state propagandy, but only about its native government. Now you would think that's equally bad, but I find the corporate propaganda somehow worse at times for basic stuff, like it doesn't want me to learn specific things that pertain to figuring out certain power structures among the elite. Like the corporate AI is essentially trying to misdirect and make everyone financially illiterate, and dumbs everything down just like America's stupid corporate-infected education system.

In other words I feel like I'm back in public school (our failing public school system) with the Yank AI.

Also, I'd say there's really not that much of a wait anymore, unlike when Deepseek first blew up. Remember the media then running the usual anti-China story afterwards? Yeah, that probably scared the Yanks away and made the traffic pretty manageable. Westerners falling for yellow peril as always.
 
Yeah, I'm with Samson: "content" on the internet is not necessarily representative of "demand."

I won't argue - at this point there likely isn't a 1:1 correlation. The current state of monetisation is less important than the "exponential" rate at which AI-generated content has caught up with human-generated content. In my own field of finance I've observed how the employment of AI by various large periodicals has, in the span of several years, rapidly elevated the quality of data points produced daily by the investment community. For a long time the vast majority of articles on the subject of international finance were of frankly pitiful quality, below what generative AI now produces. Now that AI skilfully aggregates daily data, the transparency of financial markets has improved. Law is another domain where low-to-medium-quality synthetic thinking amply aids lay people in generating professional contracts and analysing incoming contracts in their daily endeavours.

So yeah, I agree that content does not fully represent demand, but also, monetisation is not zero. I guess we'll have to wait for a cleaner picture of the correlation, when it emerges. I pulled that chart from the latest YouTube video by Computerphile, who argues against AI slop (I believe). He is very knowledgeable on various subjects of computer science.

The Chinese one clearly is trained off of stuff outside the firewall, they just limit it from saying bad things about China specifically. But it has no problem talking about anything else or about anyone else.

Yeah, but it has the problem that integrating with western services on the fly - something we do easily here in the West - is insurmountable for an integrated software system which operates from within the moat. The more intertwined western AI becomes with western services, the more difficult it will be for Chinese AI to catch up. In the end, we'll have two completely separate AI ecosystems, each protecting itself from international competition by integrating only local services. This dynamic currently helps the American side more, because the relevant software stack is generally more developed in the West.

As for something less Yank, I've found that Grok is more open-minded and far less restrictive than its older brother ChatGPT. It's ironic, and it shows, that one was grown in California while the other was grown in Texas. My own path was ChatGPT - Deepseek - (local models run on GPU) - ChatGPT - Grok. I still maintain that ChatGPT-5 is slightly more intelligent than any of the others, but Grok has become more competitive for me personally in terms of price per unit of intellect, so to speak. :)
 
Yeah, but it has the problem that integrating with western services on the fly - something we do easily here in the West - is insurmountable for an integrated software system which operates from within the moat. The more intertwined western AI becomes with western services, the more difficult it will be for Chinese AI to catch up. In the end, we'll have two completely separate AI ecosystems, each protecting itself from international competition by integrating only local services. This dynamic currently helps the American side more, because the relevant software stack is generally more developed in the West.

You don't think they have some kind of bypass when training new models? If they are subsidized by their state, I'm sure the central politburo has its own official way to bypass the firewall for specific industries that are entrusted to do so for research, scientific advancement and correspondence. The firewall only applies to civvies (and those within the government who are not so trusted), and even you should know that.

Many civvies know how to bypass it illegally themselves without issue anyway, and do you think TikTok isn't some kind of data siphon on the West that they can use for developing their AI?

As for something less Yank, I've found that Grok is more open-minded and far less restrictive than its older brother ChatGPT. It's ironic, and it shows, that one was grown in California while the other was grown in Texas. My own path was ChatGPT - Deepseek - (local models run on GPU) - ChatGPT - Grok. I still maintain that ChatGPT-5 is slightly more intelligent than any of the others, but Grok has become more competitive for me personally in terms of price per unit of intellect, so to speak. :)

Grok won't let you criticize Elon. Let that sink in.

Also, whatever silver linings Grok currently has, you don't think that maybe this won't last forever? Considering Elon's declining mental state (ketamine), it will probably soon devolve into a trainwreck. He likes to micromanage, does he not?
 
Law is another domain where low-to-medium-quality synthetic thinking amply aids lay people in generating professional contracts and analysing incoming contracts in their daily endeavours.

I do feel that summarising contracts should be something an AI can do, because it is the meeting of minds the contract represents that is the truth, rather than some external reality. If the document is unclear enough that a misunderstanding is computationally provable, one should be able to argue that it should be interpreted in favour of the party that did not draft it. Yet another reason you should run your own models: so you can record the exact state of the model and the seed that created the interpretation you rely on in court.
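As a rough illustration of that record-keeping idea, here is a minimal sketch of reproducible local inference, assuming a locally downloaded Hugging Face model; the model path, prompt, and decoding settings are just placeholders, not anyone's actual setup. The point is simply to pin the seed and log everything needed to rerun the same generation:

```python
# Minimal sketch: fix the RNG seed, run a local model, and log the model
# identifier, decoding parameters, and hashes of prompt and output so the
# exact run can be reproduced or verified later. Paths/prompt are placeholders.
import hashlib
import json

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

MODEL_DIR = "/models/my-local-llm"  # hypothetical local checkpoint directory
SEED = 12345

set_seed(SEED)  # seeds the Python, NumPy and torch RNGs used during sampling

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, torch_dtype=torch.float16)

prompt = "Summarise the obligations of each party in the contract below:\n..."
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs, max_new_tokens=512, do_sample=True, temperature=0.7
)
summary = tokenizer.decode(output_ids[0], skip_special_tokens=True)

# The "evidence" record: everything needed to re-create or check the run.
record = {
    "model": MODEL_DIR,
    "seed": SEED,
    "do_sample": True,
    "temperature": 0.7,
    "max_new_tokens": 512,
    "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    "output_sha256": hashlib.sha256(summary.encode()).hexdigest(),
}
print(json.dumps(record, indent=2))
```

Note that with do_sample=False (greedy decoding) the seed barely matters, since the output is already deterministic for a given model and prompt; the seed only pins down sampled runs, and hardware or library versions can still nudge the numerics, so a thorough record would note those too.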
 