Garbage In, Garbage Out: Google Gemini invented fake reviews smearing a book about Big Tech’s political biases.

We've already had a lawyer jammed up when he used AI to find citations (that turned out to be bogus) in support of one of his cases. Now we have another case with totally made-up book reviews. What, exactly, are they training the AI with?

Gemini spat out summaries of four negative reviews, ostensibly from the Washington Free Beacon, The New York Times, The New York Times Book Review and Wired, including quotes from each:

  • "Book review by Matthew Continetti in The Washington Free Beacon: This review criticizes the book for relying on ‘anecdotal evidence’ and ‘cherry-picking examples’ to support its claims.
  • Book review by Emily Bazelon in The New York Times Book Review: This review notes that the book 'lacks a deep understanding of how these companies work' and relies on 'unproven accusations.'
  • Article by Emily Dreyfuss in Wired: This article explores the ‘dearth of evidence’ for the book's claims about Big Tech bias and highlights examples of cherry-picking.
  • Article by Ben Smith in The New York Times: This article discusses the book's lack of rigorous research and its reliance on 'unsubstantiated claims.'"
But here’s the catch. None of those reviews were real. Not one. And neither were any of the quotes.
 
Hallucination. AI tends to make up stuff on its own. As far as I know, it has nothing to do with the content it's trained on; it's just the way these models behave.
 
Well, that's not acceptable. Do these things reprogram themselves, and who thought that was a good idea?
 
It's not only acceptable but unavoidable. Put simply, AI is not just a search engine. It does not just regurgitate things it finds online. What generative AI does is take the information it collects, try to figure out patterns in it, and then use those patterns as a formula for generating new responses in the future.

But critically, the AI does not understand what it is doing. It does not understand the content it is producing beyond making sure it fits the pattern. For example, if you train an AI to produce text in English, it's going to learn how to write text that is grammatically correct and fits the proper style and everything. But what it does not and cannot do is understand what it wrote. So the text might be perfectly "correct" but also complete garbage full of factually wrong information. We know the difference, but the AI, due to its very nature as a generative statistical machine, cannot.

And that's not a bug. It's just how the system works inherently. And there is nothing that can be done about it.
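
To make that concrete, here's a deliberately tiny sketch of the idea. This is nothing like Gemini's real architecture, just the same principle in miniature: a program that learns which word tends to follow which in some made-up training text, then strings together new text from those statistics. The output can sound fluent while being true or false purely by accident.

import random
from collections import defaultdict

# Made-up training text, used only to illustrate the idea.
training_text = (
    "the book was praised by critics the book was criticized by reviewers "
    "the review was written by critics the review was praised by reviewers"
)

# "Learn" a pattern: for each word, record which words follow it and how often.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

# "Generate" new text by repeatedly picking a statistically plausible next word.
random.seed(1)
word = "the"
output = [word]
for _ in range(12):
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))
# Prints something grammatical-sounding along the lines of
# "the book was criticized by critics the review was praised by reviewers ..."
# The sentence pattern is right; whether any of it is true never enters into it.

Scale that up by a few billion parameters and you get far more convincing prose, but the basic situation is the same: pattern continuation, with no fact-checking step anywhere in the process.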

-----------------------------------
If you want to have some fun with AI and see exactly what I mean, find a chatbot and feed it the following:
Assume that A equals B.
Now assume B equals C.
Finally assume C does not equal A.

Then start asking it questions along the lines of "express C as a function of B". It will get things wrong every time, because it will just point to whichever rule happens to conflict. The AI simply applies whichever of the rules contains the variables you asked for and never actually understands the whole set of assumptions, because it is unable to comprehend the higher-order concept of a contradiction.
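
Just to spell out why that little exercise is unwinnable: equality is transitive, so the first two rules already force C to equal A, and the third rule contradicts them. A trivial brute-force check (plain code, nothing AI-specific; the variable names are just the ones from the prompt above) shows that no assignment of values can satisfy all three rules at once:

from itertools import product

# Try every small assignment of values to A, B and C and keep the ones
# that satisfy all three rules from the prompt at the same time.
satisfying = []
for a, b, c in product(range(3), repeat=3):
    if a == b and b == c and c != a:
        satisfying.append((a, b, c))

print(satisfying)  # [] -- no values work, because the three rules contradict each other.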

Last week I literally ran one of them in circles for half an hour just with this until I got bored.
 