Gemini, Google’s AI chatbot, accused me of a crime
It was disconcerting, to say the least, to find myself identified as a criminal defendant at the top of a recent internet search.
I was struggling to recall the name of a defendant in a vandalism case I had written about, so I “googled” it. The name turned out to be more familiar than I would have guessed.
Here is what Google’s chatbot, Gemini, told me and the rest of the internet: “In June 2025, Algernon D’Ammassa was identified as the Las Cruces City Hall window smasher, with surveillance video showing him breaking 11 windows at 3:30 a.m. on June 14. He was arrested for vandalism, which caused significant damage to the building.”
Following up, I asked Gemini whether I had been convicted of this crime. It reported back that the D’Ammassa caper “ended with a determination that he was incompetent to stand trial and he was released.”
Welcome to Google’s “Gemini era,” in which generative AI has been folded into seemingly every Google product, including its famous search engine.
The days of entering a search string into Google and getting a list of satisfactory results are gone. Google search now features an “AI overview” summarizing search results to spare us the grueling labor of clicking on links and reading articles for ourselves. It has also incorporated an “AI mode” allowing you to chat with the digital parrot.
Despite advance testing, glitches with AI-driven searches drew attention soon after Gemini’s launch in 2024. It was caught presenting information drawn from satirical articles and social media posts as fact, as when it suggested using glue to keep cheese from slipping off pizza. It also reported that dogs played professionally in American sports and that 13 U.S. presidents had earned degrees from the University of Wisconsin-Madison: The actual number is zero. A computer science professor was able to influence Gemini’s summary about him to include multiple Pulitzer Prizes for non-existent books.
Less amusingly, Gemini has misidentified poisonous mushrooms and urged a Michigan college student to die after calling him “a stain on the universe.”
Google, Meta, Microsoft and OpenAI have all faced defamation lawsuits over their AI models fabricating criminal histories and putting them in search results for the world to see. These imitation-intelligence apps (as I still call them) do not read or think, much less act out of malice, for all their storied computational power.
Gemini was obviously grabbing my byline and confusing me with the defendant, a rich bit of irony for a crime reporter. The overview included a link to my story, but there’s no telling how many people doing a quick search will bother to click through to the article and compare it with the chatbot’s summary.
That’s one of the contentious issues with this technology: It diverts traffic from published articles by scraping their content and summarizing it, sometimes haphazardly.
The only way I could find to report the issue and, I hoped, clear my name was to click a “thumbs-down” icon and protest my innocence in a reply box.
“The vast majority of AI Overviews are factual and we’ve continued to make improvements to both the helpfulness and quality of responses,” a spokesperson for Google told me. “When issues arise — like if our features misinterpret web content or miss some context — we use those examples to improve our systems, and may take action under our policies.”
Google says Gemini is getting better at distinguishing smart-aleck content from real information, doing math and avoiding inaccuracies or “hallucinations” in its responses. That sounds good, but there is still a sense that the consumer is testing a powerful and dangerous product, one that struggles as much as human beings do to make sense of an internet that is lousy with AI slop and spam.
Gemini cleared my name a day after I reported my problem, with subsequent results for the same search terms reporting the real name of the defendant and identifying me as a reporter who wrote about the case.
My name seems to have emerged unscathed from this brush with infamy. Yet the experience illustrates the value of reading articles rather than relying on a chatbot’s deduction game, even if Google has shoved the primary sources to the side in favor of its “AI Overview.”
Algernon D’Ammassa is the Albuquerque Journal’s Southern New Mexico correspondent. He can be reached at adammassa@abqjournal.com.