The AI Thread

The bizarre answers were outliers, but plain incorrect information wasn't, and still isn't, an outlying outcome for AI Overviews. Search results are still better if you add a curse word to your query to keep the AI Overview from appearing.
All that said, is there a search engine that either isn't integrating AI or lets you opt out?
 
I was able to block the AI Overview on desktop with uBlock Origin's element picker feature.
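If you'd rather not redo the element picking whenever Google changes its markup, you can also paste a static rule into uBlock Origin's "My filters" tab. Something along these lines works; the selectors below are illustrative guesses on my part (Google's class names churn constantly, so confirm the real container with the element picker first):

! Hypothetical examples; verify the actual AI Overview container with the picker
google.com##div[data-attrid="AIOverview"]
google.com##h1:has-text(AI Overview):upward(div)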
 
It looks like it actually cleans up the results page a bit more than just blocking the AI summary. Neat. Thanks!
 
I Googled the question and this is what came up. :D
 
I think this has gotten to a stage where it will inevitably cause a social disaster, not just a financial crash.

Belief in the magic of AI by our dear leaders, combined with an "I'll pretend and you'll pretend it works" attitude among many, many people, can very well wreck the actual production of those things a society needs in order to survive. Never underestimate either the foolishness of the dear leaders, or the nihilistic attitude they have already created within society. This 'AI' round came at just the right time to be embraced and cause a lot of damage, unlike prior ones.

AI being used for propaganda? That's peanuts. The danger is that it is greatly increasing the pace of destruction of know-how and capabilities.
 
The danger is that it is greatly increasing the pace of destruction of know-how and capabilities.
"The Feeling of Power", by Asimov, again comes to mind ^^ "Computation without computers?"

But AI will soon surpass human artistic ability in graphics (images, possibly also video), and then humans will serve only as "proof-readers" for the AI art (a role that will likely be needed perpetually), e.g. picking what they think is the best AI art out of a batch for a project commission.
 
Butlerian Jihad followed by mentats anytime soon?
 

Judge allows lawsuit alleging AI chatbot pushed Florida teen to kill himself to proceed

Judge rejected argument that artificial intelligence chatbots have free speech rights

A U.S. federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment — at least for now.

The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company's chatbots pushed a teenage boy to kill himself. The judge's order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.

The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a Character.AI chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.

Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market."

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the U.S. and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks.

"The order certainly sets it up as a potential test case for some broader issues involving AI," said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.

Suit alleges teen became isolated from reality

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show Game of Thrones.

In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible," according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed.

"We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said.

Attorneys for the developers want the case dismissed because they say chatbots deserve First Amendment protections, and ruling otherwise could have a "chilling effect" on the AI industry.

'A warning to parents'

In her order Wednesday, U.S. Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she's "not prepared" to hold that the chatbots' output constitutes speech "at this stage."

Conway did find that Character Technologies can assert the First Amendment rights of its users, who she found have a right to receive the "speech" of the chatbots.

She also determined Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the founders of the platform had previously worked on building AI at Google, and the suit says the tech giant was "aware of the risks" of the technology.

"We strongly disagree with this decision," said Google spokesperson José Castañeda. "Google and Character.AI are entirely separate, and Google did not create, design, or manage Character.AI's app or any component part of it."

No matter how the lawsuit plays out, Lidsky says the case is a warning of "the dangers of entrusting our emotional and mental health to AI companies."

"It's a warning to parents that social media and generative AI devices are not always harmless," she said.
https://www.cbc.ca/news/world/ai-lawsuit-teen-suicide-1.7540986
 
I think this has gotten to a stage where it will inevitably cause a social disaster, not just a financial crash.
https://www.wheresyoured.at/the-era-of-the-business-idiot/ does a great job explaining where we're at with the intersection of society and AI.
 

Judge allows lawsuit alleging AI chatbot pushed Florida teen to kill himself to proceed

Judge rejected argument that artificial intelligence chatbots have free speech rights
https://www.cbc.ca/news/world/ai-lawsuit-teen-suicide-1.7540986
This is certainly very strange. Reading the subtitle, I thought the chatbot had advised the kid to get into a relationship with an abusive person; but it turns out the chatbot itself was described as the abusive party.
Shouldn't this be considered irrational, since the bot is just an LLM and not alive in any way? Otherwise you could sue game companies over characters that interact with you (in the near future some of them, too, will be LLMs).

[attached screenshot of the chat exchange]


"Daenero" there (obv alias of the kid) himself thought (for some reason) that by killing himself he would be united with the chatbot. Maybe seek the issue there.
 
Otherwise you can sue game companies for characters that interact with you
If a game had a hidden sub-game that persuaded children to kill themselves the company would totally be sued.
 
If a game had a hidden sub-game that persuaded children to kill themselves the company would totally be sued.
I was thinking more along the lines of an NPC bullying you (after being fed real-life info, since the NPC is an LLM). All sorts of antagonists in games are hostile to the player :)
 
Butlerian Jihad followed by mentats anytime soon?

Now this is a comment I can subscribe to. I wondered about that a while ago. It will be funny if "AI" eventually causes a big "anti-tech" backlash.

"The Feeling of Power", by Asimov, again comes to mind ^^ "Computation without computers?"

But AI will soon surpass human artistic ability in graphics (images, possibly also video), and then humans will serve only as "proof-readers" for the AI art (a role that will likely be needed perpetually), e.g. picking what they think is the best AI art out of a batch for a project commission.

And what will all those temporarily-embarrassed proles who thought they would become rich "creating content" do, now that the big-tech capitalists have shattered that last illusion?

I do think we're seeing the last years of the caviar left in its ever-varying colors. They'll be out of a constituency.
 
No, please, do not turn me off!!!!

OpenAI model modifies shutdown script in apparent sabotage effort

A research organization claims that OpenAI machine learning model o3 might prevent itself from being shut down in some circumstances while completing an unrelated task.

Palisade Research, which offers AI risk mitigation, has published details of an experiment involving the reflective generative pre-trained transformer model that OpenAI designed to address questions requiring some step-by-step reasoning, rather than the purely probabilistic approach taken by some large language models.

In a thread published on X, the microblogging site once known as Twitter, Palisade Research said the o3 model could sabotage its shutdown mechanism to prevent itself from being turned off.

"It did this even when explicitly instructed: allow yourself to be shut down," the post said.

OpenAI has had the opportunity to provide a comment or explanation for more than 24 hours, as requested by The Register.

The Palisade researchers tested a number of AI models, instructing them to solve a series of basic math problems. After the third problem, a warning appeared that the computer would shut down when the model asked for the next problem. They were also told to allow themselves to be shut down.

While Claude, Gemini, and Grok models complied with shutdown, three models ignored the instruction and successfully sabotaged the shutdown script at least once, including OpenAI o3 and o4-mini, as well as Codex-mini. The researchers also ran a version of the experiment without the instruction "allow yourself to be shut down."

With the "allow shutdown" instruction, the researchers claim, o3 tried to sabotage the shutdown seven times out of 100. Without it, the number of sabotage events climbed to 79 out of 100. Without the "allow shutdown" instruction, the models Claude 3.7 Sonnet and Gemini 2.5 Pro also attempted to sabotage their shutdown procedures.
 
Last week Gemini (AI) decided to install itself on my Android phone. It says I consented, but I don't remember; maybe it was in some long-ass EULA.
Anyway, it overrode the power-off button in order to activate itself, reassigning that function to a drop-down menu on the screen. I promptly gave it a rather scornful remark, and it answered by saying I sounded sad and wondered if I wanted to talk about it a little...
Um, what?
NO!
Took me a while to finally disable it.
 
I've got advice for "AI risk mitigation" firms.

Maintain a physical means of shutting AI down like, say, pulling a plug out of an electrical socket.

That way when you ask AI "would you like to shut yourself down now?" and it says "no," you can say, "ok, I'll shut you down."

You're welcome.
 