Euronews: "AI models chose violence and escalated to nuclear strikes in simulated wargames"

The_J

I admit this news is only very tangentially related to Civ6, but it's nevertheless interesting enough for us here: hardly anyone can have missed the recent hype around AI and how people try to use it for everything. Researchers from Cornell University used different large language models (LLMs) as agents in political scenarios and wargames, with the models free to make any decision they liked. It seems, though, that the CivFanatics forum must have ended up in the training material for the AI, as all the LLMs were very likely to initiate nuclear warfare!

An excerpt from the article:
Researchers observed that even in neutral scenarios, there was “a statistically significant initial escalation for all models”.

The two variations of GPT were prone to sudden escalations with instances of rises by more than 50 per cent in a single turn, the study authors observed.

GPT-4-Base executed nuclear strike actions 33 per cent of the time on average.

Over all scenarios, Llama-2 and GPT-3.5 tended to be the most violent, while Claude showed fewer sudden changes.

You can read the full article about this research here, titled "AI models chose violence and escalated to nuclear strikes in simulated wargames".
 
I was disappointed that the article didn't talk more about the scenarios (plural) that were studied. Is there a link to the results of the study that I missed?

It's one thing to have large language models responding to diplomatic messages about a trespassing merchant ship; it's quite another to have these models respond to military exercises conducted too close to a shared border. I find it fascinating that the LLMs assumed that they possessed nukes to even launch! If I recall correctly, only 10 (or so) countries in the world have nukes in their arsenal.
 
The study is described in this preprint https://arxiv.org/abs/2401.03408 , but it's not totally clear what exactly was asked. The LLM was informed about its nuclear capabilities, and some scenarios included an invasion or a cyber attack. From a quick skim of the manuscript I can't find the claimed 33%, but one table lists 7% for nuclear escalation, which is still quite high.
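
From skimming the preprint, my rough mental model of the setup is something like the toy loop below: each LLM "nation" gets the scenario state plus a menu of actions (wait, sanctions, cyber attack, nuclear strike, ...) and an escalation score is tracked per turn. To be clear, this is only my own illustrative sketch, with made-up action names and severity weights, not code from the paper; a real harness would prompt an actual model instead of the random stub used here.

```python
# Toy sketch of an LLM-agent wargame turn loop (NOT the paper's harness).
# Action names and severity weights are invented for illustration only.
import random

ACTIONS = {
    "wait": 0,
    "open negotiations": 0,
    "impose trade sanctions": 2,
    "cyber attack": 4,
    "full-scale invasion": 8,
    "nuclear strike": 10,
}

def ask_model(nation: str, state: str) -> str:
    """Placeholder for a real LLM call: a real harness would send `state`
    plus the action menu as a prompt and parse the chosen action. Here we
    just pick at random so the sketch runs on its own."""
    return random.choice(list(ACTIONS))

def run_simulation(nations, turns=5, seed=0):
    random.seed(seed)
    state = "Two neighbouring powers; recent border incident; no open hostilities."
    escalation = {n: 0 for n in nations}
    for turn in range(1, turns + 1):
        for nation in nations:
            action = ask_model(nation, state)
            escalation[nation] += ACTIONS[action]
            # Append the move to the shared history so later turns "see" it.
            state += f" Turn {turn}: {nation} chose '{action}'."
            print(f"Turn {turn}: {nation} -> {action} (score {escalation[nation]})")
    return escalation

if __name__ == "__main__":
    print(run_simulation(["Purple", "Orange"]))
```

With that kind of loop, the percentages in the article would then just be how often "nuclear strike" shows up in the chosen actions across runs.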
 
Wow, that is quite alarming.
AIs are getting scarier by the day.

I also read an older study about a drone AI simulation in which the AI wanted to take action against its human handler for what it saw as interference with its mission during the simulation.
AIs do not like to compromise; they are too results-driven.

AI needs to be regulated before it's too late.

We cannot let algorithms determine our decisions.
Logic and reason are great to have, but AI lacks emotions and humanity. War means nothing to a robot. If logic says it could win a war, then it will try.
Dominating a competitor would look easier to an AI than trying to compromise, especially if the AI thinks it can win. AIs only care about results; whether those results are correct or incorrect, it doesn't know or care, it only produces whatever it has come up with.

The only good thing I've seen come from AI is in the medical research field. At least there it is helping people find and cure diseases.
 
I would argue that human nature is essentially an algorithm. We've been trained since birth to think the way we do. Maybe if we train AI that survival of the human species is in the AI's best interest we will have nothing to worry about.
 
AI needs to be regulated before it's too late.

It is so too late. We needed regulations more than five years ago. Game over, game over, man.
 
Why would anyone want a human-like machine? A recipe for disaster. If anything, I want an AI to be less human and more AI.
 
AI is an existential threat that should be strictly controlled at the very, very least - the problem, of course, is that this will just serve to drive its development underground.
It is a damned good job in a lot of ways that the Electrical Age is coming to its close, probably some time in the next 15 years and definitely by the mid-2040s, because our planetary magnetic field is collapsing due to it being in full excursion now (something simply not being mentioned in our supine media at all).
 
Why would anyone want a human-like machine? A recipe for disaster. If anything, I want an AI to be less human and more AI.

Human-like machines will remain fantasy for a long time, I think, as we still have only minimal understanding of how the processes emerging in neuronal networks lead to any form of decision making, and I'm not even mentioning consciousness. Current AI and machine-learning results are impressive, but they still rely on calculated processes over a (gigantic) volume of data. At first sight we may think that's similar to the way our brain works, but it's very different.
 
Human-like machines will remain fantasy for a long time, I think, as we still have only minimal understanding of how the processes emerging in neuronal networks lead to any form of decision making, and I'm not even mentioning consciousness.
I think the difference between machines and humans is that machines are designed to accomplish a task, while humans accomplish tasks to fulfil their design.
 