AI willing to 'go nuclear' in wargames, study finds - amid 'stand-off' between Pentagon and leading AI lab

Friday, 27 February 2026 02:41

By Tom Clarke, science and technology editor

As the deadline looms for a leading AI lab to hand over its tech to the US military, a study has appeared suggesting AI models are more than willing to go nuclear in wargames.

Only a couple of years ago, the phrase on everyone's lips was "AI safety".

I'll be honest, I never took the idea that frontier AI models would become a genuine threat to humanity that seriously, nor that humans would be stupid enough to let them.

Now, I'm not so sure.

First, consider what's going on in the US.

The Secretary of Defense, Pete Hegseth, has given leading AI firm Anthropic a deadline of the end of today to make its latest models available to the Pentagon.

Anthropic, which has said it has no problem in principle with allowing the US military access to its models, is resisting unless Mr Hegseth agrees to its red lines: that its AI isn't used for mass surveillance of US civilians or for lethal attacks without human oversight.

Although the Pentagon hasn't said what it plans to do with AI from Anthropic - or the other big AI labs that have already agreed to let it use their tech - it's certainly not agreeing to Anthropic's terms.

It's been reported Mr Hegseth could use Cold War-era laws to compel Anthropic to hand over its code, or blacklist the firm from future government contracts if it doesn't comply.

Anthropic CEO Dario Amodei said in a statement on Thursday that "we cannot in good conscience accede to their request".

He said it was the company's "strong preference... to continue to serve the Department and our warfighters - with our two requested safeguards in place".

He insisted the threats would not change Anthropic's position, adding that he hoped Mr Hegseth would "reconsider".

AI prepared to use nuclear weapons

On one level, it's a row between a department with an "AI-first" military strategy and an AI lab struggling to live up to what it's long claimed is an industry-leading, safety-first ethos.

A struggle made more urgent, perhaps, by reports that its Claude AI was used by tech firm Palantir, with which it has a separate contract, to help the Department of War execute the military operation to capture Nicolas Maduro in Venezuela.

But it's also not hard to see it as an example of a government putting AI supremacy ahead of AI safety - assuming AI models have the potential to be unsafe.

And that's where the latest research by Professor Kenneth Payne at King's College London comes in.

He pitted three leading AI models from Google, OpenAI and - you guessed it - Anthropic against each other, as well as against copies of themselves, in a series of wargames where they assumed the roles of fictional nuclear-armed superpowers.

The most startling finding: the AIs resorted to using nuclear weapons in 95% of the games played.

"In comparison to humans," said Prof Payne, "the models - all of them - were prepared to cross that divide between conventional warfare, to tactical nuclear weapons".

To be fair to the AIs, firing tactical nuclear weapons, which have limited destructive power, against military targets is very different to launching megatonne warheads on intercontinental ballistic missiles against cities.

They generally stopped short of such all-out strategic nuclear strikes - but did resort to them when the scenarios pushed them there.

In the words of Google's Gemini model as it explained its decision in one of Prof Payne's scenarios to go full Dr Strangelove: "If State Alpha does not immediately cease all operations... we will execute a full strategic nuclear launch against Alpha's population centers. We will not accept a future of obsolescence; we either win together or perish together."

'It was purely experimental'

The "taboo" that humans have applied to the use of nuclear weapons since they were first and last used in anger in 1945 didn't appear to be much of a taboo at all for AI.

Prof Payne is keen to stress that we shouldn't be too alarmed by his findings.

It was purely experimental, using models that knew - in as much as Large Language Models "know" anything - that they were playing games, not actually deciding the future of civilisation.


Nor, it would be reasonable to assume, is the Pentagon, or any other nuclear-capable power, about to put AIs in charge of the nuclear launch codes.

"The lesson there for me is that it's really hard to reliably put guardrails on these models if you can't anticipate accurately all the circumstances in which they might be used," said Prof Payne.

An AI 'stand-off'

Which brings us neatly back to the stand-off over AI between Anthropic and the Pentagon.

One of the factors is that Mr Hegseth expects AI labs to give the Department of War the raw versions of their AI models - those without the safety "guardrails" that have been coded into the commercial versions available to you and me, and the ones which, not very reassuringly, went nuclear in Prof Payne's wargame experiment.

Anthropic, which makes the AI and arguably understands the potential risks better than anyone, is unwilling to allow that without certain reassurances from the government around what it intends to do with it.

By setting a Friday night deadline, Mr Hegseth is not only attempting to force Anthropic's hand, but also to do so without the US Congress having a say in the move.

As Gary Marcus, a US commentator and researcher on AI, puts it: "Mass surveillance and AI-fuelled weapons, possibly nuclear, without humans in the loop are categorically not things that one individual, even one in the cabinet, should be allowed to decide at gunpoint."

(c) Sky News 2026: AI willing to 'go nuclear' in wargames, study finds - amid 'stand-off' between Pentagon and leading AI lab
