
AI could be giving US lethal edge in Iran war - but there are dangers

Wednesday, 4 March 2026 00:22

By Rowland Manthorpe, technology correspondent

Forget science fiction. The age of AI in war is here.

Israel has used AI systems in Gaza to flag potential targets and help prioritise operations.

The United States military reportedly used Anthropic's model, Claude, during its operation to abduct Nicolas Maduro from Venezuela.

And even after Anthropic got into difficulties with the US administration over exactly how AI should be used in war, the US military still apparently used Claude in its attack on Iran.


It is highly possible, experts say, that the missiles flying over Tehran today are being targeted by systems powered by AI.

"AI is changing the nature of modern warfare in the 21st century. It is difficult to overstate the impact that it has and will have," says Craig Jones, a senior lecturer in political geography at Newcastle University.

"It is a potentially terrifying scenario."

Terrifying or not, it seems there's no going back. If you want a sense of the importance the US military places on AI, a good place to start is a memo sent by defence secretary Pete Hegseth, who styles himself Secretary of War, to all senior military leaders early this year.

"I direct the Department of War to accelerate America's Military AI Dominance by becoming an 'AI-first' warfighting force across all components, from front to back," Mr Hegseth wrote.

This is not an experiment; it is a command - to adopt AI quickly, and at scale.

Or as Hegseth puts it: "Speed Wins".

Yet the scenario in question is not the one that might first spring to mind.

Yes, autonomy is increasing in some areas. In Ukraine, for example, there are drones capable of continuing a mission even after losing contact with a human operator.

But we are not at the stage of autonomous killer robots stalking the battlefield.

"We're not in the Terminator era just yet," says David Leslie, professor of ethics, technology and society at Queen Mary University of London.

The systems in which AI is being embedded - known as "decision support systems" in military jargon - are advisers which flag targets, rank threats and suggest priorities.

AI systems can pull together satellite imagery, intercepted communications, logistics data and social media streams - thousands, even hundreds of thousands of inputs - and surface patterns far faster than any human team.

The idea is that they help cut through the fog of war, allowing commanders to focus resources where they matter most, while potentially being more accurate than tired, overwhelmed, stressed human soldiers.

This means they're not just a tool, says Dr Jones, but a new way of making decisions.

"AI, as we see in our own lives, is more like an infrastructure," he says. "It's built into the system."

"We have this ability to collect that surveillance that we've been doing for some years.

"But now AI gives the ability to act on that and to kill the leader of Iran and to take out serious adversaries and serious enemies and find them in improbable ways in which they may have not been found before."

'A very persuasive tool'

Professor Leslie agrees that the new systems are extremely capable from a military perspective.

"The race for speed is what's driving this uptake," he says. "Making decision-making cycles faster is what brings military advantage of lethality."

An important feature of decision support systems is that the AI doesn't press the button. A human does. That has been the central reassurance in debates about military AI. There is always "a human in the loop".

As OpenAI, the company which makes ChatGPT, put it after announcing a partnership to supply the Pentagon with AI: "We will have cleared forward-deployed OpenAI engineers helping the government, with cleared safety and alignment researchers in the loop."

OpenAI has also emphasised that it had secured agreement with the Pentagon that its technology would not be used in ways that cross three "red lines": mass domestic surveillance, direct autonomous weapons systems and high-stakes automated decisions.

But even with a human in the loop, a question remains.


When you're fighting a war, can a human really check each decision from an AI? When time is compressed and information is incomplete, what does "human oversight" really mean?

"Humans are technically in the loop," says Dr Jones.

"That doesn't mean, in my opinion, that they are in the loop enough to have effective decision-making power and oversight of exactly what's happened. The AI… is a very persuasive tool to people that make decisions."

Or as Professor Leslie puts it: "We are really facing a potential scaled hazard of… rubber stamping, where because of the speed involved, you don't have active human, critical human engagement to assess the recommendations that are being put out by these systems."

And then there's the question of AI's own fallibility.


Testing by Sky News found that neither Claude nor ChatGPT could correctly count a chicken's legs when the chicken did not look the way the model expected.

What's more, the AI insisted it was right, even when it was clearly wrong.

The example came from a paper documenting dozens of similar failures. "It's not a one-off example of animal legs," said lead author Anh Vo.

"The problem is general across types of data and tasks," Vo added.

The reason is that AI systems don't really see the world in the human sense - they guess what's most probable based on past data.

Most of the time, that kind of statistical reasoning is astonishingly effective. The world is predictable enough that probabilities work.

But some environments are by their very nature unpredictable and high stakes.

We are testing the boundaries of this technology in the most unforgiving circumstances imaginable.


(c) Sky News 2026: AI could be giving US lethal edge in Iran war - but there are dangers
