What Is The Pentagon’s Updated Policy On Killer Robots?

The Pentagon has issued an update to its Directive 3000.09, which covers what it terms Autonomy in Weapon Systems and others call 'killer robots.' Current drones like the Air Force and CIA MQ-9 Reapers are operated by remote control: a human being sitting in front of a video screen identifies targets on the ground thousands of miles away, places them in the crosshairs and releases a Hellfire missile or other weapon. Autonomous weapons are different: they pick their own targets without any human intervention. Clear rules are needed on when and how they can be used, and the new directive brings them a step closer.

Ten years ago, when the first version of 3000.09 was released, autonomous weapons looked like science fiction. Now they are very real. The U.N. claimed that Turkish-supplied drones attacked targets autonomously in Libya in 2020, and Russia is now deploying loitering munitions with autonomous capability in Ukraine.

Many activists, like the Campaign to Stop Killer Robots, want an outright ban on autonomous weapons, insisting that any remote weapon remain under meaningful human control at all times. The U.N. has been debating how to control such arms for many years.

However, as the new directive makes clear, the Pentagon is sticking to a different line.

“The DoD has consistently opposed a policy standard of ‘meaningful human control’ when it comes to both autonomous systems and AI systems,” Gregory Allen, director of the Project on AI Governance at the Center for Strategic and International Studies, told me. “The preferred DoD term of art is ‘appropriate levels of human judgement,’ which reflects the fact that in some cases – autonomous surveillance aircraft and some kinds of autonomous cyber weapons, for example – the appropriate level of human control may be little to none.”

Just which autonomous weapons would be permitted, and under what circumstances? Allen believes the previous version of the directive was so unclear that it discouraged any development in this area.

“Confusion was so widespread – including among some DoD senior leaders – that officials were refraining from developing some systems that were not only allowed by the policy, but also expressly exempted from the senior review requirement,” says Allen.

In the ten years since the original 3000.09 was published, not a single weapon has been submitted to the review process it laid out for autonomous weapons.

Allen wrote an essay on this for CSIS last year, describing four areas that needed work: formally defining autonomous weapon systems, clarifying what "AI-enabled" means for the policy, setting out how the review process will handle retraining of machine learning models, and specifying which types of weapons have to go through the arduous review process.

“The DoD has implemented all of them,” says Allen.

In principle, then, this should ensure what the DoD terms a "strong and continuing commitment to being a transparent global leader in establishing responsible policies regarding military uses of autonomous systems."

However, there are some additions which might be seen as loopholes, such as an exemption from senior review for autonomous weapons that defend drones and do not target people ('anti-materiel weapons'), but which would be allowed to target missiles, other drones and potentially other systems.

“The word ‘defending’ is doing a ton of work,” Zak Kallenborn, a policy fellow at the Schar School of Policy and Government at George Mason University told me. “If a drone is operating in enemy territory, almost any weapon could be construed as ‘defending’ the platform.”

Kallenborn also notes that while effectively autonomous weapons like landmines have been in use for over a century, the landscape is changing rapidly because of advances in AI and, in particular, machine learning. These have given rise to systems which are very capable but technically brittle: when they fail, they fail spectacularly, in ways that no human would, for example mistaking a turtle for a rifle.

“Autonomy through AI definitely deserves more concern, given the brittleness and lack of explainability of currently dominant approaches,” says Kallenborn.

The update is not a big one, but it does show the Pentagon's continued commitment to developing effective autonomous weapons and a belief that they can comply with international humanitarian law: distinguishing civilians from military personnel, seeking to avoid harming civilians, and only using proportionate and necessary force.

Campaigners believe that AI will not possess the necessary understanding to make moral judgements in wartime and risks creating a world where war is automated and humans are no longer in control. Others believe the U.S. military will be outmatched by opponents with autonomous weapons unless AI is incorporated at a tactical level, and that too much human involvement slows down military robots.

The argument is likely to continue even as autonomous weapons start to appear, and the results will be followed closely. Either way, it seems the killer robots are coming.

Source: https://www.forbes.com/sites/davidhambling/2023/01/31/what-is-the-pentagons-updated-policy-on-killer-robots/