A slew of colorful headlines like “US air force denies running simulation in which AI drone ‘killed’ operator” went predictably viral in the wake of reporting on a virtual test in which a military AI developed some unorthodox strategies to achieve its objective. This came just days after a warning about the existential threats posed by AI from industry figures. It was too good a story to be true, but that may not be what matters in the long run.
In the original version, Col Tucker “Cinco” Hamilton, chief of the USAF’s AI Test & Operations, described a simulated test involving a drone controlled by an AI which had been instructed to destroy enemy air defense systems.
“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” said Hamilton, during the Future Combat Air and Space Capabilities Summit in London. “So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
When that option was removed, the AI instead attacked the operator’s communications link, to prevent them from hindering its mission.
A USAF spokesperson quickly denied that any such test had ever occurred, and suggested that Hamilton’s account was anecdotal rather than literal…which of course it was.
Hamilton himself quickly backtracked, stating in an update that, rather than being a war game, simulation or exercise, the events he described were the outcome of a ‘thought experiment,’ and that he had misspoken when he described it as a simulated test.
“We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,” Hamilton stated. He maintained that the scenario was a valid depiction of the potential dangers of AI.
While this retraction also received some coverage, it was already far too late. ‘A lie will go round the world while truth is pulling its boots on,’ according to the old saying, and that is truer than ever in the age of social media. The correction will at best reach a fraction of the people who heard the original story.
The problem is that the narrative of a creation turning on its creator is an incredibly appealing one. Mary Shelley’s Frankenstein is frequently taken as the classic example of this trope – even if that’s not the real story of the book, this version has become embedded in the popular consciousness. Computers, AI and robots going bad form one of the best-established clichés in SF, from HAL 9000 in 2001 to Skynet’s Terminators, The Matrix, Westworld, Blade Runner and so on.
This narrative appears to be popular because, at heart, humans love scary stories, and nothing is scarier than the unknown. To those who do not understand it, AI appears almost magical: a being with a will and intelligence of its own, one that could threaten us. As long as people believe this, the horror stories will keep coming.
“Wider education about the limitations of AI might help, but our love for apocalyptic horror stories might still win through,” researcher Beth Singler told New Scientist.
Such horror stories make robot or AI-controlled weapons more difficult to develop and field. Even if the political leadership understands the technology, it still has to win trust among those who are going to work with it.
“If soldiers don’t trust the system, they’re not going to want to use it,” national security consultant Zachary Kallenborn told Forbes.
Such stories have perhaps been a factor in the U.S. Army’s long delays over fielding armed remote-controlled ground robots, while the Air Force has flown armed drones for decades. When three SWORDS robots were deployed to Iraq in 2007, they were brought back without ever seeing action due to reported instances of ‘uncommanded movements’. The media turned this into SWORDS swivelling its guns and threatening to open fire like RoboCop’s ED-209; the mundane reality boiled down to a loose wire and one instance where a robot slid backwards down a slope when a motor burned out.
The US Army’s armed robot program has remained in limbo ever since, while Russia has used armed, remote-controlled Uran-9 robots in action.
Another 2007 headline, ‘Robot Cannon Kills 9, Wounds 14,’ described an incident in which a computerized South African anti-aircraft gun apparently went out of control and started firing at people, and was only stopped when one courageous soldier went in to deactivate it. The truth, which emerged a day or two later, was again duller: the gun was at the end of a row of several weapons and accidentally fired a single burst of 15-20 rounds down the line of guns, causing the large number of casualties.
The military will continue to press ahead with AI, as with the Air Force’s project to add artificial intelligence to its force of Reaper drones. Such projects will always cause a sharp intake of breath among the media, the public and the lawmakers they elect. The omnipresent Frankenstein/Terminator narrative risks drowning out discussion of the real issues involved with autonomous weapons, such as ethical considerations, accountability, lowering thresholds, algorithmic bias and ‘digital dehumanisation.’
As has often been noted, most pilots do not like drones, especially ones that may outperform humans. Col. Hamilton, himself a fighter pilot, may not be quite the advocate that AI pilots need. The rogue drone story will become part of AI folklore, and Hamilton’s words may have done much to ensure that AI development is policed, regulated and restrained in the Air Force. The simulated drone attack never happened in reality, but that may not be as important as what people remember.
Source: https://www.forbes.com/sites/davidhambling/2023/06/04/no-a-rogue-us-air-force-drone-did-not-just-try-to-kill-its-operator-but-it-might-as-well-have-done/