The U.S. Air Force recently revealed that it had used artificial intelligence to aid targeting decisions for the first time. It turns out this was not simply a test: AI is now embedded in the Air Force’s targeting operation, and that raises serious questions.
Secretary of the Air Force Frank Kendall told the Air Force Association’s Air, Space & Cyber Conference in National Harbor, Maryland, on Sept. 20 that the Air Force had “deployed AI algorithms for the first time to a live operational kill chain.” He did not give details of the strike, such as whether it was carried out by a drone or a piloted aircraft, or whether there were civilian casualties.
The “kill chain” is the entire process by which data gathered by various sensors is analyzed, targets are selected, strikes are planned and ordered, and the results are evaluated. AI takes some of the burden off human analysts, who spend thousands of hours combing through video footage to find, locate and positively identify targets. And the technology is now ready for operational use.
The Air Force Distributed Common Ground System now uses AI for target recognition, reducing the …
“These initial object recognition algorithms are available to Distributed Ground Stations, in order to augment intelligence operations,” an Air Force spokesperson told me.
The Distributed Ground Stations are the 27 geographically scattered elements of the Air Force’s Distributed Common Ground System, the brains of its intelligence-gathering operations, which handles every stage from planning and direction, through collection, processing and exploitation, to analysis and dissemination of the finished intelligence.
This involves sifting a vast amount of data: the daily workload from an average of 50 sorties is over 1,200 hours of video plus thousands of still images and signals intelligence reports. A U-2 or RQ-4 Global Hawk mission requires a crew of 45 intelligence staff to handle the returns; an MQ-9 Reaper requires eight. With more advanced sensors producing more terabytes of data than ever, the Air Force is looking to artificial intelligence to take some of the load.
While AI is now involved, human beings are still very much in charge.
“The human intelligence professionals are the decision-makers,” says the Air Force spokesperson.
There have long been concerns about using AI for such purposes. Rather than the usual nightmare scenario of generals unleashing autonomous killing machines, the concern is that the machines will find the targets and delegate the killing to the humans. In 2018, Google employees forced the company to drop its involvement in the Pentagon’s Project Maven, which was working on computer-vision algorithms to aid analysts in counterinsurgency and counterterrorism operations. The known existence of such programs meant the U.S.A.F. move was not a great surprise.
“In recent years there has been a great deal of work to develop AI-based systems for targeting support,” Arthur Holland at the U.N. Institute for Disarmament Research told me.
Holland says that while people may be in charge, using AI to feed data to the intelligence professionals raises a number of questions. How is the AI tested and validated? How does the operator know when to trust the judgement of the AI? And when a lethal strike is launched on the basis of an AI error, who takes responsibility?
Even with human operators, terrible mistakes happen, such as the recent drone strike in Kabul that killed ten innocent people, including seven children. Analysts thought they saw explosives being placed in the back of a vehicle, when in fact it was being loaded with containers of water. If an AI made the judgement, accountability may become blurred. Is the problem with those who designed the AI, the analysts who trusted it, or the pilot who fired the missile?
Holland is concerned that the new U.S. focus on over-the-horizon strikes will mean more emphasis on remote intelligence gathering, and a greater role for AI.
“Because ‘over-the-horizon’ operations probably lack some of the personnel and infrastructure on the ground that can help confirm the identity or location of a target prior to a strike, it’s possible that targeting software could be used to try and fill that gap,” says Holland.
There is a risk that this will start to roll back human involvement in the kill chain.
“If an AI system is labelling targets, especially in a time-sensitive situation, you become reliant upon the system getting it right, which changes human control,” Jack McDonald, a lecturer in war studies at King’s College London, told me.
In a situation where intelligence staff are struggling to keep up with the flood of data, they may not have time to check everything. It may come down to green-lighting a strike based on the judgement of an AI or seeing a target escape. However, McDonald thinks that the people involved are unlikely to simply accept being pushed into the back seat. As with pilots and drones, nobody wants to see a machine taking their job.
“There will be a lot of professional resistance to complete automation in the military who are very aware of their responsibility when using lethal force,” says McDonald.
The Air Force sees AI as augmenting human capability and improving decision-making. After all, mistakes like the one in Kabul are the result of human error, and tireless, patient machines which see everything will not make the same kind of mistakes as humans. However, given that many AIs are known to contain bias, they may be prone to their own sort of errors.
Whatever the objections, AI is now part of the U.S.A.F. targeting machinery. It may be playing a small role at present, but as the demand increases and the technology improves, that role may expand.
“There are many potential paths AI algorithms can take for a variety of real-world scenarios as their capabilities and reliability are evaluated,” the Air Force spokesperson told me. “The Department will continue to mature algorithms and work to transition them when ready and appropriate over time for use.”