Accountability concerns arise with the advancement and deployment of autonomous drones
"The AI-Powered, Totally Autonomous Future of War Is Here", July 2023
...I am in San Diego, California, a main port of the US Pacific Fleet, where defense startups grow like barnacles. Just in front of me, in a tall glass building surrounded by palm trees, is the headquarters of Shield AI...[T]he company makes the V-BAT, an aerial drone that Task Force 59 is experimenting with in the Persian Gulf. Although strange in appearance (shaped like an upside-down T, with wings and a single propeller at the bottom), it’s an impressive piece of hardware, small and light enough for a two-person team to launch from virtually anywhere. But it’s the software inside the V-BAT, an AI pilot called Hivemind, that I have come to see...
...[O]n a large screen, I watch as three V-BATs embark on a simulated mission in the California desert. A wildfire is raging somewhere nearby, and their task is to find it. The aircraft launch vertically from the ground, then tilt forward and swoop off in different directions. After a few minutes, one of the drones pinpoints the blaze, then relays the information to its cohorts. They adjust their flight, moving closer to the fire to map its full extent...
...The simulated V-BATs are not following direct human commands. Nor are they following commands encoded by humans in conventional software, the rigid "if this, then that." Instead, the drones are autonomously sensing and navigating their environment, planning how to accomplish their mission, and working together in a swarm. Shield AI’s engineers have trained Hivemind in part with reinforcement learning, deploying it on thousands of simulated missions, gradually encouraging it to zero in on the most efficient means of completing its task...
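To make the reinforcement-learning idea concrete, here is a minimal, purely illustrative sketch: a tabular Q-learning agent that learns, over thousands of simulated "missions," an efficient policy for locating a fire on a toy grid. This is not Shield AI's code or Hivemind's algorithm; every name and parameter below is hypothetical, chosen only to show how repeated simulated episodes and a reward signal can shape behavior toward efficient task completion.

```python
# Toy sketch (hypothetical, not Shield AI's Hivemind): tabular Q-learning on a
# 5x5 "wildfire search" grid. The agent is rewarded for reaching the fire and
# pays a small cost per step, so efficient search emerges over many episodes.
import random

GRID = 5                                      # 5x5 search area (hypothetical)
FIRE = (4, 3)                                 # hidden target cell
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

Q = {}                                        # Q[(state, action)] -> value estimate
alpha, gamma, eps = 0.5, 0.9, 0.1             # learning rate, discount, exploration

def step(state, action):
    """Apply an action, clamp to the grid, return (next_state, reward, done)."""
    x = min(max(state[0] + action[0], 0), GRID - 1)
    y = min(max(state[1] + action[1], 0), GRID - 1)
    nxt = (x, y)
    if nxt == FIRE:
        return nxt, 10.0, True                # found the fire
    return nxt, -0.1, False                   # step cost encourages efficiency

for episode in range(5000):                   # "thousands of simulated missions"
    state, done = (0, 0), False
    while not done:
        # epsilon-greedy: mostly exploit what was learned, sometimes explore
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
        nxt, reward, done = step(state, action)
        best_next = max(Q.get((nxt, a), 0.0) for a in ACTIONS)
        q = Q.get((state, action), 0.0)
        Q[(state, action)] = q + alpha * (reward + gamma * best_next - q)
        state = nxt
```

The real system is vastly more complex, but the accountability question raised later in this section already appears here in miniature: the learned table of values, not a human-written rule, decides what the agent does next.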
...This version of Hivemind includes a fairly simple sub-algorithm that can identify simulated wildfires. Of course, a different set of sub-algorithms could help a drone swarm identify any number of other targets—vehicles, vessels, human combatants...Still, as astonishing as machine-learning algorithms may be, they can be inherently inscrutable and unpredictable...
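The passage above describes swappable sub-algorithms: the same mission logic can be pointed at different target classes by exchanging the detection component. The following sketch illustrates that design idea only; it is not Shield AI's architecture, and the class and function names are hypothetical.

```python
# Hypothetical sketch of a pluggable target-detection component: the mission
# loop stays the same while the detector sub-algorithm is swapped out.
from typing import Optional, Protocol, Sequence

class Detector(Protocol):
    def score(self, frame: Sequence[float]) -> float:
        """Return a confidence that the target appears in this frame."""
        ...

class WildfireDetector:
    def score(self, frame: Sequence[float]) -> float:
        # stand-in heuristic: fraction of "bright" pixels counts as fire evidence
        return sum(1 for p in frame if p > 0.8) / max(len(frame), 1)

class VehicleDetector:
    def score(self, frame: Sequence[float]) -> float:
        # a different sub-algorithm could be dropped in here without touching
        # the mission loop below
        return 0.0

def run_mission(frames: Sequence[Sequence[float]],
                detector: Detector,
                threshold: float = 0.3) -> Optional[int]:
    """Scan frames with whichever detector is plugged in; report the first hit."""
    for i, frame in enumerate(frames):
        if detector.score(frame) >= threshold:
            return i
    return None
```

The same interchangeability that makes such a design flexible is what the article flags as concerning: retargeting the system from wildfires to vehicles or people changes only a component, not the surrounding autonomy.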
...One need only look to the civilian world to see how this technology can go awry—face-recognition systems that display racial and gender biases, self-driving cars that slam into objects they were never trained to see. Even with careful engineering, a military system that incorporates AI could make similar mistakes. An algorithm trained to recognize enemy trucks might be confused by a civilian vehicle. A missile defense system designed to react to incoming threats may not be able to fully “explain” why it misfired...
...If an autonomous military system makes a deadly mistake, who is responsible? Is it the commander in charge of the operation, the officer overseeing the system, the computer engineer who built the algorithms and networked the hive mind, or the broker who supplied the training data?...