Machines have long been good at flying planes, says Sue Halpern in The New Yorker. “The first autopilot system, which involved connecting a gyroscope to the wings and tail of a plane, debuted in 1914.” But now the US military wants to develop a plane that can “fly and engage in aerial combat – dogfighting – without a human pilot operating it”.
In August 2020 the US government organised the AlphaDogfight Trials, a competition to find the best AI dogfighting algorithms. It looked basic: a bunch of coin-sized symbols moving around on a screen “at a stately pace”. But it proved instructive. Once the eight finalists had scrapped it out, the winner took on a human pilot in a simulator. They fought five skirmishes. The AI won every single one.
It’s still early days, of course: transferring the algorithm to an actual aircraft is no easy task. And the military doesn’t want to replace pilots altogether – the aim is for them to fly “in partnership” with the AI, monitoring and intervening when necessary. In theory, the pilots will be able to become “battle managers”, directing squads of unmanned aircraft to different targets. Intelligence suggests that China has already “turned decommissioned fighter jets into autonomous suicide drones that can operate together as a swarm”.
One big problem is trust. Pilots are, understandably, reluctant to give up the controls – and with planes flying at up to 500mph, the algorithms won’t always be able to keep them in the loop. Another is ethics. Do we really want machines deciding whether to fire a missile or drop a bomb?