Militaries are responding to the call. NATO announced on June 30 that it is establishing a $1 billion innovation fund that will invest in early-stage startups and venture capital funds developing "priority" technologies, such as artificial intelligence, big-data processing, and automation.
Since the start of the war, the UK has launched a new AI strategy specifically for defense, and Germany has earmarked just under half a billion dollars for AI research within a $100 billion cash injection for its military.
“War is a catalyst for change,” said Kenneth Payne, who leads defense studies research at King’s College London and is the author of the book I, Warbot: The Dawn of Artificially Intelligent Conflict.
The war in Ukraine has increased the urgency to push more AI tools onto the battlefield. Those with the most to gain are startups like Palantir, which hope to make money as soldiers race to update their arsenals with the latest technologies. But the long-standing ethical concerns about the use of AI in warfare have become more urgent as the technology becomes more sophisticated, while the prospect of restrictions and regulations on its use seems as distant as ever.
The relationship between technology and the military has not always been so friendly. In 2018, after protests and outcry from workers, Google pulled out of the Pentagon’s Project Maven, an effort to build image recognition systems to improve drone strikes. The episode sparked a heated debate about human rights and the morality of developing AI for autonomous weapons.
It also led leading AI researchers such as Yoshua Bengio, a Turing Award winner, and Demis Hassabis, Shane Legg, and Mustafa Suleyman, the founders of the leading AI lab DeepMind, to pledge not to work on lethal AI.
But four years later, Silicon Valley is closer to the world's militaries than ever. And it's not just big companies, either: startups are finally getting a look in, says Yll Bajraktari, who previously served as executive director of the U.S. National Security Commission on AI (NSCAI) and now works for the Special Competitive Studies Project, a group that lobbies for wider adoption of AI in the US.
Companies selling military AI make expansive claims about what their technology can do. They say it can help with everything from the mundane to the lethal: screening résumés, processing satellite data, and recognizing patterns in data to help soldiers make faster decisions on the battlefield. Image recognition software can help identify targets. Autonomous drones can be used for surveillance or attacks in the air, on land, or at sea, or can help deliver supplies to soldiers more safely than ground transport allows.