Given what has happened to the world in the 2020s, 2018 seems like a long time ago.
That was when Google employees protested their company’s involvement in Project Maven, a U.S. Department of Defense project that harnessed artificial intelligence and machine learning for military surveillance.
Around 4,000 Google employees signed an open letter accusing the company of being “in the business of war.”
Fast forward to 2022, and the arguments on the ethical use of AI in defense and war seem to have been put to one side in the maelstrom of Russia’s invasion of Ukraine. As the current conflict has shown, AI has put the drone at the forefront of military aviation.
The conferences, consultations, and research papers the technology industry and defense practitioners produced on this subject before the current conflict now seem irrelevant as AI-enabled drones on the Ukrainian side take out Russian tanks and ships, AI is paired with satellite images to deliver vital intelligence, and AI is used to identify the dead using facial recognition technology.
The ethics depend on which side is using AI and which side is in the right. But beyond that, it is becoming increasingly difficult to ringfence AI technology and its applications.
“Ruler of the world”
The next arms race is already here, and it is as much about gaining advantage through AI as it is about nuclear deterrence.
The Russian Ministry of Defence, as a case in point, announced the creation of a special AI department with its own budget, which began work last December. As far back as 2017, President Putin said that the advent of AI raised “colossal opportunities and threats that are difficult to predict,” but that whoever mastered it “becomes the leader in this sphere and the ruler of the world.”
Pull out quote: AI is just too powerful not to use in a war context when a nation’s existence is on the line
Boiled down, there are two critical factors at play. AI is just too powerful not to use in a war context when a nation’s existence is on the line. It is also becoming so ubiquitous in civilian applications that some are crossing over and finding use on the Ukrainian battlefields.
Whether we rewind to reconsider any of these issues once the current war is over remains to be seen. But when this war is remembered in the future, it will be as the world’s first digital war and the first in which AI was widely used.
Drone strikes are not the only application of AI in this war. Russia has used it to make deepfake videos for disinformation, and both sides have used it to analyze the vast amounts of data flowing from the battlefield and social media as they plan their attacks.
The sometimes-controversial facial recognition company Clearview AI has handed over its technology to the Ukrainian Government. The BBC has reported it has already been used to identify more than a thousand people, both dead and alive. It is also being used at checkpoints by Ukrainian soldiers seeking to identify enemy suspects.
Clearview has aggregated billions of social media photographs into a vast database that is essentially a search engine for faces. While the company has been challenged over its civilian applications, with the U.K. Information Commissioner fining it for failing to inform users, its technology is now in full use in the war.
“We saw images of prisoners of war, of people fleeing, and we thought our technology could be useful for people identification and verification,” said Clearview chief executive Hoan Ton-That.
Another AI company active in Ukraine is U.S. government contractor Maxar, which provides satellite imagery with an AI overlay from a constellation of 90 orbiting satellites.
Big data firm Palantir Technologies, which applies its software to images taken from hundreds of satellites, was vital in delivering intelligence before the Russian invasion. In one two-day period, Palantir analyzed images from 1,200 satellite flyovers.
Even if the Ukrainian war is contained, the world would seem to be entering a new Cold War, one in which AI will be critical. As the conflict intensifies, the ethical dimension of the defense industries’ use of AI may become a casualty.
This is despite the risks of algorithmic malfunctions on the battlefield and in weapons systems and the potentially catastrophic accidents that could occur.
The Ukrainian war has changed so much about our perceptions of the stability of the global order, and it may also have changed the idea that AI needs to be guided by an ethical framework in a conflict that is an existential threat.
Would Google employees respond differently today to a new Project Maven? Perhaps they would, and perhaps we’ll find out soon.
Lachlan Colquhoun is the Australia and New Zealand correspondent for CDOTrends and the NextGenConnectivity editor. He remains fascinated with how businesses reinvent themselves through digital technology to solve existing issues and change their entire business models. You can reach him at [email protected].
Image credit: iStockphoto/Aleksandr Mokshyn