The world is “nowhere close” to artificial intelligence (AI) becoming an existential threat, according to the Pentagon’s computer intelligence chief who predicts future supremacy in the field will all come down to data.
Recent headlines around generative models like ChatGPT have misled people about their power, according to Dr Craig Martell, the US defence department’s new chief digital and AI officer.
“It’s not a singular technology where if we have AI, we’re going to be successful and if the other guys have AI, we’re going to be in danger,” he said.
“It’s neither a panacea nor Pandora’s box.”
Dr Martell was in London at Defence and Security Equipment International, one of the world’s largest arms fairs.
Following months of hype around large language models and predictions of human replacement by machines, it’s no surprise AI featured prominently at the event.
But according to Dr Martell, the power of AI to plan and execute wars is misunderstood.
“There is not an AI system employed in the modern world that doesn’t take data from the past, build a model from it, and use that to predict the future,” he said.
“So the moment the world is different, that model is no longer maximally effective.”
And in the fog of war, that might mean AI is not much use at all.
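Dr Martell's point is what machine-learning practitioners call distribution shift: a model fitted to yesterday's data quietly degrades once the world stops looking like its training set. A minimal sketch of the effect (hypothetical, synthetic data; scikit-learn chosen only for illustration):

```python
# Illustration of distribution shift: a classifier trained on "past" data
# loses accuracy once the underlying conditions change.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Past" world: class 0 clusters near 0, class 1 clusters near 2.
X_past = np.concatenate([rng.normal(0, 1, (500, 1)), rng.normal(2, 1, (500, 1))])
y_past = np.array([0] * 500 + [1] * 500)

model = LogisticRegression().fit(X_past, y_past)
print("accuracy on past-like data:", model.score(X_past, y_past))

# "New" world: the same two classes, but both have shifted.
X_new = np.concatenate([rng.normal(2, 1, (500, 1)), rng.normal(4, 1, (500, 1))])
y_new = np.array([0] * 500 + [1] * 500)
print("accuracy after the world changes:", model.score(X_new, y_new))
```

The second score collapses because the boundary the model learned no longer matches reality, which is precisely why Dr Martell says such systems are "no longer maximally effective" the moment the world is different.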
But among the weapons systems and surveillance tools at the London arms fair, AI is prominent.
Weapons are increasingly autonomous, missiles can move faster than a human’s ability to make decisions, and the volumes of data available from satellites and drones are increasing exponentially.
Arms firms tout AI as the tool to give commanders the edge, but will the US military allow AI to make life-and-death decisions?
“Well, if I have my way, we won’t,” said Dr Martell. “There will always be a commander’s decision-making that deploys these systems.”
But that’s not to say the Pentagon isn’t aggressively pursuing AI.
It is under pressure from Congress to evaluate and integrate AI into its operations before its rivals do.
China has made it clear military applications of AI are part of its strategy to become a world leader in the field.
Vladimir Putin once declared the nation that achieves dominance in AI “will rule the world”.
But according to Dr Martell, large language models, while technically exciting, are currently too unreliable for use in anything but the lowest risk activities within the defence department – perhaps writing the first draft of a memo.
Other AI tools, like computer vision technology or pattern recognition tools, are already widely used in the military and in business – but each has to be assessed on a case-by-case basis.
Last month the Pentagon launched Task Force Lima, a programme to assess the suitability and safety of the newest generative AI tools.
But according to Dr Martell, the main objective right now is corralling the “exabytes” of data that the US military has access to.
Just like an army marches on its stomach, its AIs will only work if fed high quality data.
“The value of that technology is going to be completely dependent upon the amount and quality of the data that we have,” said Dr Martell.
“A mass amount of uncurated, unlabelled data. It’s not information, it’s noise.”
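That distinction between data and noise is essentially a curation problem: raw records only become training material once they are de-duplicated, validated and labelled. A rough sketch of the idea (hypothetical record types and rules, not any Pentagon pipeline):

```python
# Sketch of turning raw, mixed-quality records into usable labelled data.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Record:
    source: str           # e.g. "satellite" or "logistics"
    payload: str          # raw content
    label: Optional[str]  # None means unlabelled

raw = [
    Record("satellite", "image-0041", "vehicle"),
    Record("satellite", "image-0041", "vehicle"),   # duplicate
    Record("logistics", "fuel,,", None),            # malformed, unlabelled
    Record("satellite", "image-0042", None),        # unlabelled
    Record("logistics", "fuel=1200L", "resupply"),
]

def curate(records):
    """Keep unique, well-formed, labelled records; everything else is noise."""
    seen, curated = set(), []
    for r in records:
        if r.label is None or ",," in r.payload:
            continue
        key = (r.source, r.payload)
        if key in seen:
            continue
        seen.add(key)
        curated.append(r)
    return curated

usable = curate(raw)
print(f"{len(usable)} of {len(raw)} raw records are usable for training")
```

In this toy example only two of five records survive curation – a crude stand-in for Dr Martell's warning that volume alone is worthless without quality.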
The Pentagon has requested $1.4bn (£1.1bn) from Congress for centralising its data – everything from satellite surveillance imagery to troops’ fuel and food consumption.
The route to AI supremacy sounds more like an IT systems overhaul than a rise of robot warriors.
And while the US military is pursuing a strategy of AI superiority, comparisons to an arms race like the one for nuclear weapons are misplaced.
“Where you either know how to do this, in which case you’ve unleashed a Promethean fire, or you don’t know how to do it, I don’t think this [AI] is anywhere close to that,” said Dr Martell.