Artificial intelligence could be used to generate “unprecedented quantities” of realistic child sexual abuse material, an online safety group has warned.
The Internet Watch Foundation (IWF) said it was already finding “astoundingly realistic” AI-made images that many people would find “indistinguishable” from real ones.
Web pages the group investigated, some of which were reported by the public, featured children as young as three.
The IWF, which is responsible for finding and removing child sexual abuse material on the internet, warned the images were realistic enough that it may become harder to spot when real children are in danger.
IWF chief executive Susie Hargreaves called on Prime Minister Rishi Sunak to treat the issue as a “top priority” when Britain hosts a global AI summit later this year.
She said: “We are not currently seeing these images in huge numbers, but it is clear to us the potential exists for criminals to produce unprecedented quantities of life-like child sexual abuse imagery.
“This would be potentially devastating for internet safety and for the safety of children online.”
Risk of AI images ‘increasing’
While AI-generated images of this nature are illegal in the UK, the IWF said the technology’s rapid advances and increased accessibility meant the scale of the problem could soon make it hard for the law to keep up.
The National Crime Agency (NCA) said the risk is “increasing” and being taken “extremely seriously”.
Chris Farrimond, the NCA’s director of threat leadership, said: “There is a very real possibility that if the volume of AI-generated material increases, this could greatly impact on law enforcement resources, increasing the time it takes for us to identify real children in need of protection”.
Mr Sunak has said the upcoming global summit, expected in the autumn, will debate the regulatory “guardrails” that could mitigate future risks posed by AI.
He has already met major players in the industry, including figures from Google as well as ChatGPT maker OpenAI.
A government spokesperson told Sky News: “AI-generated child sexual exploitation and abuse content is illegal, regardless of whether it depicts a real child or not, meaning tech companies will be required to proactively identify content and remove it under the Online Safety Bill, which is designed to keep pace with emerging technologies like AI.
“The Online Safety Bill will require companies to take proactive action in tackling all forms of online child sexual abuse including grooming, live-streaming, child sexual abuse material and prohibited images of children – or face huge fines.”
Offenders helping each other use AI
The IWF said it has also found an online “manual” written by offenders to help others use AI to produce even more lifelike abuse images, circumventing safety measures that image generators have put in place.
Like text-based generative AI such as ChatGPT, image tools like DALL-E 2 and Midjourney are trained on data from across the internet to understand prompts and provide appropriate results.
OpenAI, the maker of DALL-E 2 and ChatGPT, and Midjourney both say they limit their software's training data to restrict its ability to make certain content, and block some text inputs.
OpenAI also uses automated and human monitoring systems to guard against misuse.
Ms Hargreaves said AI companies must adapt to ensure their platforms are not exploited.
“The continued abuse of this technology could have profoundly dark consequences – and could see more and more people exposed to this harmful content,” she said.