ChatGPT maker says influence operations failed to gain traction or reach large audiences.
Artificial intelligence company OpenAI has announced that it disrupted covert influence campaigns originating from Russia, China, Israel and Iran.
The ChatGPT maker said on Thursday that it identified five campaigns involving “deceptive attempts to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them”.
The campaigns used OpenAI’s models to generate text and images that were posted across social media platforms such as Telegram, X, and Instagram, in some cases exploiting the tools to produce content with “fewer language errors than would have been possible for human operators,” OpenAI said.
OpenAI said it terminated accounts associated with two Russian operations, dubbed Bad Grammar and Doppelganger; a Chinese campaign known as Spamouflage; an Iranian network called International Union of Virtual Media; and an Israeli operation dubbed Zero Zeno.
“We are committed to developing safe and responsible AI, which involves designing our models with safety in mind and proactively intervening against malicious use,” the California-based startup said in a statement posted on its website.
“Detecting and disrupting multi-platform abuses such as covert influence operations can be challenging because we do not always know how content generated by our products is distributed. But we are dedicated to finding and mitigating this abuse at scale by harnessing the power of generative AI.”
Bad Grammar and Doppelganger largely generated content about the war in Ukraine, including narratives portraying Ukraine, the United States, NATO and the European Union in a negative light, according to OpenAI.
Spamouflage generated text in Chinese, English, Japanese and Korean that was critical of prominent critics of Beijing, including actor and Tibet activist Richard Gere and dissident Cai Xia, and highlighted abuses against Native Americans, according to the startup.
International Union of Virtual Media generated and translated articles that criticised the US and Israel, while Zero Zeno took aim at the United Nations agency for Palestinian refugees and “radical Islamists” in Canada, OpenAI said.
Despite the efforts to influence the public discourse, the operations “do not appear to have benefited from meaningfully increased audience engagement or reach as a result of our services,” the firm said.
The potential for AI to be used to spread disinformation has emerged as a major talking point as voters in more than 50 countries cast their ballots in what has been dubbed the biggest election year in history.
Last week, authorities in the US state of New Hampshire announced they had indicted a Democratic Party political consultant on more than two dozen charges for allegedly orchestrating robocalls that used an AI-generated impersonation of US President Joe Biden to discourage voters from casting ballots in the state’s presidential primary.
During the run-up to Pakistan’s parliamentary elections in February, jailed former Prime Minister Imran Khan used AI-generated speeches to rally supporters amid a government ban on public rallies.