How China and Others Used OpenAI in Covert Propaganda Campaigns—and OpenAI’s Response

In a revealing report released on May 30, 2024, OpenAI detailed how state actors and private companies from Russia, China, Iran, and Israel exploited its artificial intelligence tools to manipulate public opinion globally. This marked the first time a major AI company disclosed how its specific technologies were used for covert influence campaigns. The report underscores growing concerns about AI’s role in spreading online disinformation, particularly during a year with numerous significant elections worldwide.

Unmasking the Campaigns

OpenAI identified and disrupted five covert influence operations, showing how AI technologies were used to create and disseminate propaganda. These operations included:

  1. Russia’s Doppelganger Campaign: This campaign used OpenAI’s tools to generate anti-Ukraine content in multiple languages, including English, French, German, Italian, and Polish. It also translated and edited articles supporting Russia in the Ukraine conflict and converted them into social media posts. Despite the sophisticated use of technology, these efforts gained little traction.
  2. China’s Spamouflage Network: Attributed to China, this campaign used OpenAI tools to debug code, analyze social media activity, and research current events. The generated content, posted across platforms like Twitter and Medium, targeted critics of the Chinese government.
  3. Iran’s Pro-Iranian Campaign: Linked to the International Union of Virtual Media, this operation used AI to produce and translate articles and headlines promoting pro-Iranian and anti-Israeli sentiments.
  4. Israel’s Zero Zeno Campaign: Run by an Israeli political marketing firm, this campaign used OpenAI’s tools to create fake personas and biographies for social media, posting anti-Islamic messages targeting audiences in the U.S., Canada, and Israel.
  5. A Previously Unknown Russian Campaign: This operation targeted audiences in Ukraine, Moldova, the Baltic States, and the United States, generating comments about the war in Ukraine, Moldova’s political situation, and American politics. It also used OpenAI’s tools to debug computer code that automated posting on Telegram.

The Struggle for Traction

Despite the sophisticated use of AI, these campaigns struggled to build substantial audiences. The propaganda often included errors, such as poor English and obvious AI-generated text, which undermined their effectiveness. OpenAI noted that these operations had not yet created the flood of convincing disinformation that many experts feared.

The Role of Generative AI

Generative AI, such as OpenAI’s ChatGPT, can create large volumes of content quickly, making it a powerful tool for influence operations. However, the report suggests that while AI can make these operations more efficient, it has not significantly expanded their reach or impact. The technology can help produce more polished content, but campaigns still depend on human operators to generate genuine engagement and credibility.

OpenAI’s Countermeasures

OpenAI has been proactive in investigating and disrupting these operations. The company uses its own AI-powered tools to track and dismantle disinformation campaigns, compressing investigations that once took weeks or months into days. This rapid response capability highlights AI’s potential to both create and combat disinformation.

Future Implications

The report raises important questions about the evolving landscape of online disinformation. As generative AI technologies become more advanced, there is a growing need for robust safeguards and vigilant monitoring. The landscape of influence operations is marked by evolution rather than revolution, with AI being just one of many tools used by malicious actors.

OpenAI’s commitment to transparency and periodic reporting on covert influence operations is a positive step towards mitigating the misuse of AI technologies. However, the report also emphasizes that while technology evolves, human operators remain a critical factor in the effectiveness of disinformation campaigns.

AI’s real power in propaganda lies in producing vastly more credible and persuasive content while bypassing language barriers that once constrained foreign influence operations. And this is only the beginning: as the technology advances, these capabilities will grow considerably.
