OpenAI has banned accounts using ChatGPT for malicious purposes
Misinformation and surveillance campaigns were uncovered
Threat actors are increasingly using AI for harm
OpenAI has confirmed it recently identified a set of accounts involved in malicious campaigns, and banned the users responsible.
The banned accounts involved in the ‘Peer Review’ and ‘Sponsored Discontent’ campaigns likely originate from China, OpenAI said in its February 2025 report, ‘Disrupting malicious uses of our models: an update’, and “appear to have used, or attempted to use, models built by OpenAI and another U.S. AI lab in connection with an apparent surveillance operation and to generate anti-American, Spanish-language articles”.
AI has facilitated a rise in disinformation, giving threat actors a useful tool for disrupting elections and undermining democracy in unstable or politically divided nations – and state-sponsored campaigns have used the technology to their advantage.
Surveillance and disinformation
The ‘Peer Review’ campaign used ChatGPT to generate “detailed descriptions, consistent with sales pitches, of a social media listening tool that they claimed to have used to feed real-time reports about protests in the West to the Chinese security services”, OpenAI confirmed.
As part of this surveillance campaign, the threat actors used the model to “edit and debug code and generate promotional materials” for suspected AI-powered social media listening tools – although OpenAI was unable to identify posts on social media following the campaign.
ChatGPT accounts participating in the ‘Sponsored Discontent’ campaign were used to generate comments in English and news articles in Spanish, consistent with ‘spamouflage’ behavior, primarily using anti-American rhetoric, probably to spark discontent in Latin America – namely Peru, Mexico, and Ecuador.
This isn’t the first time Chinese state-sponsored actors have been identified using ‘spamouflage’ tactics to spread disinformation. In late 2024, a Chinese influence campaign was discovered targeting US voters with thousands of AI-generated images and videos, mostly low-quality and containing false information.
You might also like
Take a look at our picks for the best AI tools around
Check out our recommendations for the best malware removal software
Norton boosts AI scam protection tools for all users