OpenAI announced that it has banned several ChatGPT accounts believed to have ties to Chinese government organizations. According to the company, those accounts asked the AI to suggest ways to monitor social media conversations, behavior that OpenAI said violates its national security policies.
In its newest public threat report, OpenAI explained that some users tried to get the chatbot to design “social media listening” tools and other monitoring methods. These requests violate OpenAI’s policy, which states the AI must not help with actions that could harm people or nations.
The report also revealed that OpenAI disabled several Chinese-language accounts involved in phishing and malware operations. Some of those accounts had asked the model for help automating tasks with DeepSeek, a Chinese AI tool. The company said that such behavior is dangerous and violates its guidelines.
OpenAI did not limit its actions to Chinese-linked users. It also banned accounts tied to Russian-speaking criminal groups. According to the report, these groups were using the AI to develop malware or other harmful tools.
OpenAI began issuing public threat reports in February last year. Since then, it says it has uncovered and stopped more than 40 coordinated networks that misused ChatGPT. The company insists that its AI models reject clearly malicious requests. Importantly, OpenAI said it “found no evidence of new tactics or that our models provided threat actors with novel offensive capabilities.” In other words, the bad actors did not gain entirely new powers by using ChatGPT.

OpenAI has grown quickly and now reports more than 800 million weekly users. Recently, through a secondary share sale, the company reached a valuation of about $500 billion, making it one of the most valuable startups in the world.
In more detail, the issue began when accounts suspected of connections to Chinese government institutions used ChatGPT to draft proposals for tracking and analyzing social media activity. They asked the AI to design systems that would continuously monitor posts and conversations online. Such proposals amount to surveillance, work that OpenAI prohibits under its national security rules.
Because these activities can affect people's rights, democracy, or national sovereignty, OpenAI treats them seriously. The company said it banned the Chinese-language accounts partly for helping with phishing campaigns, in which fake messages trick users into handing over private information. Some also asked the AI to assist with malware, software designed to damage or control computers without permission. Among the requests was a plan to automate tasks using DeepSeek, a tool that could assist in large-scale monitoring or data collection.
OpenAI also traced multiple users to Russian-speaking criminal networks that turned to the chatbot for help building malicious software. These users, according to the report, sought to create digital tools for cyberattacks.
Since its first public threat report, OpenAI has continued to publish details of how it fights misuse. The company says it has disrupted more than 40 threat networks, and that its models refused the clearly malicious requests. OpenAI emphasized that although people attempted to use the system for harm, the AI did not give them new capabilities or enable crimes that were previously impossible.
This action comes at a time when countries are racing to shape the future of AI. The United States and China, in particular, are pushing to influence how AI is regulated and used. The concern is that generative AI, systems like ChatGPT that can write text, generate ideas, or answer questions, could be misused for spying, propaganda, misinformation, or hacking.
By banning these accounts, OpenAI is attempting to protect its platform from being turned into a tool for national or global harm. AI firms increasingly carry responsibility because their models are powerful and widely available. OpenAI’s approach is to set rules, monitor behavior, and step in when rules are broken.
While OpenAI responded by banning accounts, the broader question is how to guard AI systems against misuse more generally. Governments, companies, and societies must think about how to balance innovation and security. Some argue for stronger regulation, audits, transparency, and oversight. Others believe self-regulation and public reporting may help.
The company says its AI systems did not create new threats: the misusers gained no new or unexpected powers from ChatGPT and were trying to exploit capabilities that already existed. OpenAI's safeguards, including filters, review systems, and behavior rules, are meant to block attempts at wrongdoing such as surveillance, hacking, or creating dangerous software.
As AI spreads, it becomes more important to trust systems to follow rules and protect users. Every time misuse is detected and blocked, it reinforces public confidence that AI can be controlled. Still, many challenges remain. Bad actors may find new ways to push limits. OpenAI and other AI creators must continuously update defenses and policies.
In banning these accounts, OpenAI shows it takes misuse seriously. It also sends a signal that even powerful users or groups will not be allowed to use AI for surveillance, crime, or control. As AI becomes part of our world, protecting it from abuse is as important as making it better.