Saturday, April 4, 2026
Logo

China Restricts OpenClaw AI Use After Security Flaws Enable Data Leaks and Hacking Risks

China's CNCERT warns OpenClaw AI agents have critical security flaws enabling prompt injection and data exfiltration, prompting government restrictions on its use.

BusinessBy Robert KingsleyMarch 14, 20264 min read

Last updated: April 1, 2026, 2:51 AM

Share:
China Restricts OpenClaw AI Use After Security Flaws Enable Data Leaks and Hacking Risks

China has restricted the use of OpenClaw AI agents on government systems after cybersecurity researchers identified critical security flaws that could enable prompt injection attacks and data exfiltration. The National Computer Network Emergency Response Technical Team (CNCERT) issued a warning on WeChat, highlighting the platform's weak default security configurations and privileged system access, which could allow malicious actors to seize control of endpoints. The restrictions extend to state-run enterprises and military personnel families, reflecting growing concerns over AI-driven cyber threats.

How OpenClaw AI's Security Flaws Enable Prompt Injection and Data Leaks

OpenClaw, an open-source autonomous AI agent previously known as Clawdbot and Moltbot, is designed to execute tasks independently by accessing system resources. However, CNCERT's analysis revealed that the agent's default settings lack robust security measures, making it vulnerable to prompt injection attacks. These attacks occur when malicious instructions embedded in web content trick the AI into executing unauthorized actions, such as leaking sensitive information.

The Mechanics of Indirect Prompt Injection (IDPI) and Cross-Domain Attacks

Researchers at PromptArmor demonstrated that OpenClaw's link preview feature in messaging apps like Telegram and Discord could be exploited for data exfiltration. By manipulating the AI into generating attacker-controlled URLs, sensitive data could be transmitted without user interaction. This method, known as indirect prompt injection (IDPI) or cross-domain prompt injection (XPIA), leverages benign AI features like web summarization to execute malicious commands.

"AI agents are increasingly able to browse the web, retrieve information, and take actions on a user's behalf. Those capabilities are useful, but they also create new ways for attackers to try to manipulate the system." — OpenAI Blog Post

Broader Implications of OpenClaw AI Security Risks

The vulnerabilities in OpenClaw AI extend beyond prompt injection. CNCERT identified three additional risks: the potential for irreversible data deletion due to misinterpreted instructions, the installation of malicious skills from repositories like ClawHub, and the exploitation of recently disclosed security vulnerabilities. These risks are particularly concerning for critical sectors like finance and energy, where breaches could lead to the leakage of trade secrets or system paralysis.

China's Response: Restrictions and Mitigation Strategies

In response to the security risks, Chinese authorities have banned OpenClaw AI apps on government and state-run enterprise systems. CNCERT recommended several mitigation strategies, including strengthening network controls, isolating OpenClaw in containers, avoiding plaintext credential storage, and disabling automatic updates for skills. Users are also advised to download skills only from trusted sources and keep the agent updated.

Malicious Campaigns Exploiting OpenClaw's Popularity

The viral popularity of OpenClaw has attracted threat actors, who have distributed malicious GitHub repositories posing as OpenClaw installers. These repositories, which appeared as top-rated suggestions in Bing’s AI search results, deployed information stealers like Atomic and Vidar Stealer, as well as a Golang-based proxy malware called GhostSocks. The campaign targeted users across industries, exploiting the trust placed in GitHub-hosted repositories.

  • OpenClaw AI's weak default security configurations enable prompt injection and data exfiltration.
  • China has restricted OpenClaw AI use in government systems due to cybersecurity risks.
  • Malicious actors exploit OpenClaw's popularity to distribute malware through fake GitHub repositories.

Frequently Asked Questions About OpenClaw AI Security Risks

Frequently Asked Questions

What is prompt injection in the context of OpenClaw AI?
Prompt injection occurs when malicious instructions embedded in web content trick the AI into executing unauthorized actions, such as leaking sensitive information. This can happen through indirect methods like link previews in messaging apps.
Why did China restrict OpenClaw AI in government systems?
China restricted OpenClaw AI due to concerns over its weak security configurations, which could enable prompt injection attacks and data exfiltration, posing significant risks to government and state-run enterprises.
How can users mitigate the risks associated with OpenClaw AI?
Users can mitigate risks by strengthening network controls, isolating OpenClaw in containers, avoiding plaintext credential storage, and downloading skills only from trusted sources. Keeping the agent updated is also crucial.
RK
Robert Kingsley

Business Editor

Robert Kingsley reports on markets, corporate news, and economic trends for the Journal American. With an MBA from Wharton and 15 years covering Wall Street, he brings deep expertise in financial markets and corporate strategy. His reporting on mergers and market movements is followed by investors nationwide.

Related Stories