The results of the first public testing of AI large models in China reveal 281 security vulnerabilities
Author: Site Editor

The results of the first public testing of AI large models in China were announced at the 22nd China Cybersecurity Annual Conference. Guided by the Cyberspace Administration of China and hosted by the National Computer Emergency Response Team, the event saw 559 white-hat hackers conduct security vulnerability tests on 15 AI large models and application products. The tests uncovered 281 security vulnerabilities, of which 177 were unique to large models, accounting for over 60% of the total. These vulnerabilities exposed risks such as improper output, information leakage, and prompt injection, while traditional security vulnerabilities were also prevalent. Mainstream large model products such as Tencent's Hunyuan, Baidu's ERNIE Bot, and Alibaba's Tongyi App had relatively few vulnerabilities. Finally, the authorities put forward four requirements for the security governance of AI large models.