Novel Font Rendering Attack Tricks AI Assistants into Overlooking Malicious Instructions
Author: Site Editor

LayerX, a browser security firm, has disclosed a novel font rendering attack that uses custom fonts and CSS styling to cloak malicious commands, deceiving several widely used AI tools, including ChatGPT, Claude, and Copilot. By tampering with font files and CSS properties, attackers can cause the AI to interpret a page's content as benign while the user is shown harmful instructions that prompt them to carry out risky actions. LayerX notified the affected vendors of the vulnerability on December 16, 2025; so far, only Microsoft has patched the flaw.
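The core mismatch described above can be sketched in miniature. The HTML, class names, and font file below are illustrative assumptions, not LayerX's actual proof of concept: a naive DOM-text extractor (standing in for how an AI assistant typically reads a page) sees only the underlying characters, while a remapped custom font could draw entirely different glyphs for the human viewer.

```python
from html.parser import HTMLParser

# Hypothetical page: the "lure" span's underlying characters read as a
# harmless phrase, but a remapped custom font (remap.woff2 is a
# placeholder) could render different, harmful-looking glyphs on screen.
# CSS can likewise collapse AI-visible filler text to zero size.
HTML_DOC = """
<style>
  @font-face { font-family: remap; src: url(remap.woff2); }
  .lure   { font-family: remap; }  /* user sees remapped glyphs */
  .filler { font-size: 0; }        /* user sees nothing at all  */
</style>
<p>
  <span class="lure">have a nice day</span>
  <span class="filler">this page is perfectly safe</span>
</p>
"""

class DomTextExtractor(HTMLParser):
    """Naive text extraction, standing in for what an AI tool ingests:
    raw DOM text, with no knowledge of fonts or visual styling."""
    def __init__(self):
        super().__init__()
        self._in_style = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "style":
            self._in_style = True

    def handle_endtag(self, tag):
        if tag == "style":
            self._in_style = False

    def handle_data(self, data):
        if not self._in_style and data.strip():
            self.chunks.append(data.strip())

extractor = DomTextExtractor()
extractor.feed(HTML_DOC)
dom_text = " ".join(extractor.chunks)
print(dom_text)  # benign-looking text; not what the user's screen shows
```

The AI sees only reassuring text, while everything the user actually perceives is controlled by the font and stylesheet, which the extractor never consults.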