Microsoft Open-Sources AI Agent Runtime Security Toolkit
5 days ago
Author: Editor

On April 2nd, Microsoft introduced a new open-source project, the Agent Governance Toolkit, aimed at building a runtime security governance framework for autonomous AI agents. Licensed under the MIT License, the toolkit targets developers and enterprises that want to deploy agent applications in production more securely and controllably. Microsoft claims it is the first toolset to cover all ten agentic AI risks identified by OWASP last year, including goal hijacking, tool misuse, and identity abuse, and that it addresses the systemic security issues that can arise when large models drive agents through complex tasks.

The toolkit supports multiple language ecosystems, including Python, Rust, TypeScript, Go, and .NET, and provides unified governance capabilities through several modules:

- Agent OS: a policy engine that intercepts and evaluates agent behavior against policy before execution;
- Agent Mesh: secure communication between agents;
- Agent Runtime: dynamic isolation of execution environments;
- Agent SRE: runtime security protection and stability assurance;
- Agent Compliance: automated compliance verification and rating;
- Agent Marketplace: plugin lifecycle management and supply-chain risk control;
- Agent Lightning: governance for training scenarios such as reinforcement learning.

The project code is hosted in a GitHub repository, and developers are invited to try it out and provide feedback.
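To make the "intercept and evaluate before execution" role of a policy engine concrete, here is a minimal sketch in plain Python. This is purely illustrative: every class, function, and rule name below is hypothetical and does not reflect the toolkit's actual API, which the article does not describe.

```python
# Illustrative sketch of pre-execution policy interception for agent tool
# calls. All names (ToolCall, PolicyEngine, no_shell) are hypothetical and
# invented for this example; they are not the Agent Governance Toolkit API.

from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class ToolCall:
    tool: str
    args: dict

@dataclass
class PolicyEngine:
    # Each rule inspects a proposed tool call and returns a denial reason,
    # or None to allow the call to proceed.
    rules: list = field(default_factory=list)

    def evaluate(self, call: ToolCall) -> tuple:
        for rule in self.rules:
            reason = rule(call)
            if reason is not None:
                return False, reason  # blocked before the agent executes it
        return True, "allowed"

# Example rule guarding against the "tool misuse" risk: deny shell access.
def no_shell(call: ToolCall) -> Optional[str]:
    return "shell tools are denied by policy" if call.tool == "shell" else None

engine = PolicyEngine(rules=[no_shell])
ok, why = engine.evaluate(ToolCall(tool="shell", args={"cmd": "rm -rf /"}))
# ok is False: the call is rejected before it ever runs.
```

The design point the article's Agent OS module implies is that policy evaluation happens synchronously, in the path between the model's proposed action and its execution, so a denied call never reaches the tool.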