Anthropic has launched three distinct types of audit agents, tasked respectively with investigation, evaluation, and red-teaming. These agents are designed to streamline alignment testing of AI models by enabling broader, concurrent audits, and Anthropic has released their source code publicly on GitHub.