As large-scale AI models such as DeepSeek-R1, GPT-5, and Tongyi Wanxiang continue to gain popularity, artificial intelligence-generated content (AIGC) technology has drawn considerable attention. While AIGC is propelling the evolution of content creation and cultural dissemination, it also creates conditions for new threats to societal security to emerge and proliferate, including misinformation, the misuse of synthetic media, and deepfakes. Consequently, there is an urgent need to develop cross-modal perception and fine-grained forgery detection techniques for AI-generated content to address the challenges posed by complex, multi-layered forgeries.
