The Meta Oversight Board recently issued a decision urging the company to promptly clarify and tighten its content rules to address the rapid spread of AI-generated material and deepfake videos in high-risk contexts such as wars, disasters, and elections. The board noted that Meta falls significantly short in labeling and identifying AI-generated content at scale, a gap that could mislead the public, particularly during conflicts and crises.
