On March 19 (local time), Meta announced a major overhaul of its content moderation approach, shifting away from external contractors toward advanced artificial intelligence (AI) systems for content review. Meta has long relied on a sprawling network of third-party content moderators, based primarily in countries such as the Philippines and India, to flag hate speech, misinformation, and other inappropriate content. The company cited advances in AI as the key catalyst for the pivot, saying the new system identifies violations more accurately, reduces over-enforcement errors, and responds more quickly to emerging events.
Over the coming years, Meta plans to progressively roll out the AI system across all its platforms, contingent on validation that the system consistently outperforms existing methods. In the meantime, Meta emphasized that human oversight will remain an integral part of the moderation process and will not be fully replaced by AI.
