Just a year after drastically scaling back its proactive content moderation and third-party fact-checking, Meta has announced a decision that will profoundly reshape the social media ecosystem: over the coming years, it will sharply reduce the outsourced human staff responsible for content moderation and hand the bulk of that workload to AI systems built on large language models. The company says this will not only improve the speed and accuracy of blocking inappropriate content but also significantly expand language coverage; human staff will take a backseat, focusing on training the AI and on the highest-risk, most critical decisions.
AI-powered review team launched: covers languages used by 98% of internet users and boasts a "lower false positive rate".
Meta has been testing a large language model system for content moderation for some time and is very satisfied with the early test results.
According to official statements, the full implementation of the AI review system brings two significant advantages:
• Soaring language coverage: Human review teams currently support only about 80 languages, but the AI systems can handle the languages used by up to 98% of the world's internet population, which is crucial for combating harmful content in cross-border or less common languages.
• Improved accuracy (per Meta's claims): Although many users complain that the current system frequently deletes posts by mistake, Meta says the next-generation AI will not only detect more of the "most serious" violations but also reduce the misjudgments of "over-enforcement".
Of course, in addition to efficiency, significantly reducing the number of outsourced reviewers worldwide by thousands will also save Meta a considerable amount of operating costs.
Humans take a backseat: focusing on "high-risk" and "regulatory" decision-making.
Meta did not specify how many outsourced reviewers would be laid off during this transformation, but the company clearly defined the new role of human employees in the review mechanism.
In the future, human experts will focus on designing, training, and supervising AI systems, and measuring their performance. More importantly, humans will retain key decision-making power in the most complex and risky events. For example:
• Final review of appeals after a user's account has been suspended.
• Notifications and follow-up actions involving law enforcement agencies (such as police and courts).
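The division of labor described above amounts to a triage policy: the AI handles the clear-cut cases, and anything involving appeals, law enforcement, or low model confidence is escalated to a human. A minimal illustrative sketch of such a routing rule (all names, labels, and thresholds here are hypothetical, not Meta's actual system):

```python
from dataclasses import dataclass

@dataclass
class ModerationResult:
    label: str                     # hypothetical model output, e.g. "ok", "spam"
    confidence: float              # model confidence in [0, 1]
    law_enforcement: bool = False  # case involves police or courts

def route(result: ModerationResult, appeal: bool = False) -> str:
    """Decide who handles a moderation case (illustrative policy only)."""
    if result.law_enforcement or appeal:
        return "human"           # humans keep the highest-risk decisions
    if result.label == "ok":
        return "auto_allow"
    if result.confidence >= 0.9:
        return "auto_remove"     # high-confidence violation: AI acts alone
    return "human"               # uncertain cases still escalate

print(route(ModerationResult("hate_speech", 0.95)))      # auto_remove
print(route(ModerationResult("ok", 0.99)))               # auto_allow
print(route(ModerationResult("spam", 0.55)))             # human
print(route(ModerationResult("ok", 0.99), appeal=True))  # human
```

The key design choice, consistent with Meta's stated plan, is that escalation to a human is triggered by the nature of the case (appeal, legal involvement) and not only by model uncertainty.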
Recent update: your Facebook customer service is now also AI.
Aside from backend content moderation, the most noticeable change for general users will be the frontend "customer support".
Meta announced the launch of an AI-powered "Support Assistant" within its Facebook and Instagram apps. This chatbot will assist users with filing reports, managing appeals, resetting passwords, and other account settings issues. The AI assistant will also help with the most troublesome issue of all, locked or stolen accounts (initially being tested with select cases in the United States and Canada).


