Meta earlier announced its Frontier AI Framework. The white paper, titled "Frontier AI Framework," defines two categories of AI systems: high-risk and critical-risk. Systems of both types could potentially be used in cybersecurity attacks or in areas related to chemical and biological weapons attacks. The difference is that critical-risk systems would cause uncontrollable and severe consequences, while high-risk systems, though posing a genuine threat, remain controllable.
Meta stated that its assessment of an AI system's risk level does not rely on any single objective metric; instead, it integrates the opinions of internal and external researchers, with senior management reviewing the findings and making the final decision. Meta also noted that there is currently no quantitative basis for accurately measuring AI system risk, so the assessment of the potential risks posed by current AI systems rests primarily on existing research.
For AI systems classified as high-risk, Meta's current approach is to restrict internal access and withhold external release until adequate safeguards are in place. For systems identified as critical-risk, special security measures are applied to prevent potential harm, and if a security issue arises, all development work is suspended until safety can be assured.
Previously, Meta had promised to build an AI system capable of performing any task or job and to make it available to the public as a general-purpose AI system. This pledge, however, sparked considerable market concern. After all, Meta's Llama large language model is available as open source, has accumulated hundreds of millions of downloads, and has reportedly even been used to develop AI systems for military purposes, potentially posing greater risks.
The release of the Frontier AI Framework white paper is clearly a response to market concerns about Meta's AI development strategy, and it underscores the importance of building security protections into AI systems from the design stage.