Last year, industry players including Microsoft, Google, Meta, OpenAI, Amazon, and Anthropic formed the Frontier Model Forum to promote safer and more responsible AI development. Earlier, Google, Amazon, Anthropic, Cisco, IBM, Intel, Microsoft, NVIDIA, OpenAI, PayPal, Wiz, Chainguard, GenLab, and Cohere formed the Artificial Intelligence Security Alliance.CoSAI, thereby improving the potential risks of artificial intelligence.
CoSAI builds on the AI Security Framework (SAIF), a framework Google proposed last year. It uses open-source, standardized frameworks and tools to help developers build more secure AI models while preventing them from being misused and abused, including malicious data that could affect model training accuracy or prompt injection that could affect model output.
In its initial phase, CoSAI will establish security specifications for the AI system software supply chain, fundamentally ensuring that AI systems are built with secure and reliable software and data, enabling faster and earlier detection of problems. It will also establish safeguards to identify and respond to emerging AI security threats.
CoSAI will also define specifications for developing AI systems and ensuring their safe use. It also plans to implement an evaluation system to allow developers to determine the security of AI systems. CoSAI will also facilitate and manage project development through a project management committee, while a technical steering committee composed of AI experts from academia and industry will provide oversight.



