Intel announced that it has joined the MLCommons AI Safety (AIS) working group and will collaborate with a team of industry and academic experts to establish AI safety standards and create guidelines for responsible AI development.
As a founding member, Intel will contribute its expertise to help create a flexible platform for AI safety benchmarks that measure the safety of AI tools and models, as well as their potential risk factors.
As testing matures, the AI safety standards developed by the working group will become an important benchmark for how society views the safety of AI deployment.
Deepak Patil, Intel corporate vice president and general manager of Data Center AI Solutions, said Intel is committed to advancing the responsible development of AI so that everyone can access it equitably and safely. The company approaches safety concerns holistically, he said, while innovating across hardware and software to help the ecosystem build trustworthy AI.
Given the ubiquity of large language models (LLMs), it is crucial to engage the entire ecosystem in addressing the safety issues of AI development and deployment. Patil expressed his willingness to work with the industry to define new processes, methods, and standards that strengthen the increasingly pervasive applications of AI.
To mitigate the societal risks posed by these powerful technologies and to support the responsible training and deployment of large language models and tools, the working group will provide a safety rating system to assess the risks posed by new and rapidly evolving AI technologies.
The working group's initial focus will be developing safety standards for large language models, building on the work of researchers at Stanford University's Center for Research on Foundation Models and its Holistic Evaluation of Language Models (HELM) project. Intel will share with the AIS working group its rigorous, cross-disciplinary review processes for developing AI models and tools, helping to establish a common set of best practices and standards for evaluating the safe development and deployment of generative AI tools that leverage LLMs.