OpenAI announced earlier that it would launch a Safety Evaluations Hub to share information such as how often its artificial intelligence models produce hallucinations, and to explain whether its models generate harmful content or are likely to produce illegal information.
OpenAI stated that it is providing this webpage to improve the transparency of its services and to explain the safety protections in place when its models are running. It will also continue to update the hub as its safety measures for artificial intelligence models evolve.
At the same time, OpenAI said it hopes this approach will make it easier for outsiders to understand its investment in the safety of its artificial intelligence models, and that it will adjust the hub over time to keep its services safe in use. It also emphasized that the hub is meant to improve the efficiency of the company's communication with the outside world.
However, some observers believe that OpenAI's approach may still allow certain problems or concerns to be concealed. After all, the content published on the site is self-reported by the company and may not necessarily reflect the real situation.
