The 27th United Nations Science and Technology Conference was held in Geneva, Switzerland. At the conference, the World Digital Technology Academy (WDTA) released a series of results, including two international standards: the "Generative Artificial Intelligence Application Security Testing Standard" and the "Large Language Model Security Testing Method".
According to reports, the two international standards were jointly compiled by experts and scholars from dozens of organizations, including OpenAI, Ant Group, iFlytek, Google, Microsoft, NVIDIA, Baidu, and Tencent. Of the two, the "Large Language Model Security Testing Method" was compiled under the lead of Ant Group.
The standards released this time bring together the expertise of AI security specialists from around the world. They fill a gap in security testing for large language models and generative AI applications, provide the industry with a unified testing framework and clear testing methods, and help improve the security of AI systems, promote the responsible development of AI technology, and enhance public trust.
The World Digital Technology Academy (WDTA) is an international non-governmental organization registered in Geneva. Operating under the guidance framework of the United Nations, it is committed to advancing digital technology and promoting international cooperation worldwide.
The AI STR (Safe, Trustworthy, Responsible) program is a core WDTA initiative that aims to ensure the safety, trustworthiness, and responsible use of artificial intelligence systems. Ant Group, Huawei, iFlytek, the International Data Spaces Association (IDSA), the Fraunhofer Institute, and China Electronics are among its members.
Public information shows that Ant Group has been actively investing in trusted-AI research since 2015 and has now established a comprehensive large-model security governance system. Ant Group has also developed its own integrated large-model security solution, "Yitianjian", which is used for AIGC security and authenticity evaluation, large-model intelligent risk control, AI robustness and explainability testing, and related tasks.
The "Large Language Model Security Testing Method" released this time draws on the practical application of the "Yitianjian" AI security testing system and was compiled through exchanges with global ecosystem partners. In addition, Ant Group has established a science and technology ethics committee and a dedicated internal team to assess and manage the risks of generative AI. All of the company's AI products must pass this science and technology ethics review mechanism to ensure that they are safe and reliable.