OpenAI and Anthropic agree to submit new models to the US government for safety assessment before launching

AI companies OpenAI and Anthropic have agreed to give the US government access to major new AI models before release to help improve their safety.


The US AI Safety Institute announced on Thursday that the two companies have signed memoranda of understanding with the institute, committing to provide access to their models both before and after public release. The US government said the arrangement will help the parties jointly assess safety risks and mitigate potential issues. The agency added that it will work with its UK counterpart to provide feedback on safety improvements.

Jason Kwon, Chief Strategy Officer at OpenAI, expressed support for the collaboration:

  • “We strongly support the mission of the US AI Safety Institute and look forward to working together to develop safety best practices and standards for AI models. We believe the institute has a key role to play in ensuring American leadership in the responsible development of AI, and we hope that our work together offers a framework the rest of the world can build on.”

Anthropic likewise said it is important to build the capacity to effectively test AI models. Jack Clark, the company’s co-founder and head of policy, said:

  • “Ensuring AI is safe and trustworthy is critical to the technology’s positive impact. Through testing and collaboration like this, we can better identify and mitigate the risks AI poses and advance responsible AI development. We are proud to be part of this important work and hope to set a new standard for the safety and trustworthiness of AI.”

Sharing access to AI models is a significant step as federal and state legislatures weigh how to place limits on the technology without stifling innovation. On Wednesday, California lawmakers passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which requires AI companies in California to take specific safety measures before training advanced foundation models. The bill has drawn opposition from AI companies including OpenAI and Anthropic, which warned that it could hurt smaller open-source developers; although it has since been amended, it still awaits the signature of California Governor Gavin Newsom.

Meanwhile, the White House has been working to secure voluntary commitments from major companies on AI safety measures. Several leading AI companies have made non-binding commitments to invest in cybersecurity and discrimination research and to work on watermarking AI-generated content.

Elizabeth Kelly, director of the AI Safety Institute, said in a statement that the new agreements are "just the beginning, but they are an important milestone in our efforts to help responsibly govern the future of AI."
