OpenAI pledges to give US AI Safety Institute early access to next AI model

Recently, OpenAI CEO Sam Altman revealed that OpenAI is working with the U.S. AI Safety Institute — a federal government body aimed at assessing and addressing risks in AI platforms — to provide it with early access to its next major generative AI model for safety testing.


The announcement, which Altman made Thursday evening in a post on X, was light on details. But the move, along with a similar agreement reached in June with the U.K.’s AI safety body, appears intended to counter the notion that OpenAI has relegated its AI safety work to the back burner as it pursues more powerful generative AI techniques.

In May, OpenAI effectively disbanded a division dedicated to developing controls to prevent “superintelligent” AI systems from getting out of control. Reports indicate that OpenAI sidelined the team’s safety research in favor of launching new products, which ultimately led to the resignation of the team’s two co-leaders: Jan Leike (who now leads safety research at AI startup Anthropic) and OpenAI co-founder Ilya Sutskever (who founded his own safety-focused AI company, Safe Superintelligence Inc.).

Faced with growing criticism, OpenAI said it would remove the nondisparagement clauses that restricted employees from speaking out, set up a safety committee, and dedicate 20% of its computing resources to safety research. (The disbanded safety team had been promised 20% of OpenAI’s computing resources for its work, but ultimately never received them.) Altman recommitted to the 20% pledge and confirmed in May that OpenAI had abolished the nondisparagement clause for new and existing employees.

However, the moves have failed to quell skepticism from some observers, especially after OpenAI staffed its safety committee entirely with company insiders and recently reassigned a top AI safety executive to another department.

Five senators, including Hawaii Democrat Brian Schatz, questioned OpenAI’s policies in a recent letter to Altman. OpenAI Chief Strategy Officer Jason Kwon responded today to the letter, saying OpenAI is “committed to implementing rigorous safety protocols at every stage of our process.”

The timing of OpenAI’s agreement with the U.S. AI Safety Institute seems a bit suspicious, given OpenAI’s endorsement of the Future of AI Innovation Act earlier this week. If passed, the bill would authorize the safety institute, as an administrative agency, to set standards and guidelines for AI models. This series of moves could be seen as an attempt at regulatory capture, or at least an effort by OpenAI to exert influence over AI policymaking at the federal level.

It is worth mentioning that Altman is a member of the U.S. Department of Homeland Security's Artificial Intelligence Safety and Security Commission, which provides advice on "safe and reliable development and deployment of AI" in critical U.S. infrastructure. Moreover, OpenAI's spending on federal lobbying has increased significantly this year, spending $800,000 in the first six months of 2024, compared to only $260,000 for the whole of 2023.

The U.S. AI Safety Institute is part of the Commerce Department’s National Institute of Standards and Technology, and it works with a coalition of companies including Anthropic as well as major tech companies such as Google, Microsoft, Meta, Apple, Amazon and Nvidia.
