Recently, a group of US senators sent a critical letter to OpenAI CEO Sam Altman, requiring him to disclose detailed information about the company's safety measures and working conditions by August 13, 2024. The request comes after media reports revealed several potential safety risks at OpenAI, including the departure of multiple AI safety researchers, security vulnerabilities, and employee concerns.
The letter was initiated in response to revelations from former employees who harshly criticized OpenAI's safety measures in AI development. OpenAI's new AI model GPT-4o reportedly completed safety testing in just one week, and this accelerated testing schedule has raised concerns among security experts. The new model was also shown to be capable of generating malicious content, such as instructions for making bombs, from simple prompts.
In the letter, the senators emphasized that the public needs to be able to trust OpenAI to develop its systems safely. This trust depends on the integrity of corporate governance, the rigor of security testing, the fairness of hiring practices, adherence to public commitments, and the enforcement of cybersecurity policies. They pointed out that the security commitments OpenAI made to the Biden administration must be fulfilled in concrete terms.
In response to the senators' requests, OpenAI has released several statements through social media platforms, mentioning the recently formed Safety and Security Committee, the latest progress toward Level 5 AGI, the Preparedness Framework, and the revision of its much-criticized employee contracts. Through these measures, OpenAI hopes to demonstrate its improvements in safety and governance.