OpenAI recently announced a major governance change: the company's board of directors now has the power to veto CEO Sam Altman's decisions, especially those concerning GPT-5. The change follows last month's internal turmoil, in which President Greg Brockman and former Chief Scientist Ilya Sutskever left the board. Management and the board are now fully separated: management makes day-to-day decisions, while the board retains the right to overturn them.
Under the new safety framework, the company has set up a dedicated safety advisory team that reports monthly to both management and the board, ensuring that decision-makers stay fully informed about misuse of existing models such as ChatGPT. The company has also placed a series of restrictions on its own technology development: a model's safety score must meet the required bar before work can proceed to the next development stage. These measures are set out in the newly released "Preparedness Framework".
To address AI risks across different time horizons more comprehensively, OpenAI has established three safety teams, responsible respectively for risks in the present, the near future, and the distant future. Together they cover four tracked risk categories: cybersecurity, CBRN (chemical, biological, radiological, and nuclear) risks, persuasion, and model autonomy. For frontier models under development, the company will track and evaluate risk in each of these four areas and grade it as "low", "medium", "high", or "critical" on a "scorecard".
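The gating logic described above can be sketched in a few lines. This is a minimal illustration, not OpenAI's actual implementation: the category names follow the article, while the thresholds (deploy only at "medium" or below, continue development only at "high" or below) and all function names are assumptions for the sake of the example.

```python
from enum import IntEnum

class Risk(IntEnum):
    """Scorecard grades, ordered from least to most severe."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

def overall_risk(scorecard: dict[str, Risk]) -> Risk:
    # The overall grade is driven by the worst-rated category.
    return max(scorecard.values())

def can_deploy(scorecard: dict[str, Risk]) -> bool:
    # Assumed threshold: deployment requires "medium" or below overall.
    return overall_risk(scorecard) <= Risk.MEDIUM

def can_continue_development(scorecard: dict[str, Risk]) -> bool:
    # Assumed threshold: further development requires "high" or below.
    return overall_risk(scorecard) <= Risk.HIGH

# Hypothetical scorecard for a model under evaluation.
scores = {
    "cybersecurity": Risk.LOW,
    "cbrn": Risk.MEDIUM,
    "persuasion": Risk.HIGH,
    "model_autonomy": Risk.LOW,
}
```

With these assumed thresholds, the example model above could continue development (worst category is "high") but could not be deployed until the "persuasion" risk is mitigated down to "medium" or below.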
Notably, the company will also run regular safety drills to stress-test both the business and its own culture, and will invite third parties to form red teams that assess its models independently. These measures are intended to ensure that risks are assessed accurately and that appropriate mitigations are put in place.
Finally, OpenAI revealed that it has launched new research to measure how risks evolve as models scale, in an effort to get ahead of "unknown unknowns". This suggests the company is taking the potentially catastrophic risks of AI seriously and is committed to anticipating and preventing problems before they arise.