OpenAI announced: If GPT-5 is too dangerous, the board of directors has the right to veto Altman's decisions

OpenAI recently made a major announcement: the company's board of directors now has the power to veto Sam Altman's decisions, especially those concerning GPT-5. This follows last month's internal turmoil, in which President Greg Brockman and former Chief Scientist Ilya Sutskever left the board of directors. Company management and the board are now fully separated: management makes decisions, while the board retains the right to overturn them.


Under the new safety framework, the company has set up a dedicated safety advisory team that reports to management and the board of directors monthly, ensuring that decision makers are fully informed about any misuse of deployed models such as ChatGPT. In addition, the company has placed a series of restrictions on its own technology development: a model's safety score must meet the required standard before work proceeds to the next development stage. It has also released its "Frontier Risk Prevention Framework".

To deal more comprehensively with AI risks on different time scales, OpenAI has established three safety teams, responsible for present, near-future, and far-future risks respectively. These teams cover four major risk categories: cybersecurity, CBRN (chemical, biological, radiological, and nuclear) risks, persuasion risks, and model autonomy risks. For frontier models under development, the company will track and evaluate safety risks in these four areas and grade each as "low", "medium", "high", or "critical" on a "scorecard."

Notably, the company will also conduct regular safety drills to stress-test both its business and its internal culture, and will invite third-party red teams to carry out independent assessments of its models. These measures are intended to ensure model safety and to apply the appropriate mitigations identified during risk assessment.

Finally, OpenAI revealed that it has launched new research to measure how risks evolve as models scale, in an attempt to address "unknown unknowns." This suggests the company is taking the potentially catastrophic risks of AI seriously and is committed to anticipating and preventing problems before they arise.
