OpenAI says its latest GPT-4o model has a “medium” risk rating

OpenAI recently released the system card for its latest model, GPT-4o: a detailed research document describing the safety measures and risk assessments the company carries out before launching a new model.

GPT-4o was officially launched in May of this year. Before the release, OpenAI hired an external team of security experts to conduct a risk assessment, a fairly common practice known as "red teaming." The testers focused mainly on risks the model might pose, such as generating unauthorized voice clones, obscene or violent content, or reproductions of copyrighted audio.


According to OpenAI's own framework, researchers rated GPT-4o's overall risk as "medium." This rating reflects the highest risk level across four main categories: cybersecurity, biological threats, persuasion, and model autonomy. All categories except persuasion were judged low risk. The researchers found that some writing samples generated by GPT-4o were better at swaying readers' opinions than human-written text, although the model's output was not more persuasive overall.

Lindsay McCallum Rémy, a spokesperson for OpenAI, said the system card includes preparedness evaluations created by an internal team together with external testers listed on OpenAI's website, including Model Evaluation and Threat Research (METR) and Apollo Research, both of which specialize in evaluating AI systems. This is not the first system card OpenAI has published; GPT-4, GPT-4 with vision, and DALL-E 3 underwent similar testing, and the findings were released as well.

But the system card arrives at a critical moment, as OpenAI faces ongoing criticism of its safety standards from its own employees and from lawmakers. Minutes before the GPT-4o system card was released, an open letter co-signed by Massachusetts Senator Elizabeth Warren and Representative Lori Trahan called on OpenAI to answer questions about how it handles whistleblowers and safety reviews. The letter cites numerous safety concerns, including the brief ouster of CEO Sam Altman in 2023 over the board's concerns and the departure of a safety executive who claimed that "safety culture and processes have taken a backseat to shiny products."

Moreover, OpenAI is releasing a powerful multimodal model just ahead of a US presidential election, which carries an obvious risk of misinformation or exploitation by malicious actors. Although OpenAI hopes to prevent abuse through testing against real-world scenarios, public calls for transparency are growing. In California in particular, State Senator Scott Wiener is pushing a bill to regulate large language models that would, among other things, hold companies legally responsible when their AI is used in harmful ways. If the bill passes, OpenAI's frontier models would have to complete the risk assessments required by state law before being released to the public.
