Meta releases Frontier AI Framework: it will pause development of AI systems it deems too risky

February 4 news: Meta CEO Mark Zuckerberg has promised to one day make artificial general intelligence (AGI), i.e., AI capable of performing any task a human can, openly available to the general public. However, according to a newly released Meta policy document, the company may limit the release of its internally developed high-performance AI systems under certain circumstances.


The document, called the Frontier AI Framework, identifies two types of AI systems that Meta considers too risky to release: "high risk" systems and "critical risk" systems. By Meta's definition, both types could facilitate cyber, chemical, or biological attacks; the difference is that "critical risk" systems could lead to "catastrophic consequences that cannot be mitigated in the proposed deployment environment," whereas "high risk" systems may make such attacks easier to carry out but are not as reliable or dependable as "critical risk" systems.

1AI notes that Meta cites specific attack scenarios in the document, such as the "end-to-end automated intrusion into enterprise-class environments protected by best practices" and the "proliferation of high-impact biological weapons." Meta acknowledges that the list of potentially catastrophic events in the document is far from exhaustive, but says these are the ones it considers "most urgent" and most likely to arise as a direct result of releasing a powerful AI system.

Notably, Meta does not rely on a single empirical test when assessing a system's risk; instead, it combines input from internal and external researchers with review by "senior decision-makers." Meta says this is because the current science of evaluation is not yet "robust enough to provide clear quantitative metrics" for determining a system's riskiness.

If Meta determines that a system is "high risk," the company will restrict access to it internally and will not release it until mitigations have been implemented to reduce the risk to a "moderate level." If a system is determined to be a "critical risk," Meta will take unspecified security measures to prevent unauthorized access and will suspend development until the system can be made less dangerous.
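To make the escalation logic above easier to see at a glance, here is a minimal, purely illustrative Python sketch. The tier names mirror the framework's terminology, but the RiskTier enum, the handle_assessment function, and the action strings are hypothetical simplifications made for this article, not anything Meta publishes.

```python
from enum import Enum

class RiskTier(Enum):
    # Tier names mirror the Frontier AI Framework's terminology.
    MODERATE = "moderate"
    HIGH = "high"          # makes attacks easier, but less reliably than critical
    CRITICAL = "critical"  # catastrophic and unmitigable in the proposed deployment

def handle_assessment(tier: RiskTier) -> str:
    """Hypothetical mapping from a risk tier to the responses the document
    describes; the exact procedures Meta would follow are unspecified."""
    if tier is RiskTier.CRITICAL:
        # Unspecified safeguards against unauthorized access; development paused.
        return "secure the system and suspend development"
    if tier is RiskTier.HIGH:
        # Internal-only access until mitigations bring risk down to moderate.
        return "restrict internal access; release only after mitigation"
    return "eligible for release under the framework"

print(handle_assessment(RiskTier.HIGH))
```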

Meta says its Frontier AI Framework will evolve as the AI field changes. The company had previously promised to publish the framework ahead of this month's AI Action Summit in France, and it appears to be a response to criticism of Meta's "open" approach to developing systems: Meta has pursued a strategy of making its AI technology publicly available, though not open source in the usual sense, in contrast to companies such as OpenAI, which have opted to put their systems behind APIs.

In the document, Meta states: "We believe that by weighing both the benefits and the risks of developing and deploying advanced AI, it is possible to deliver the technology to society in a way that preserves its benefits while maintaining an appropriate level of risk."
