EU AI law now in effect: AI applications are classified into different risk levels

August 1, 2024 | European Union | Artificial Intelligence Act

The EU's AI governance law officially came into effect, marking the start of a new regulatory era. This landmark regulation sets clear standards and compliance requirements for AI applications within the EU. Its entry into force is a major step in the EU's approach to AI governance and reflects the bloc's risk-based model of regulation.

The Act sets staggered compliance deadlines for different types of AI developers and applications. While most provisions will not fully apply until mid-2026, some key provisions take effect just six months after entry into force. These include a ban on certain AI practices, such as the use of real-time remote biometric identification by law enforcement in public spaces.



Under the law's risk-based approach, AI applications are sorted into different risk levels. Most everyday applications fall into the minimal- or no-risk category and are therefore largely outside the regulation's scope. Applications that could cause harm to individuals or the public, however, such as biometrics, facial recognition, and AI-based medical software, are classified as high risk. Companies developing these high-risk AI systems must ensure that their products meet strict risk- and quality-management standards, including undergoing conformity assessments and, potentially, audits by regulators.

In addition, technologies classified as "limited risk", such as chatbots, must meet transparency requirements so that users know they are interacting with an AI system, guarding against misleading or fraudulent use.
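To summarize the tiered scheme described above, here is a minimal sketch in Python; the tier names follow the article, while the example applications and one-line obligations are illustrative simplifications rather than legal definitions.

```python
# Minimal sketch of the AI Act's risk tiers as described in this article.
# Example applications and one-line obligations are illustrative
# simplifications, not legal definitions.

RISK_TIERS = {
    "unacceptable": {
        "examples": ["real-time remote biometric ID by police in public spaces"],
        "obligation": "banned outright",
    },
    "high": {
        "examples": ["biometrics", "facial recognition", "AI-based medical software"],
        "obligation": "risk/quality management, conformity assessment, possible audits",
    },
    "limited": {
        "examples": ["chatbots"],
        "obligation": "transparency: users must know they are interacting with AI",
    },
    "minimal": {
        "examples": ["most everyday applications"],
        "obligation": "largely outside the regulation's scope",
    },
}

def obligation_for(tier: str) -> str:
    """Return the headline obligation for a given risk tier."""
    return RISK_TIERS[tier]["obligation"]

print(obligation_for("high"))
# -> risk/quality management, conformity assessment, possible audits
```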

The AI Act also introduces a graduated penalty system. Companies that violate the ban on prohibited AI practices face the most severe penalties, with fines of up to 7% of global annual turnover. Other violations, such as failing to meet risk-management obligations or supplying incorrect information to regulators, carry fines of up to 3% and 1.5% of global annual turnover, respectively.
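As a rough illustration of how these percentage caps scale, here is a minimal Python sketch; the 7%, 3%, and 1.5% figures come from the article, while the company turnover is hypothetical and the Act's parallel fixed-amount caps are ignored.

```python
# Illustrative sketch of the AI Act's percentage-based fine caps.
# The tier percentages (7%, 3%, 1.5%) come from the article; the turnover
# figure below is hypothetical, and the Act's fixed-amount caps are ignored.

FINE_CAPS = {
    "prohibited_practice": 0.07,      # violating the ban: up to 7% of turnover
    "risk_management_failure": 0.03,  # unmet risk-management obligations
    "incorrect_information": 0.015,   # supplying wrong information to regulators
}

def max_fine(global_annual_turnover_eur: float, violation: str) -> float:
    """Return the percentage-based cap on a fine for one violation tier."""
    return global_annual_turnover_eur * FINE_CAPS[violation]

# A hypothetical company with EUR 10 billion in global annual turnover:
turnover = 10_000_000_000
for tier in FINE_CAPS:
    print(f"{tier}: up to EUR {max_fine(turnover, tier):,.0f}")
# prohibited_practice: up to EUR 700,000,000, and so on down the tiers.
```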

The EU has also drawn up special rules for general-purpose AI (GPAI). Most GPAI developers face relatively light transparency obligations, including providing summaries of their training data and complying with EU copyright rules. Only the most powerful GPAI models, those classified as posing potential systemic risks, must additionally carry out risk assessment and mitigation measures.

With the AI Act now in force, the EU's AI ecosystem has entered a new chapter. Developers, businesses, and the public sector have a clear compliance roadmap, one intended to promote innovation in the AI industry while ensuring that applications meet ethical and safety standards.

However, challenges remain. Some specific rules, particularly the requirements for high-risk AI systems, are still being drafted. European standards bodies are actively involved in this process and are expected to finalize the relevant standards by April 2025.

 
