
The AI Act is here – These are the risk classes you should know

  • Expert Fabian Müller
  • Date August 5, 2024
  • Topic Artificial Intelligence, Human-centered AI, Strategy
  • Format Blog
  • Category Management

At the beginning of August, the AI Act of the European Union came into effect. The regulation aims to create a unified legal framework that governs the development and use of AI technologies in the EU. The world’s first comprehensive AI law is intended to ensure that AI systems are used safely in the EU and that risks are minimized. This brings extensive obligations for companies that develop and operate high-risk AI systems. We have compiled the most important information about the AI Act.

Legislation with a global impact

A unique feature of the law is the so-called market location principle: all companies that offer or operate AI systems within the EU, or whose AI-generated output is used within the EU, are subject to the AI Act, regardless of their own location.

Artificial intelligence is defined as machine-based systems that can autonomously make predictions, recommendations, or decisions and thereby influence physical and virtual environments. This applies, for example, to AI solutions that support the recruitment process, predictive maintenance solutions, and chatbots such as ChatGPT. The legal requirements that different AI systems must fulfill vary greatly depending on their risk class.

Excluded from the regulation are AI systems developed for research or military purposes, made available as open-source systems, or used by authorities in law enforcement or the judiciary. Additionally, the use of AI systems for purely private purposes is exempt from the law.

The risk class determines the legal requirements

At the core of the law is the classification of AI systems into four risk categories. The higher the risk category, the stricter the legal requirements that must be met.

The risk categories include:

  • low,
  • limited,
  • high,
  • and unacceptable risk.

These classes reflect the extent to which artificial intelligence jeopardizes European values and fundamental rights. AI systems that belong to the “unacceptable risk” category are prohibited by the AI Act.

Particularly comprehensive requirements apply to high-risk systems. These are divided into obligations for providers (who develop and supply the system), deployers (who use it), distributors, and importers.

We will explain which AI systems fall into which risk category and the associated requirements below.
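
To make the tiering concrete, here is a minimal sketch of how an internal AI inventory might record these classes; the enum and the example systems are our own illustrative assumptions, not terminology or assignments mandated by the AI Act.

    from enum import Enum

    class RiskClass(Enum):
        """The four risk tiers of the AI Act, ordered by severity."""
        LOW = 1           # e.g., spam filters: no specific obligations
        LIMITED = 2       # e.g., chatbots: transparency obligations
        HIGH = 3          # e.g., CV screening: extensive obligations
        UNACCEPTABLE = 4  # e.g., social scoring: prohibited in the EU

    # Hypothetical inventory mapping internal systems to risk classes.
    inventory = {
        "spam_filter": RiskClass.LOW,
        "support_chatbot": RiskClass.LIMITED,
        "cv_screening_model": RiskClass.HIGH,
    }

    # Systems that trigger the comprehensive high-risk obligations.
    high_risk = [name for name, rc in inventory.items()
                 if rc is RiskClass.HIGH]
    print(high_risk)  # ['cv_screening_model']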

Ban on social scoring and biometric remote identification

Some AI systems have a significant potential to violate human rights and fundamental principles, which is why they are categorized as “unacceptable risk”. These include:

  • Real-time remote biometric identification systems in publicly accessible spaces (exception: law enforcement authorities may use them to prosecute serious crimes);
  • Retrospective remote biometric identification systems (exception: law enforcement authorities may use them to prosecute serious crimes);
  • Biometric categorization systems that use sensitive characteristics such as gender, ethnicity or religion;
  • Predictive policing based on so-called profiling (i.e., profiles built on skin color, suspected religious affiliation, and similarly sensitive characteristics), geographical location, or previous criminal behavior;
  • Systems for emotion recognition in the workplace and educational institutions, except for medical and safety reasons;
  • Arbitrary extraction of biometric data from social media or video surveillance footage to create facial recognition databases;
  • Social scoring leading to disadvantage in social contexts;
  • AI systems that exploit the vulnerabilities of a specific group of people due to their age, disability, or a particular social or economic situation, which can lead to behaviors causing physical or psychological harm;
  • AI systems that use manipulative, deceptive, and subliminal techniques to maliciously influence decisions.

The AI Act bans these systems from the European market as of February 2025.

Numerous requirements for AI with risks to health, safety and fundamental rights

The “high risk” category includes all AI systems that are not explicitly prohibited but nevertheless pose a high risk to health, safety or fundamental rights. The following areas of application and use are explicitly mentioned:

  • Biometric and biometric-based systems that do not fall into the “unacceptable risk” class;
  • Management and operation of critical infrastructure;
  • Education and training;
  • Access and entitlement to basic private and public services and benefits;
  • Employment, human resource management and access to self-employment;
  • Law enforcement;
  • Migration, asylum and border control;
  • Administration of justice and democratic processes.

However, an exception applies to these systems if either the system is intended to improve or correct the outcome of a previously completed human activity, or if it is designed to perform a very narrowly defined procedural task. This justification must be documented and made available to authorities upon request.

AI systems that fall under the EU’s product safety regulations listed in Annex I of the AI Act are also considered high-risk systems. This includes, for example, AI systems used as safety components in aviation, toys, medical devices, or elevators.

Providers of high-risk AI systems are subject to comprehensive legal requirements that must be implemented before commissioning and adhered to throughout the entire AI lifecycle (a simple tracking sketch follows this list):

  • Assessment of risks and effects on fundamental and human rights
  • Quality and risk management
  • Data governance structures
  • Quality requirements for training, test and validation data
  • Technical documentation and record-keeping obligations
  • Fulfillment of transparency and provision obligations
  • Human oversight, robustness, cybersecurity, and accuracy
  • Declaration of conformity incl. CE marking obligation
  • Registration in an EU-wide database
  • Instructions for use for downstream deployers
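
Teams that want to track these obligations internally can keep them in a machine-readable checklist. The sketch below is our own illustration; the field names are assumptions rather than terms from the AI Act.

    from dataclasses import dataclass

    @dataclass
    class ProviderObligations:
        """Illustrative tracker for the high-risk provider duties above."""
        impact_assessment: bool = False        # fundamental/human rights
        risk_management: bool = False          # quality & risk management
        data_governance: bool = False          # incl. data quality checks
        technical_documentation: bool = False  # incl. record keeping
        transparency: bool = False             # provision obligations
        human_oversight: bool = False          # robustness, security, accuracy
        conformity_ce_marking: bool = False    # declaration of conformity
        eu_database_registration: bool = False
        deployer_instructions: bool = False

        def open_items(self) -> list[str]:
            """Names of obligations not yet fulfilled."""
            return [name for name, done in vars(self).items() if not done]

    status = ProviderObligations(technical_documentation=True)
    print(status.open_items())  # everything still open except documentation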

In contrast to providers, who develop and market AI systems, deployers are generally operators who use third-party systems commercially. Deployers are subject to less stringent regulations than providers: They must use the high-risk AI system according to the provided instructions, carefully monitor input data, oversee the system’s operation, and keep logs.

Importers and distributors of high-risk AI systems must ensure that the provider has fulfilled all measures required by the AI Act, and must recall the system if necessary. It is also important to note that any deployer, distributor, or importer is considered a provider under the AI Act if it markets or operates the system under its own name or brand, or if it makes significant changes to the system.

AI with limited risk must comply with transparency obligations

AI systems that interact directly with humans fall into the “limited risk” category. This includes emotion recognition systems, biometric categorization systems, as well as AI-generated or altered audio, image, video, or text content. For these systems, which include, for example, chatbots, the AI Act mandates that consumers be informed about the use of artificial intelligence and that AI-generated output be declared as such.
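
What such a disclosure can look like in practice is sketched below; the function and the wording of the notice are our own illustration, not a format prescribed by the AI Act.

    def with_ai_disclosure(generated_text: str) -> str:
        """Prepend a plain-language notice marking content as AI-generated."""
        notice = "[This content was generated by an AI system.]"
        return f"{notice}\n{generated_text}"

    print(with_ai_disclosure("Hello! How can I help you today?"))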

No legal requirements for AI with low risk – but AI education is mandatory for everyone

Many AI systems, such as predictive maintenance or spam filters, fall into the “low risk” category. These systems are not subject to specific regulations under the AI Act.

For all providers and deployers of AI systems, regardless of their risk category, the EU dedicates an entire article to the promotion of AI competencies: Article 4 mandates regular AI training and further education for individuals who come into contact with AI systems.

GPAI models are regulated separately

The rules for General Purpose AI (GPAI) models, which can perform a wide range of different tasks, were included in the AI Act in response to the emergence of models such as GPT-4. They concern the developers of these models, such as OpenAI, Google, or Meta. Depending on whether a model is classified as posing a “systemic risk” and whether it is released as open source and freely accessible, developers face obligations of varying stringency. Large models trained with more than 10^25 FLOP of compute are classified as a “systemic risk” and must therefore meet numerous strict requirements, such as technical documentation and risk evaluations.
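
For a rough sense of where this threshold lies, one can use the common rule of thumb that training a dense transformer costs about 6 × parameters × training tokens FLOP. This heuristic and the example numbers below are our own assumptions, not figures from the AI Act.

    # Back-of-envelope check against the AI Act's systemic-risk threshold,
    # using the common "FLOPs ~ 6 * parameters * tokens" heuristic for
    # dense transformers (an approximation, not an AI Act formula).
    SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOP threshold named in the AI Act

    def training_flops(n_params: float, n_tokens: float) -> float:
        """Approximate total training compute of a dense transformer."""
        return 6.0 * n_params * n_tokens

    # Hypothetical model: 70B parameters trained on 15T tokens.
    flops = training_flops(70e9, 15e12)
    print(f"{flops:.1e} FLOP")              # 6.3e+24
    print(flops > SYSTEMIC_RISK_THRESHOLD)  # False: just below the threshold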

These rules also apply to the latest generation of AI models, including GPT-4o, Llama 3.1, or Claude 3.5.

Companies should start preparing for the AI Act now

Companies now have up to three years to comply with the EU rules. However, the ban on systems with unacceptable risk and the AI education obligation take effect after just six months. To ensure that the processes and AI systems in your company are compliant, the first step is to determine the risk class of each individual system. If you are not yet sure which risk classes your AI systems fall into, we recommend our free AI Act Quick Check, which helps you make that assessment. If you have any further questions about the AI Act, please feel free to contact us at any time.


Read next …

  • How the AI Act will change the AI industry: Everything you need to know about it now
  • Unlocking the Black Box – 3 Explainable AI Methods to Prepare for the AI Act

… and explore new

  • How to Get Your Data Science Project Ready for the Cloud
  • How to Scan Your Code and Dependencies in Python
