Key points of the world's first legislation regulating artificial intelligence

What do I need to know about the EU AI Act? (Part 2)

In our previous article, we introduced you to the European Union's (EU) Artificial Intelligence (AI) Act, the first law of its kind in the world, which regulates this technology by classifying AI systems according to the level of risk they may pose to citizens' rights and freedoms. This classification is crucial because it determines the obligations that AI developers and users must fulfill. So, since the way the regulation categorizes risk is so important, shall we dive deeper into it?
 

What levels of risk does the law establish?

1. Unacceptable Risk: AI systems considered to be of unacceptable risk are banned because they pose a clear threat to fundamental rights. These include technologies such as mass surveillance without judicial oversight or the use of subliminal manipulation techniques that could cause physical or psychological harm.

Example: A facial recognition system used to continuously monitor citizens' activities in public spaces without their explicit consent and without appropriate legal authorization.
 

2. High Risk: High-risk AI systems have the potential to seriously affect individual rights and freedoms, and must therefore comply with strict requirements for transparency, safety, and supervision. These systems are used in critical sectors such as healthcare, education, security, or the judicial system.

Example: Algorithms used to evaluate loan applications or assess recidivism risk in the judicial system. These systems must undergo independent evaluations to ensure they are non-discriminatory and function correctly.
 

3. Limited Risk: Limited-risk systems have less potential to cause harm but still require some supervision and transparency measures. They must provide clear information to users about their nature and function, as well as ensure that the results are understandable and auditable.

Example: AI applications used in customer service that provide automated responses to frequently asked questions. Although they do not make critical decisions, they must clearly inform users when they are interacting with AI.
 

4. Minimal or Negligible Risk: AI systems that pose minimal or negligible risk have fewer regulatory requirements since their use does not present a significant threat to individuals' rights and freedoms. These systems are often subject to general safety and privacy regulations.

Example: Recommendation algorithms on music or video streaming platforms that suggest content based on users' preferences. Although they are important for the user experience, they do not pose a risk to fundamental rights.


While existing laws offered some degree of protection, they were insufficient to meet the challenges ahead, and this time the EU acted to ensure that regulation does not lag far behind technology. The AI Act aims to guarantee that European citizens can trust what AI has to offer. Although most AI systems pose limited or no risk, some carry dangers that could lead to undesirable outcomes, and this is precisely what the regulation seeks to prevent.

We invite you to read the full text to learn every last detail.
 

And what’s next?

We all know that the world's first regulation governing AI entered into force on August 1, 2024, and will apply progressively over the next two years. But what does this mean? Let's review it!

EU member states have one year (until August 2, 2025) to designate the national authorities that will oversee the implementation of the regulation. At the European level, the Commission's AI Office will be the main body responsible for implementing the AI Act, as well as the authority in charge of enforcing the rules on general-purpose AI models. Other advisory bodies will also assist the AI Office.

To ease the transition before the regulation becomes fully applicable in two years, the Commission has launched the AI Pact, inviting AI developers to voluntarily adopt the key obligations of the AI Act ahead of the legal deadlines. Do you think they will take the step?


For more information about the regulation, we recommend the following readings:
