AI Act Explained: Understanding the World’s First Comprehensive AI Regulation

September 23, 2024

Over the past decade, major advances in Artificial Intelligence (AI) have had a significant impact on many industries. Today, AI technologies are widely adopted by online platforms and across sectors including manufacturing, healthcare, finance, and retail. Governments around the world have also begun integrating AI to enhance productivity, from simple automation to chatbots and advanced automated decision-making systems.

The realization that AI is more than a passing trend, heightened by the rise of generative AI over the past year, has motivated many governments and international organizations to develop AI-specific regulations. In this context, the European Union is playing a pioneering role by developing and adopting the AI Act, a landmark legal instrument that sets standards for the development, commercialization, and use of AI systems.

What Risks Are Listed by the AI Act and How Are These Risks Being Handled? 

Among other things, the AI Act seeks to address bias in AI systems, an incredibly difficult task that requires sustained awareness. To comply with the GDPR as well as the AI Act, organizations must be extremely careful about the algorithmic models that form the basis of their AI applications, as well as the data fed into those models. Overcoming bias is not only a technical challenge; it also calls for a dedicated team focused on identifying bias, both in people and in AI.

It is worth noting that the AI Act recognizes that not all applications or deployments of AI pose similar risks. The law therefore classifies AI systems into four risk categories:

Unacceptable 

Applications that use subliminal or manipulative techniques, as well as social scoring systems deployed by public authorities, are prohibited. The use of real-time remote biometric identification systems by law enforcement in publicly accessible spaces is also banned.

High Risk 

Applications of this type are used in areas such as transport, education, and employment. Organizations planning to deploy a high-risk AI system within the European Union are legally required to conduct a prior conformity assessment and meet a wide range of safety and security requirements. In addition, the European Commission will create and maintain a public registry of high-risk systems, and providers will be required to supply details of their high-risk AI systems to the parties involved in order to enhance transparency.

Limited Risk

This category covers AI systems that must adhere to specific transparency obligations. For example, anyone interacting with a chatbot must be informed that they are speaking with a program and be given the choice to continue the conversation or to request a human operator.
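To make the obligation concrete, here is a minimal sketch of how such a disclosure and hand-off might look in practice. The function and session names (handle_message, route_to_human, generate_ai_reply) are hypothetical illustrations, not terms from the AI Act, and a real deployment would involve far more than this.

```python
# Minimal sketch of a limited-risk transparency obligation: tell the user they
# are talking to an AI system and let them escalate to a human operator.
# All names here are hypothetical; they do not come from the AI Act itself.

DISCLOSURE = (
    "You are chatting with an automated assistant (an AI system). "
    "Reply 'human' at any time to be connected to a human operator."
)

def route_to_human(session: dict) -> str:
    # Hypothetical escalation hook to a human support queue.
    return "Connecting you to a human operator..."

def generate_ai_reply(session: dict, text: str) -> str:
    # Placeholder for the actual model call.
    return f"(AI) You said: {text}"

def handle_message(session: dict, text: str) -> str:
    # Show the disclosure once, before the first AI-generated reply.
    if not session.get("disclosed"):
        session["disclosed"] = True
        return DISCLOSURE
    # Honour the user's choice to switch to a human.
    if text.strip().lower() == "human":
        return route_to_human(session)
    return generate_ai_reply(session, text)

if __name__ == "__main__":
    session = {}
    print(handle_message(session, "Hello"))   # -> disclosure message
    print(handle_message(session, "Hello"))   # -> AI reply
    print(handle_message(session, "human"))   # -> hand-off to a human
```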

Minimal Risk

Applications in this category are the most common and account for the majority of AI applications in use today. Examples include spam filters, AI in video games, and inventory management systems.

What Defines a High-Risk AI System?

Alongside a clear definition of “high-risk”, the legislation introduces a robust method for determining which AI systems fall into that category. The risk assessment is based on the intended purpose of the AI system, in line with existing EU legislation. This means the level of risk is assessed according to the function the AI system performs, its specific purpose, and the way it is used.

A list annexed to the law sets out the scenarios the EU considers high-risk, and the Commission is committed to keeping this list updated and relevant. Systems that only perform limited, procedural functions, that merely improve the result of an existing human activity, or that do not directly affect human decisions or actions are not considered high-risk. However, an AI system is always considered high-risk if it profiles individuals.
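As a rough illustration of that determination logic, the sketch below encodes the rules just described. The area names and boolean flags are simplified placeholders of my own choosing; the actual assessment under the Act is a legal analysis, not a lookup table.

```python
# Illustrative sketch of the high-risk determination described above.
# The area labels and flags are hypothetical simplifications, not legal criteria.

ANNEX_HIGH_RISK_AREAS = {
    "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

def is_high_risk(intended_area: str,
                 narrow_procedural_task: bool,
                 only_improves_prior_human_work: bool,
                 profiles_individuals: bool) -> bool:
    # Systems in a listed area that profile individuals are always high-risk.
    if profiles_individuals:
        return intended_area in ANNEX_HIGH_RISK_AREAS
    # Listed systems can fall out of scope if they only perform narrow,
    # procedural tasks or merely improve the result of prior human work.
    if narrow_procedural_task or only_improves_prior_human_work:
        return False
    return intended_area in ANNEX_HIGH_RISK_AREAS

if __name__ == "__main__":
    # Hypothetical example: a CV-screening tool used in hiring that profiles candidates.
    print(is_high_risk("employment",
                       narrow_procedural_task=False,
                       only_improves_prior_human_work=False,
                       profiles_individuals=True))   # -> True
```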

Some examples of high-risk use cases as defined in the AI Act:

– Certain critical infrastructure, e.g., road traffic, water, gas, heating, and electricity supply;

– Vocational education and training, e.g., to assess learning outcomes to guide the learning process and to monitor cheating;

– Employment, worker management and access to self-employment, e.g., to place job advertisements, screen and filter job applications, and to assess candidates;

– Access to essential private and public services and benefits (e.g., health care), including assessing the creditworthiness of individuals and the risk assessment and pricing of life and health insurance;

– Certain systems used in law enforcement, border control, judiciary work and democratic processes;

– Evaluation and classification of emergency calls;

– Biometric systems for the identification, classification, and recognition of emotions (outside the prohibited categories);

– The recommendation systems of very large online platforms are not included, since they are already covered by other legislation (DMA/DSA).

How are General-Purpose AI Models Regulated?

General-purpose AI models, including large generative AI models, can be used for a multitude of tasks. A provider that builds its system on a general-purpose AI model must ensure that the resulting system is safe and compliant with the AI Act.

Thus, the AI Act requires providers of such models to supply essential information to downstream users of the systems, so that they can better understand how the models work. Model providers must also take steps to ensure copyright compliance when training their models. Moreover, some models may pose systemic risks because of their advanced capabilities or widespread use. At present, general-purpose AI models trained with a total compute of more than 10^25 FLOPs are presumed to pose systemic risk, since greater training compute generally yields more capable models. The AI Office (established within the Commission) has the power to adjust this threshold as technology evolves and may also classify other models as systemic based on additional criteria, such as the number of users or the model’s level of autonomy.
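For a sense of scale, the sketch below compares a rough training-compute estimate against the 10^25 FLOP threshold. The threshold comes from the Act itself; the 6 × parameters × tokens formula is a common rule of thumb for dense transformer training compute, not something the Act prescribes, and the model sizes in the example are hypothetical.

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOP threshold for
# presuming systemic risk. The 6 * parameters * tokens estimate is a common
# heuristic for dense transformer training compute, not part of the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold in the Act (adjustable by the AI Office)

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate of total training compute for a dense transformer."""
    return 6.0 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

if __name__ == "__main__":
    # Hypothetical example: a 70B-parameter model trained on 15T tokens.
    flops = estimated_training_flops(70e9, 15e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")          # ~6.3e24 FLOPs
    print("Presumed systemic risk:", presumed_systemic_risk(70e9, 15e12))  # False
```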

Providers of models with systemic risk are obliged to identify and mitigate those risks, report serious incidents, perform advanced testing and evaluation of their models, ensure cybersecurity, and disclose information about their models’ energy consumption. They are expected to work with the European AI Office to develop codes of practice, which will serve as the main tool for defining standards in collaboration with other experts. A scientific panel will play a key role in monitoring general-purpose AI models.

Who Does the AI Act Apply To?

The legal framework will apply to both public and private actors inside and outside the EU, as long as the AI system is placed on the EU market or its use affects individuals in the EU. It will cover both providers (e.g., the developer of a CV screening tool) and deployers of high-risk AI systems (e.g., a bank that buys such a screening tool).

Certain obligations also apply to providers of general-purpose AI models, including large generative AI models. Providers of free and open-source models are exempt from most of these obligations, unless their models pose systemic risks. The obligations do not apply to research, development, and prototyping activities that precede market launch, and the regulation likewise does not apply to AI systems used exclusively for military, defense, or national security purposes, regardless of the type of entity carrying out those activities.

The European Union’s strategy regarding AI focuses on upholding excellence and trust. The goal is to promote research and development, ensure safety, and safeguard rights. It is crucial for individuals and businesses to reap the benefits of AI while feeling secure and protected.
