AI Act
In 2024, the European Union moved to regulate the use of artificial intelligence through the AI Act. This pioneering initiative aims to establish a legal framework for the use of AI across the Union.
The AI Act in a nutshell
The AI Act, proposed by the European Commission, is an ambitious initiative. Its goal? To create an environment in which AI innovation can flourish while protecting citizens. Nothing less.
In other words, it is about ensuring that AI systems respect fundamental EU values such as privacy, transparency and non-discrimination.
Risk classification and obligations
The AI Act classifies AI systems into four risk categories: unacceptable, high, limited and minimal.
- Unacceptable-risk systems, such as those using subliminal manipulation techniques, are prohibited.
- High-risk systems, such as those used in critical infrastructure or essential services, are subject to strict obligations, including conformity assessments and regular audits.
- Limited-risk systems (such as AI systems for generating or manipulating images) must inform users that they are interacting with an AI system.
- Minimal-risk systems, such as those used in video games or spam filters, are not subject to any specific obligations, but vendors are strongly encouraged to follow voluntary codes of conduct (with particular attention to environmental sustainability and accessibility for people with disabilities).
These measures are designed not only to prevent potential abuses, but also to reinforce public confidence in AI.
WHEN WILL THE AI ACT BE IMPLEMENTED?
The legislative draft, originally introduced in April 2021, has evolved through several phases of consultation and review. These discussions clarified the requirements for general purpose AI systems and defined risk categories for AI applications. This entire process culminated on August 1, 2024, when the AI Act came into force in the European Union.
It will be implemented gradually.
- February 2025: implementation of Chapters I (general provisions) and II (prohibited AI practices).
- August 2025: implementation of Chapter III, Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance) and Chapter XII (penalties).
- August 2026: implementation of the remainder of the Regulation, with the exception of obligations relating to certain high-risk AI systems.
- August 2027: application of the entire Regulation.
WHAT ARE THE CONSEQUENCES OF THE AI ACT?
Impact on innovation and competitiveness
One of the main challenges of the AI Act? Finding the balance between regulation and innovation. Because while AI presents dangers, it is also a veritable breeding ground for opportunities and innovation. As a result, industry players are wondering how this regulation could affect the EU’s competitiveness on the global stage.
However, in the long term, a clear regulatory framework could, on the contrary, attract investment and strengthen the EU’s position as a leader in ethical AI. We will keep you informed…
Mixed reactions
The AI Act triggered mixed reactions.
- On the one hand, human rights advocates and consumer protection groups welcome the EU’s efforts to protect citizens from the potential risks of AI.
- On the other hand, some tech companies express concern about the complexity and cost of complying with the new regulations.
In any case, the European Commission remains optimistic that the AI Act will create an enabling environment for responsible, human-centered AI. By setting strict rules, the EU hopes not only to protect its citizens, but also to influence the global regulation of AI. A great project.
Complex application
The application of the AI Act will not be without difficulties:
- National authorities will need to be trained and equipped to monitor and enforce the new rules;
- Ensuring consistency in the application of the regulation across different EU Member States will be crucial;
- International cooperation will be essential, as many tech companies operate on a global scale.
Last but not least, the challenge lies in ensuring that the regulation remains relevant in the face of rapidly evolving technology. AI is a constantly evolving field, and laws must be flexible enough to adapt to future innovations while maintaining a high level of protection.
AND IN OTHER PARTS OF THE WORLD?
Let’s conclude by taking a step back: the AI approach adopted by the European Union remains very different from that of other regions of the world. In the United States, for example, AI regulation is more fragmented and is often left to the initiative of companies. In China, the focus is on rapid innovation, sometimes at the expense of privacy protection.
The EU’s approach, based on the protection of fundamental rights, could become a model for other countries seeking a balance between innovation and ethics…
Conclusion
Ultimately, the AI Act is not just a matter of legal compliance—it is also an opportunity for Europe to shape the future of technology in a way that reflects its values and priorities. If it rises to the challenges and seizes the opportunities presented by this regulation, the EU can play a leading role in defining the AI of the future.