An overview of the EU’s Artificial Intelligence Act

What is the EU AI Act?

The European Union’s Artificial Intelligence Act (AI Act) is a regulatory framework for artificial intelligence within the EU. Proposed by the European Commission on 21 April 2021, it is the first comprehensive AI law in the world. The Act seeks to categorize and regulate AI applications according to their potential to cause harm, sorting them into four risk levels (“unacceptable,” “high,” “limited,” and “minimal”) plus an additional category for general-purpose AI.

How “AI System” is defined in the EU AI Act

The EU AI Act derives its definition of an “AI system” from the recently updated definition used by the Organisation for Economic Co-operation and Development (OECD). Using the OECD definition provides a basis for international alignment and continuity with other laws and codes. The OECD defines an AI system as follows:

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

Key Objectives of the EU AI Act

  • To ensure that AI systems deployed on the EU market are safe and compliant with existing EU legislation.
  • To establish legal certainty, which will encourage investment and innovation in AI.
  • To enhance governance and the effective enforcement of existing EU law on the fundamental rights and safety requirements applicable to AI systems.
  • To help create a single market for lawful, safe, and trustworthy AI applications while avoiding market fragmentation.

Who will the AI Act Affect?

The AI Act explicitly defines the various AI actors it covers: providers, deployers, importers, distributors, and product manufacturers. All parties involved in the development, use, import, distribution, or manufacture of AI models can be held accountable. Furthermore, the AI Act applies to providers and deployers of AI systems located outside the EU (for example, in Switzerland) if the output produced by the system is intended to be used within the EU.

Classification of AI Systems under the Act’s Risk-Based Approach

Unacceptable Risk
  Compliance level: Prohibited.
  Description: Banned because their use endangers people’s safety, security, and fundamental rights.
  Examples: AI tools for social scoring (which may result in unfair treatment), emotion recognition systems in the workplace, biometric categorization to infer sensitive attributes, and predictive policing of individuals. Certain exemptions will apply.

High Risk
  Compliance level: Conformity assessment.
  Description: Permitted, subject to compliance with AI Act requirements, including conformity assessments before market release.
  Examples: AI used in recruitment, biometric identification and surveillance systems, safety components (e.g., in medical devices and automobiles), access to essential private and public services (e.g., creditworthiness, benefits, health and life insurance), and critical infrastructure security.

Limited Risk
  Compliance level: Transparency.
  Description: Permitted, subject to transparency and disclosure requirements, provided the use represents a low risk.
  Examples: AI systems that interact directly with people (e.g., chatbots) and visual or audio “deepfake” content generated or manipulated by an AI system.

Minimal Risk
  Compliance level: Voluntary codes of conduct.
  Description: Permitted, with no additional AI Act requirements, for uses that pose minimal risk.
  Examples: By default, all other AI systems that do not fit into the above categories, such as photo-editing software, product recommender systems, spam filters, and scheduling software.
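
For teams triaging their own systems, the tiered structure can be pictured as a simple mapping from use case to compliance obligation. The Python sketch below is illustrative only: the tier names, the example mapping, and the compliance_obligation helper are our own assumptions, and real classification under the Act turns on its annexes and legal analysis rather than a lookup table.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers mirroring the table above."""
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment required"
    LIMITED = "transparency obligations"
    MINIMAL = "voluntary codes of conduct"


# Hypothetical mapping of example use cases to tiers, echoing the table.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}


def compliance_obligation(use_case: str) -> str:
    """Look up the illustrative tier; default to MINIMAL when unlisted."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk, {tier.value}"


if __name__ == "__main__":
    for uc in EXAMPLE_USE_CASES:
        print(compliance_obligation(uc))
```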

Timeline of EU AI Act Implementation

  • February 2020: The European Commission publishes its “White Paper on Artificial Intelligence.”
  • October 2020: EU leaders debate the regulation of AI.
  • 21 April 2021: The AI Act is officially proposed.
  • 6 December 2022: The European Council adopts its general orientation.
  • 9 December 2023: The Council and Parliament reach political agreement.
  • Q2–Q3 2024: Expected entry into force.
  • Immediately after entry into force (Q2–Q3 2024): The European Commission begins work to establish the AI Office (the EU oversight body), while Member States make provisions to establish AI regulatory sandboxes.
  • During the implementation period: The European Commission launches the AI Pact, allowing firms to collaborate voluntarily with the Commission to meet AI Act commitments ahead of the legislative deadlines.
  • 6 months after entry into force (Q4 2024–Q1 2025): Prohibitions come into effect.
  • 12 months after entry into force (Q2–Q3 2025, TBC): Some requirements for general-purpose AI (GPAI) models may come into effect (details remain to be officially confirmed).
  • 24 months after entry into force (Q2–Q3 2026): All other AI Act requirements come into effect (e.g., those for high-risk AI systems).

* Applicability of the AI Act will be progressive. AI applications that present “unacceptable” risk should be banned six months after entry into force, provisions for general-purpose AI should become applicable 12 months after entry into force, and the Act should become fully applicable 24 months after entry into force, so most of its requirements are likely to take effect from 2025 onward.
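
Because the later deadlines are all expressed relative to the entry-into-force date, they can be computed mechanically once that date is known. The sketch below uses a hypothetical entry-into-force date of 1 August 2024 purely for illustration; add_months is a small helper of our own, not part of any official tooling.

```python
import calendar
from datetime import date


def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping the day when needed."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)


# Hypothetical entry-into-force date, chosen purely for illustration.
entry_into_force = date(2024, 8, 1)

milestones = {
    "Prohibitions apply (+6 months)": add_months(entry_into_force, 6),
    "GPAI provisions apply (+12 months)": add_months(entry_into_force, 12),
    "Fully applicable (+24 months)": add_months(entry_into_force, 24),
}

for label, deadline in milestones.items():
    print(f"{label}: {deadline:%d %B %Y}")
```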

Penalties for Non-Compliance with the EU AI Act

  • Breach of the AI Act’s prohibitions: fines of up to €35 million or 7% of total worldwide annual turnover (revenue), whichever is higher.
  • Non-compliance with obligations applying to providers of high-risk AI systems, authorized representatives, importers, distributors, deployers, and notified bodies: fines of up to €15 million or 3% of total worldwide annual turnover (revenue), whichever is higher.
  • Supplying incorrect or misleading information in response to a request from notified bodies or national competent authorities: fines of up to €7.5 million or 1.5% of total worldwide annual turnover (revenue), whichever is higher.
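
Since each fine is defined as the higher of a fixed cap and a percentage of worldwide annual turnover, the “whichever is higher” rule is straightforward to illustrate. The percentages and caps below match the table; the firm’s €2 billion turnover is a made-up figure for the example.

```python
def max_fine(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """Return the higher of the fixed cap and the turnover-based amount."""
    return max(cap_eur, turnover_eur * pct)


# Hypothetical firm with €2 billion in worldwide annual turnover.
turnover = 2_000_000_000

print(f"Prohibition breach:      €{max_fine(turnover, 35_000_000, 0.07):,.0f}")   # €140,000,000
print(f"High-risk obligations:   €{max_fine(turnover, 15_000_000, 0.03):,.0f}")   # €60,000,000
print(f"Misleading information:  €{max_fine(turnover, 7_500_000, 0.015):,.0f}")   # €30,000,000
```

For this firm, the turnover-based amount exceeds the fixed cap in every case; for a smaller firm, the fixed cap would bind instead.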

Conclusion

The EU AI Act is a comprehensive legislative framework for regulating the deployment and operation of AI technologies within the European Union. By sorting AI applications into a risk-based hierarchy, it sets out regulatory requirements that range from stringent compliance for high-risk categories to minimal oversight for lower-risk applications. The Act applies not only to EU-based entities but also to non-EU stakeholders whose AI systems affect the EU market. Its balance between fostering technological innovation and safeguarding public interests through fundamental rights and safety standards sets a global precedent for AI governance, while its phased timeline and rigorous enforcement mechanisms underscore the EU’s commitment to a unified digital market for ethical and secure AI technologies.

Author

  • Sanket Kamble

    Sanket Kamble is a seasoned cybersecurity professional with six years of experience, currently leading the Compliance & Audit practice at Network Intelligence. He holds certifications including Certified Ethical Hacker (CEH) and eLearnSecurity Junior Penetration Tester version 2 (eJPTv2). As Practice Lead, he steers projects aligned with stringent standards such as ISO 27001, PCI DSS, and RBI circulars, alongside conducting comprehensive assessment activities. His philosophy centres on continual adaptation and growth in response to the ever-evolving threats and challenges of the cybersecurity landscape.

