The Cost of Non-compliance With the EU AI Act: A Look Into the Penalty Provisions
The EU AI Act is a new European Union regulation with the potential to reshape our daily lives and steer the future development of artificial intelligence (AI) technologies. Adopted to ensure the safe and ethically acceptable use of AI technologies, it came into effect on August 1, 2024. Companies and businesses that use, buy, deploy, or develop AI models are now surely wondering what their obligations under this regulation are and what consequences they might face for failing to comply.
Although it is crucial for companies to adjust their operations to meet the EU AI Act’s requirements to avoid fines, there is no need to panic. The regulation will not be fully enforced immediately – its provisions will gradually become mandatory over the span of two years, with full enforcement by August 2026.
As advised by the EU authorities, it is best for companies both within and outside the Union (where the EU AI Act applies to them) to prepare for its implementation in a timely manner and ensure compliance, thereby avoiding steep fines that exceed even those foreseen by the GDPR.
The Best Way to Avoid Penalties Is to Comply With the EU AI Act
Organizations that adhere to the obligations under the EU AI Act are sure to avoid the penalties. However, a crucial question arises: which organizations need to ensure compliance, and does the AI Act apply to all AI models? Understanding who must comply and the extent of the Act’s reach is essential for companies to effectively navigate these new regulations and prevent potential fines.
Responsible entities under the EU AI Act
The EU AI Act defines specific roles and responsibilities for various stakeholders involved in the lifecycle of AI systems. Understanding who these stakeholders are is essential for ensuring compliance and avoiding penalties.
Developers and Providers are responsible for creating, supplying, or marketing AI systems. They must ensure their systems meet all regulatory requirements related to safety, transparency, and documentation.
Users of AI systems within the EU must adhere to transparency requirements and manage associated risks, ensuring that end-users are informed about their interactions with AI.
Importers and Distributors play a critical role in ensuring that AI systems entering the EU market comply with regulations. They are tasked with maintaining documentation, verifying compliance, and reporting any issues to national authorities.
Each stakeholder has specific obligations that must be met to avoid significant fines. Understanding these responsibilities helps organizations mitigate the risk of non-compliance and its associated penalties.
Additionally, the Act extends its jurisdiction beyond EU borders, meaning that organizations and entities outside of the EU (e.g., in countries like Serbia) that aim to market their AI models within the EU are also subject to the Act’s requirements and must ensure compliance.
Understanding the Risk Classification System of the EU AI Act
The EU AI Act introduces a structured approach to regulating AI based on risk levels. This classification system is fundamental in determining the extent of regulatory obligations and the penalties for non-compliance.
The Act categorizes AI systems into four risk tiers:
- Unacceptable Risks: AI practices posing unacceptable risks are outright banned. These include activities like social scoring and manipulative techniques, with severe penalties for violations.
- High Risks: AI systems classified as high-risk are subject to stringent regulations, including rigorous compliance obligations throughout their lifecycle. These systems, such as those used in sensitive areas like medical devices, must meet high standards for transparency and monitoring.
- Limited Risks: AI systems with limited risks are required to follow transparency obligations, but the regulatory requirements are less stringent compared to high-risk systems.
- Minimal Risks: For AI systems deemed to pose minimal risks, stakeholders are encouraged to develop voluntary codes of conduct. While not legally binding, these codes can help in managing potential risks and avoiding future issues.
Understanding this risk-based approach is crucial for organizations to assess their AI systems, implement necessary measures, and avoid costly penalties. The classification system helps define the obligations and the associated responsibilities for each type of AI system.
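For organizations triaging their AI portfolios, this taxonomy lends itself to a simple internal checklist. The sketch below is a minimal, illustrative encoding of the four tiers and their headline obligations as summarized above; the identifiers (RiskTier, OBLIGATIONS, triage) are our own assumptions for illustration and are not terms defined by the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers (names assumed for illustration)."""
    UNACCEPTABLE = "unacceptable"  # outright banned (e.g., social scoring)
    HIGH = "high"                  # strict lifecycle obligations (e.g., medical devices)
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # voluntary codes of conduct encouraged

# Hypothetical mapping from each tier to the headline obligations described above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited practice: do not develop or deploy",
    RiskTier.HIGH: "conformity assessment, transparency, accuracy, ongoing monitoring",
    RiskTier.LIMITED: "transparency obligations (inform end-users of AI interaction)",
    RiskTier.MINIMAL: "voluntary code of conduct recommended",
}

def triage(tier: RiskTier) -> str:
    """Return the headline compliance posture for a given risk tier."""
    return OBLIGATIONS[tier]

print(triage(RiskTier.HIGH))
# conformity assessment, transparency, accuracy, ongoing monitoring
```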
To illustrate, imagine a small EU-based company creating an AI system to assess candidates for admission to a law school in an EU member state. Due to its potential impact on applicants’ rights, this system is classified as a high-risk AI system under Annex III of the EU AI Act. Consequently, the company must comply with rigorous regulations, including transparency and accuracy requirements, to avoid penalties. This includes conducting AI Conformity Assessments and implementing effective monitoring systems. As the developer of a high-risk AI system, the company is required to meet most of the Act’s provisions, which will undoubtedly incur substantial costs for the small firm.
Fines and Penalties: What Is the Cost of Non-compliance?
Having explored who needs to comply with the EU AI Act and the types of AI systems affected, it is essential to understand the repercussions of non-compliance. The EU AI Act imposes significant penalties for organizations that fail to meet its requirements. These penalties are designed to enforce compliance and ensure the integrity of AI technologies used across the EU. For organizations, especially small and medium-sized enterprises (SMEs), the financial impact of failing to adhere to these regulations can be severe. This section delves into the specifics of these penalties, outlining what organizations can expect if they do not comply with the Act’s provisions.
Tiered Penalty System
Similar to the GDPR, the EU AI Act employs a tiered penalty system. The fines are determined based on the type of AI system, the seriousness of the violation, and the size of the company, and may reach up to a specified maximum amount or a percentage of annual turnover. The penalty framework under the AI Act surpasses even that of the General Data Protection Regulation (GDPR), whose fines can reach €20 million or 4% of annual global turnover.
Essentially, any entity required to adhere to the AI Act’s stipulations may face penalties if it fails to meet these obligations. This encompasses providers, whether individuals or organizations, as well as authorities, institutions, and other entities involved in developing, placing on the market, or operating AI systems. Developers, importers, traders, and deployers of AI systems may also incur fines.
| Types and Severity of Violations | Maximum Penalty | % of the Annual Turnover |
| --- | --- | --- |
| Prohibited AI Violations: Deploying or developing prohibited AI | EUR 35 million | 7% |
| General Compliance Breach: Violation of general regulatory requirements | EUR 15 million | 3% |
| Information Accuracy Violation: Providing false or misleading information to authorities | EUR 7.5 million | 1.5% |
As shown above, the penalty structure is designed to ensure accountability and deter non-compliance.
The Act outlines several categories of penalties for non-compliance, including:
- For Prohibited AI Practices: Deploying or developing AI systems deemed to pose unacceptable risks, which are outright prohibited under the Act, can result in fines of up to €35 million or 7% of the total global annual turnover from the previous fiscal year, whichever is higher. These penalties reflect the seriousness of engaging in activities that are considered harmful or unethical.
- For Non-Compliance with High-Risk and Limited-Risk AI Requirements: Entities failing to meet requirements for high-risk and limited-risk AI systems—such as data quality, technical documentation, transparency, human oversight, and robustness—may face fines of up to €15 million or 3% of the total global annual turnover from the previous fiscal year, whichever is greater. This tier of penalties addresses the need for rigorous adherence to standards designed to ensure the safety and effectiveness of AI technologies.
- For Providing False or Misleading Information: Fines for delivering false, incomplete, or misleading information to notified bodies and competent authorities can reach up to €7.5 million or 1.5% of the total global annual turnover from the previous fiscal year, whichever is higher. This provision emphasizes the importance of transparency and accuracy in reporting to the authorities.
In addition to these set rules, the EU AI Act acknowledges the varying financial capacities of different organizations, including small and medium-sized enterprises (SMEs). Unlike the GDPR, which applies uniform penalties regardless of company size, the AI Act adjusts fines for SMEs. While SMEs are still subject to penalties, the Act ensures that the fines are applied more proportionately. Specifically, the Act provides for the same maximum amounts and percentages of fines as for larger organizations but applies the lower of the two amounts to SMEs, reflecting their financial constraints.
For instance, if the aforementioned small EU-based company developing a high-risk AI system for evaluating law school candidates fails to comply with the AI Act’s requirements, such as ensuring transparency and accuracy in its AI system, it may face penalties under the second tier (up to €15 million or 3% of turnover). Because the company qualifies as an SME, the Act stipulates that the lower of the two amounts applies.
In this case, if 3% of the company’s annual turnover amounts to €150,000, which is lower than the €15 million maximum for this tier, the company would be liable to pay €150,000. This reflects the Act’s approach of applying the lower of the two potential penalty amounts, providing some financial relief compared to larger organizations while still representing a significant cost for the SME.
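To make the arithmetic concrete, here is a minimal sketch of the two selection rules described above: the higher of the fixed cap or the turnover percentage for standard entities, and the lower of the two for SMEs. The function name and parameters (applicable_fine, cap_eur, turnover_pct, is_sme) are our own illustrative assumptions, not terms from the Act.

```python
def applicable_fine(cap_eur: float, turnover_pct: float,
                    annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum applicable fine under the tiered penalty rules.

    Standard entities: the HIGHER of the fixed cap or the turnover share.
    SMEs (per the proportionality rule above): the LOWER of the two.
    """
    turnover_share = annual_turnover_eur * turnover_pct / 100
    return min(cap_eur, turnover_share) if is_sme else max(cap_eur, turnover_share)

# The article's worked example: an SME whose 3% turnover share is EUR 150,000
# (i.e., annual turnover of EUR 5 million) under the EUR 15 million / 3% tier.
print(applicable_fine(15_000_000, 3, 5_000_000, is_sme=True))  # 150000.0

# The same breach by a large company with EUR 1 billion in turnover.
print(applicable_fine(15_000_000, 3, 1_000_000_000))           # 30000000.0
```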
Overall, the penalties outlined in the EU AI Act are designed to ensure strict adherence to its standards, promoting the development of safe and reliable AI technologies within the EU. By imposing substantial fines for non-compliance, the Act aims to hold all stakeholders accountable for maintaining high levels of safety and transparency – the new EU AI standards.
Extension of Penalty Provisions to EU Authorities
The EU AI Act introduces a comprehensive framework for enforcing compliance, leveraging a decentralized approach where Member States are empowered to establish their own regulations and penalties. Each Member State is required to appoint at least one national authority responsible for overseeing adherence to the Act and conducting market surveillance.
Besides being responsible for enforcing the Act, it is entirely possible that some EU institutions themselves use AI systems. What happens if these entities fail to comply with the Act’s provisions or fall out of alignment with its requirements?
The EU AI Act includes a special category for Union institutions, bodies, offices, and agencies that use AI systems. If these institutions fail to comply with the Act’s provisions, they will also face penalties. However, the fines imposed on these institutions are significantly lower compared to those for private companies. For violations related to prohibited AI systems, the fines for EU institutions can be up to €1.5 million, while other breaches may incur fines of up to €750,000. Institutions have the right to contest the fines before they are finalized. The fines collected are allocated to the EU’s general budget, and the European Data Protection Supervisor (EDPS) is responsible for enforcing these fines and reporting the collected amounts annually to the European Commission.
EU Bodies and Institutions

| Types and Severity of Violations | Maximum Penalty |
| --- | --- |
| Prohibited AI Violations: Deploying or developing prohibited AI | EUR 1.5 million |
| General Compliance Breach: Violation of general regulatory requirements | EUR 750,000 |
GPAI and GPAISR Classification
As noted earlier, the EU AI Act takes a four-tier, risk-based approach comprising:
- Unacceptable Risk,
- High Risk,
- Limited Risk,
- Minimal Risk systems.
In addition to these classifications, the Act also addresses the concept of General-Purpose AI (GPAI) models and those with Systemic Risks (GPAISR). GPAI models are versatile and used across various sectors, while GPAISR models are identified based on their high-impact capabilities, either due to technical prowess or broad market reach. Both GPAI and GPAISR models impose additional obligations on their providers to effectively manage the associated risks.
However, a potential issue with this new concept is that the European Commission has the authority to designate certain AI models as GPAISR, which can lead to the imposition of new obligations and a specific system of penalties. This creates a potential for arbitrariness in decision-making and misuse, as the criteria for such designations are currently defined in legal terms rather than established through practical standards.
Despite these issues, the EU AI Act still provides penalties for this category of AI systems and the organizations that handle them. For instance, companies providing GPAI models could face fines of up to 3% of their annual global turnover or €15 million, whichever is higher, if they fail to comply with regulations, fail to provide requested documents or information, or obstruct access to their AI models for evaluation. The severity of the fine depends on the nature and duration of the infringement. Companies will be notified of any preliminary findings and will have an opportunity to respond before a final decision is made.
GPAI and GPAISR Provision Non-Compliance

| Types and Severity of Violations | Maximum Penalty | % of the Annual Turnover |
| --- | --- | --- |
| Specific Compliance Breach: Violation of specific regulatory requirements for GPAI and GPAISR | EUR 15 million | 3% |
To address these issues and concerns regarding GPAI and GPAISR, it is worth noting that the Court of Justice of the European Union has unlimited jurisdiction to review decisions by the Commission on fines. This allows the Court to cancel, reduce, or increase the fines imposed, offering a measure of oversight and fairness in the enforcement of penalties, especially given that the Commission independently decides whether to designate an AI system as GPAISR.
How to Properly Prepare and Avoid Penalties
To properly prepare for the EU AI Act and avoid potential penalties, organizations need a thorough understanding of their obligations under the EU AI Act. This involves staying up to date on the classification of their AI systems, implementing necessary safeguards, and conducting regular compliance checks. High-risk AI systems require special attention, as they come with strict requirements for transparency, data governance, and accountability.
Furthermore, it is crucial to consult with legal experts specializing in IT and AI law, as well as AI governance professionals, to ensure proper alignment with regulatory demands. Although the two-year window for full implementation of the AI Act may seem distant, organizations should begin the process early to ensure timely compliance.