The AI Act Has Entered into Force

Artificial Intelligence is set to transform a wide array of global sectors, offering both groundbreaking benefits and unprecedented challenges. The European Union (EU), acknowledging AI’s crucial role, has developed a thorough regulatory framework to tackle these challenges and ensure AI’s potential is realized responsibly.

The EU AI Act, which was published in the Official Journal of the EU on July 12, 2024, represents a pivotal advancement in this initiative.

In this blog, we will explore the essential provisions, timelines, and implications of the EU AI Act, examining how it establishes a global benchmark for AI regulation.


Why the AI Act Matters for Businesses


The introduction of the AI Act signifies a major shift in how businesses must handle the development, deployment, and use of artificial intelligence.

Companies need to prepare for heightened accountability throughout the AI lifecycle.

Violations of the Act could have far-reaching consequences, potentially exposing managers and CEOs to personal liability. This heightened accountability means developers must design their AI systems with regulatory compliance in mind, ensuring adherence to the Act’s rigorous standards.

By doing so, businesses can mitigate risks, build trust, and responsibly leverage AI’s transformative potential.


What Are the Penalties for Non-Compliance with the AI Act?


As outlined in our previous blog, the consequences of violating or failing to comply with the AI Act are expected to surpass those associated with GDPR infractions.

The AI Act imposes severe penalties for non-compliance, highlighting the critical importance of adhering to its regulations.

Companies that engage in prohibited AI practices could face fines of up to EUR 35 million or 7% of their total worldwide annual turnover for the preceding financial year, whichever amount is higher.

Additionally, for non-compliance with most other provisions, the Act stipulates fines of up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher.

These substantial penalties emphasize the necessity of following the AI Act’s guidelines, encouraging businesses to adopt ethical and responsible AI practices to avoid significant financial consequences.
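As a quick illustration of the “whichever is higher” rule behind both fine tiers, the maximum exposure can be sketched as a simple calculation (the turnover figures below are hypothetical):

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Upper bound of the fine: the higher of the fixed cap and the
    given percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, pct_cap * turnover_eur)

# Prohibited-practice violations: up to EUR 35 million or 7% of turnover.
# For a company with EUR 1 billion turnover, the percentage cap dominates:
max_fine(1_000_000_000, 35_000_000, 0.07)  # → 70_000_000 (EUR 70 million)

# Most other violations: up to EUR 15 million or 3% of turnover.
# For a company with EUR 100 million turnover, the fixed cap dominates:
max_fine(100_000_000, 15_000_000, 0.03)  # → 15_000_000 (EUR 15 million)
```

In other words, the fixed cap acts as a floor on the maximum fine for smaller companies, while the percentage cap scales the exposure for large groups.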


Timeline for Entry into Force and Implementation


The EU AI Act entered into force on August 1, 2024, 20 days after its publication in the Official Journal.

However, its provisions will be rolled out gradually over the subsequent years to ensure a smooth transition and give stakeholders adequate time to achieve compliance.

Most of the Act’s provisions will apply from August 2, 2026, 24 months after its entry into force.

From August 2, 2026, several key obligations will take effect. These obligations will particularly impact high-risk AI systems outlined in Annex III, which covers applications in areas such as biometrics, critical infrastructure, education, employment, access to essential public services, law enforcement, immigration, and justice administration.

By this date, member states must have enacted rules regarding penalties, including administrative fines, for non-compliance. Additionally, they must have established at least one functioning AI regulatory sandbox to support the safe and innovative development of AI technologies. The European Commission will also assess and potentially revise the list of high-risk AI systems to ensure its ongoing relevance and thoroughness.

Certain critical provisions, however, will have different implementation timelines due to their specific urgency and complexity.


Exceptions to the General Timeline


  • Prohibitions on Unacceptable Risk AI: The regulations concerning the prohibition of AI systems deemed to pose unacceptable risks to safety, fundamental rights, and public interests will come into effect six months after the Act’s entry into force, on February 2, 2025. This expedited timeline highlights the EU’s commitment to promptly addressing the most severe risks associated with AI technologies.
  • Obligations for Providers of General-Purpose AI Models: The requirements for providers of general-purpose AI models will take effect 12 months after the Act’s entry into force, on August 2, 2025. By this date, member states are expected to have designated their competent authorities. Additionally, the European Commission will review the legislative framework and consider amendments to the list of prohibited AI practices. This timeframe allows providers to adjust to the new regulations and for authorities to set up the necessary oversight mechanisms.
  • Post-Market Monitoring: The European Commission will introduce regulations on post-market monitoring 18 months after the Act comes into force, on February 2, 2026. This provision ensures continuous oversight and accountability for AI systems once they are deployed, reinforcing the EU’s focus on ongoing vigilance.
  • Provisions Related to High-Risk AI Systems: Obligations concerning high-risk AI systems not listed in Annex III but used as safety components of products will become applicable 36 months after the Act’s entry into force, on August 2, 2027. Additionally, regulations will apply to high-risk AI systems that are products themselves and require third-party conformity assessments under existing EU regulations, such as those for toys, radio equipment, in vitro diagnostic medical devices, civil aviation security, and agricultural vehicles.
  • Large-Scale Information Technology Systems: By the end of 2030, obligations will extend to certain AI systems that are components of large-scale information technology systems governed by EU law, including those related to freedom, security, and justice, such as the Schengen Information System. This extended timeline reflects the complexity and importance of these systems, ensuring thorough vetting and security for AI integration.
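One way to read the staggered dates above is that each milestone falls on the day after the n-month anniversary of the entry-into-force date. A minimal sketch of that pattern (the milestone labels are paraphrased from the provisions above):

```python
from datetime import date, timedelta

ENTRY_INTO_FORCE = date(2024, 8, 1)  # 20 days after OJ publication on July 12, 2024

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of months."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)

def milestone(months: int) -> date:
    """Application date: the day after the n-month anniversary of entry into force."""
    return add_months(ENTRY_INTO_FORCE, months) + timedelta(days=1)

timeline = {
    6: "Prohibitions on unacceptable-risk AI",
    12: "Obligations for general-purpose AI model providers",
    18: "Post-market monitoring rules",
    24: "Most remaining provisions",
    36: "High-risk systems regulated under existing product legislation",
}
for months, label in timeline.items():
    print(f"{milestone(months)}  {label}")
```

Running the sketch reproduces the dates listed above: February 2, 2025; August 2, 2025; February 2, 2026; August 2, 2026; and August 2, 2027.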


Delegated Acts by the European Commission

The EU AI Act empowers the European Commission to issue delegated acts on several key areas, enhancing the regulatory framework’s flexibility and adaptability. Delegated acts are non-legislative measures that modify or add to the non-essential elements of the legislation. The Commission’s authority to issue these acts is initially valid until August 2, 2029, with an option to extend for an additional five years.

Areas where delegated acts will be issued include:

  • Definitions of AI systems
  • Criteria and use cases for high-risk AI
  • Thresholds for general-purpose AI models with systemic risks
  • Technical documentation requirements for general-purpose AI
  • Conformity assessments
  • EU declarations of conformity


These delegated acts enable the Commission to update and refine the regulatory framework in response to technological progress and emerging risks, ensuring that the EU AI Act remains current and effective.


Codes of Practice and Guidance


The EU AI Act underscores the need for practical guidelines to ensure compliance and promote a consistent understanding among stakeholders.

The AI Office is responsible for developing codes of practice covering the obligations of providers of general-purpose AI models. These codes must be ready by May 2, 2025, at least three months before the corresponding obligations begin to apply, giving stakeholders time to adapt.

Additionally, the European Commission is authorized to provide guidance on various aspects of the Act, including:

  • Reporting incidents involving high-risk AI by August 2, 2025
  • Practical implementation of high-risk AI requirements, including examples of high-risk and non-high-risk use cases by February 2, 2026
  • Prohibited AI practices, as deemed necessary
  • Application of the definition of an AI system, as deemed necessary
  • Requirements for high-risk AI systems, as deemed necessary
  • Practical implementation of transparency obligations, as deemed necessary
  • Interaction of the AI Act with other EU laws and its enforcement, as deemed necessary


This extensive guidance aims to ensure clarity and consistency in the application of the EU AI Act, addressing potential uncertainties and promoting a unified approach across member states.


Implications for AI Development and Deployment


The EU AI Act sets a significant global benchmark for AI regulation, with extensive implications for the development and deployment of AI technologies. By implementing a robust regulatory framework, the EU aims to balance innovation with safety, transparency, and accountability. The Act’s emphasis on high-risk AI systems adopts a risk-based approach, targeting major threats while allowing innovation to flourish in lower-risk areas.

Fostering Responsible Innovation: The Act promotes responsible innovation by establishing clear standards and expectations for AI systems. This regulatory clarity is likely to build trust and confidence among consumers and businesses, encouraging broader adoption of AI technologies across various sectors.

Enhancing Safety and Accountability: Through stringent assessments and ongoing monitoring of high-risk AI systems, the Act improves safety and accountability. These requirements are designed to prevent harm and ensure that AI technologies are deployed ethically and responsibly.

Aligning with Global Standards: The EU AI Act aligns with global efforts to regulate AI, contributing to the creation of international standards. By spearheading AI regulation, the EU has the opportunity to shape global policies and practices, fostering a unified approach to AI governance.

Supporting Market Competitiveness: The Act promotes market competitiveness by ensuring that all AI developers and providers operate under the same regulatory standards. This level playing field helps prevent unfair advantages and encourages healthy competition, driving innovation and growth in the AI sector.


Conclusion


The EU AI Act marks a significant milestone in the regulation of artificial intelligence, establishing a thorough and future-oriented framework for AI development and deployment. Its gradual implementation, along with the use of delegated acts, codes of practice, and guidance, provides a balanced strategy that fosters innovation while protecting public interests. As global discussions continue on the transformative impact of AI, the EU AI Act stands as a key regulatory example, offering important lessons and insights for policymakers and stakeholders around the world.