Are You Ready for the EU AI Act?

Artificial intelligence has consistently been a major topic of public interest due to its potential and the implications it has for citizens’ rights and protections. It was only a matter of time before comprehensive legal regulations were established to oversee this area, and that moment has now arrived with the adoption of the European Union Artificial Intelligence Act!

After extensive negotiations, the European Parliament and the Council of the European Union reached a political agreement on the AI Act in December 2023, and the Council adopted the final text on May 21, 2024.

The implementation of this EU initiative is already in progress. In January 2024, the European Commission established the AI Office, and an official website for the AI Act has been launched to provide information about this groundbreaking regulation.

The text of the AI Act is expected to be published soon in the Official Journal of the EU and will enter into force twenty days after its publication. The first obligations, such as the bans on prohibited AI practices, are expected to begin applying six months after entry into force, around the end of 2024. Organizations that use or plan to use AI systems should familiarize themselves with the new requirements introduced by the AI Act and assess their AI systems’ compliance to ensure readiness for the forthcoming rules.

What Are Artificial Intelligence Systems?

The AI Act sets forth rules for AI systems, including prohibitions and specific requirements for certain types of these systems, as well as measures to support innovation, particularly for small and medium-sized enterprises.

To understand the scope of the Act, it’s essential to first identify what constitutes an AI system. Instead of delving into the complex legal definition of “AI systems,” here are some examples:

  • Recommendation Systems: Used by platforms like Netflix or Spotify to suggest movies, TV shows, or music based on user preferences and history.
  • Autonomous Vehicles: Self-driving cars that use sensors to make decisions about navigation, speed, and avoiding obstacles.
  • Chatbots and Virtual Assistants: Conversational agents like Siri, Alexa, or customer service chatbots that provide responses and recommendations based on user interactions.
  • Content Generation Tools: AI systems that produce text, images, or videos, such as GPT-3 for text generation or tools for creating artwork or music.
  • Smart Home Devices: Devices like smart thermostats or security cameras that learn from user behavior to optimize home settings or alert owners to unusual activities.
  • Biometric Systems: Systems that use personal characteristics for authentication or identification.
  • AI Systems for Recruitment: Used for analyzing and filtering job applications.

These examples represent just a fraction of the many types of AI systems in development.

Who Is Covered by the AI Act?

The AI Act encompasses a wide range of entities, including:

  • AI Providers: Organizations that develop AI systems, either independently or in collaboration with others.
  • AI Deployers: Entities that use AI systems in a professional capacity, i.e., for any purpose other than personal, non-professional activity.
  • Importers and Distributors: Parties involved in bringing AI systems into the EU market and distributing them within it.
  • Product Manufacturers: Entities that place products incorporating AI systems on the market under their own name or trademark.
  • Individuals Affected by AI Systems: Persons located within the EU who are impacted by AI systems.

If an individual or organization falls into any of these categories, they are subject to the AI Act and its associated obligations.

Similar to the GDPR, the AI Act has both territorial and extraterritorial reach. This means that, while it is an EU regulation, its provisions can apply to individuals and companies outside the EU under certain conditions.

Specifically, the AI Act applies to:

  • Providers in non-EU countries who place AI systems on the EU market or put them into service within the EU.
  • Providers and deployers in third countries whose AI system outputs are used within the EU.

For example, an IT company based in Serbia (or any non-EU country) that develops an AI recruitment software solution for use in the EU will be required to comply with the AI Act. The extent of these obligations will depend on the risk level associated with the AI system, as outlined below.

With some exceptions (e.g., systems used solely for military, defense, and research purposes), the AI Act applies to:

  • Both the private and public sectors,
  • Entities within and outside the EU,
  • A broad range of stakeholders involved in the development, deployment, and use of AI systems, including providers, importers, distributors, and users.

The AI Act’s Approach to Regulating Artificial Intelligence

The AI Act employs a risk-based framework for regulating artificial intelligence. This means that the obligations and legal requirements are proportionate to the level of risk an AI system presents to individual rights and societal freedoms.

The AI Act classifies risks into four levels, with specific obligations assigned to each level (a short code sketch of this tiering follows the list below). These risk levels are:

  1. Unacceptable Risk: AI systems that pose significant threats to rights and safety are prohibited.
  2. High Risk: AI systems with substantial risks to individuals’ rights are subject to stringent regulations.
  3. Limited Risk: AI systems that present lower risks must adhere to basic transparency requirements.
  4. Minimal Risk: AI systems deemed to have minimal risk are largely unregulated.
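
For teams building internal compliance tooling, this tiering can be captured in a small data model. The sketch below is purely illustrative: the RiskLevel enum and the shortened duty lists are our own summary, not the Act’s exhaustive requirements.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict requirements apply
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # largely unregulated

# Simplified, non-exhaustive summary of the duties each tier attracts.
DUTIES = {
    RiskLevel.UNACCEPTABLE: ["must not be placed on the EU market"],
    RiskLevel.HIGH: [
        "risk management system",
        "data quality and governance",
        "technical documentation and logging",
        "human oversight",
        "registration and conformity assessment",
    ],
    RiskLevel.LIMITED: ["inform users they are interacting with AI"],
    RiskLevel.MINIMAL: ["voluntary codes of conduct"],
}

# Look up the duty list for a high-risk system.
print(DUTIES[RiskLevel.HIGH])
```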
 

Categories of AI Systems Under the AI Act

  1. Unacceptable Risk: AI systems deemed to pose unacceptable risks are prohibited by the Act. Examples include:

    • Voice-activated toys that promote harmful behavior in children.
    • Social scoring systems that classify individuals based on their behavior, socio-economic status, or personal attributes.
    • Real-time remote biometric identification systems in publicly accessible spaces, such as facial recognition, with narrow exceptions.

  2. High Risk: For AI systems categorized as high risk, the Act stipulates various requirements. These include risk management, data quality and security, transparency, performance documentation, maintenance and updates, human oversight, and accuracy. Providers and deployers of high-risk systems must adhere to obligations such as registration, quality management, system testing, monitoring, record-keeping, and incident reporting. High-risk systems encompass those used in sectors like education, employment, essential services, law enforcement, border control, and justice administration.

  3. Limited Risk: The AI Act also addresses limited-risk systems, which are subject to transparency requirements. This category includes, for instance, customer support chatbots that offer automated responses to user questions.

  4. Minimal Risk: AI systems classified as minimal risk are not subject to specific regulations under the Act. Examples include email filters used for spam detection. However, EU authorities encourage voluntary adherence to the AI Act’s regulations and measures for these lower-risk systems.

Conclusion: Higher Risk Equals Greater Obligations and Consequences

The AI Act establishes that as the risk associated with an AI system increases, so do the obligations and penalties for non-compliance.

The Act also specifically addresses general-purpose artificial intelligence models (GPAI): versatile models capable of performing a wide array of tasks, regardless of how they are placed on the market or integrated into different applications (the model behind the popular ChatGPT is a well-known example). For GPAI models, the Act imposes specific requirements, chiefly on providers, including the following (a hypothetical sketch of a training-data summary follows the list):

  • Providing detailed technical documentation to the supervisory authority upon request.
  • Informing downstream users about the model’s capabilities, limitations, and construction, and publicly sharing a sufficiently detailed summary of the content used for training.
  • Ensuring robust cybersecurity measures and conducting regular evaluations of the model.
  • Appointing an EU-based representative for providers from outside the EU who place their AI systems on the EU market.
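
The Act requires the training-data summary but does not prescribe its exact format, so the structure below, including every field name and the example values, is purely our illustrative assumption of what such a record might contain.

```python
from dataclasses import dataclass

@dataclass
class TrainingDataSummary:
    """Hypothetical shape for a public training-data summary;
    the field names are illustrative, not mandated by the Act."""
    model_name: str
    data_sources: list[str]     # e.g., web crawls, licensed corpora
    collection_period: str
    modalities: list[str]       # text, images, audio, ...
    filtering_steps: list[str]  # deduplication, PII removal, ...

summary = TrainingDataSummary(
    model_name="example-gpai-model",
    data_sources=["public web crawl", "licensed news archive"],
    collection_period="2019-2023",
    modalities=["text"],
    filtering_steps=["deduplication", "PII removal"],
)
print(summary)
```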

 

For further details on the classification of AI systems by risk, their obligations, and regulatory measures, please refer to our blog, “Exploring the EU AI Act.”

Compliance Timeline

The AI Act will take effect 20 days after its publication in the Official Journal of the European Union, which is anticipated to occur in June 2024.

The Act becomes fully applicable 24 months after it enters into force, but certain provisions follow different timelines: the bans on prohibited practices apply after six months, the GPAI rules after twelve months, and some high-risk obligations only after 36 months. The sketch below works out these key dates from a hypothetical publication date.
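
A minimal sketch of the date arithmetic, assuming a placeholder publication date (the real date depends on the Official Journal) and a simplified month-addition helper:

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date (day clamped to 28 to stay valid)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

# Hypothetical publication date in the Official Journal (placeholder only).
publication = date(2024, 6, 15)

# Entry into force: 20 days after publication.
entry_into_force = publication + timedelta(days=20)

milestones = {
    "Entry into force": entry_into_force,
    "Prohibitions apply (+6 months)": add_months(entry_into_force, 6),
    "GPAI obligations apply (+12 months)": add_months(entry_into_force, 12),
    "General application (+24 months)": add_months(entry_into_force, 24),
    "Certain high-risk rules (+36 months)": add_months(entry_into_force, 36),
}

for label, day in milestones.items():
    print(f"{label}: {day.isoformat()}")
```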

 

How to Prepare for the Implementation of the AI Act

Given the complexity of the AI Act and its numerous new requirements, it is crucial for entities and organizations to take a proactive approach to compliance. This involves implementing the necessary measures ahead of the regulation’s application.

To ensure readiness, we recommend the following steps:

  1. Map Your AI Systems: Begin by conducting a comprehensive inventory of all AI systems you are currently developing, marketing, using, or planning to use.

    During this process, identify the intended users of these systems, their functionalities, and your organization’s role (whether as developer, user, or marketer within the EU). Additionally, assess which risk category each AI system falls into to determine if the AI Act applies to your organization. A simple inventory record is sketched below.
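
One lightweight way to keep such an inventory is a structured record per system. Everything here, from the field names to the example entry, is an illustrative assumption rather than a format the Act prescribes:

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    our_role: Role
    intended_users: list[str]
    risk_level: str          # e.g., "high"; see the tiering sketch earlier
    placed_on_eu_market: bool

# Hypothetical entry: a CV screener deployed by an HR team.
inventory = [
    AISystemRecord(
        name="cv-screening-tool",
        purpose="analyze and filter job applications",
        our_role=Role.DEPLOYER,
        intended_users=["HR team"],
        risk_level="high",   # employment is a high-risk area under the Act
        placed_on_eu_market=True,
    ),
]
print(inventory[0])
```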

  2. Understand Your Obligations: Once you have identified which of your AI systems are covered by the AI Act, familiarize yourself with the specific obligations imposed by the Act. The nature of these obligations will vary based on your role and the risk level associated with each AI system.

  3. Conduct a Compliance Assessment: Perform a detailed evaluation to identify gaps between your current practices and the AI Act’s requirements. This assessment should include the following (a toy gap check is sketched after the list):

    • Identifying discrepancies between your current operations and the new legal requirements.
    • Pinpointing gaps and areas needing improvement within your organization or AI systems.
    • Developing a compliance plan that includes a risk assessment, measures to address identified risks, documentation requirements, and strategic changes with deadlines (such as adapting, withdrawing, or redesigning AI systems).
    • Determining the resources needed to implement the compliance plan.
    • Creating a legal framework for AI governance tailored to your business.
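
    As a toy illustration of such a gap analysis, one could diff the controls a high-risk system needs against those already in place; the control names below are simplified placeholders, not the Act’s official list:

```python
# Hypothetical gap analysis: compare controls a high-risk system should
# have against the controls an organization has already implemented.

REQUIRED_HIGH_RISK_CONTROLS = {
    "risk management", "data governance", "technical documentation",
    "record keeping", "human oversight", "accuracy and robustness testing",
}

implemented = {"data governance", "record keeping"}  # example current state

# Set difference yields the controls still missing.
gaps = REQUIRED_HIGH_RISK_CONTROLS - implemented
for control in sorted(gaps):
    print(f"GAP: {control} not yet implemented")
```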
  4. Implement Security Measures: After creating your compliance plan, implement it along with basic risk mitigation strategies. For high-risk AI systems, this may involve the following (a minimal logging sketch follows the list):

    • Establishing human oversight mechanisms.
    • Implementing transparency measures to inform users about AI decision-making processes.
    • Developing risk management procedures, including data quality testing and documentation.
    • Enhancing cybersecurity to protect against potential threats and establishing protocols for security breaches.
    • Conducting training and awareness programs to familiarize employees with AI Act requirements.
    • Investing in AI compliance expertise.

    These measures will help you meet regulatory requirements, safeguard your systems, enhance your reputation, and build trust with users and stakeholders.
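
    To make the record-keeping and human-oversight points concrete, here is a minimal sketch of a decision log with a human-review flag; the system name, field names, and routing rule are all hypothetical:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_decisions")

def record_decision(system: str, inputs: dict, output: str,
                    needs_review: bool) -> None:
    """Append a structured record of an automated decision to the log."""
    log.info(
        "system=%s time=%s output=%s human_review=%s inputs=%s",
        system,
        datetime.now(timezone.utc).isoformat(),
        output,
        needs_review,
        inputs,
    )

record_decision(
    system="cv-screener",
    inputs={"applicant_id": "A-123"},
    output="shortlist",
    needs_review=True,  # route borderline cases to a human reviewer
)
```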

     

  5. Develop AI Policies: Now is the time to create or update comprehensive internal guidelines and policies related to AI integration within your organization, regardless of your industry or role.

    Collaborate with departments such as IT, security, legal, and human resources to develop or revise policies. Key considerations include:

    • Role-based access controls for AI systems.
    • Ownership and intellectual property rights related to input data.
    • Confidentiality measures for AI development and use.
    • Data protection protocols if personal data is processed by AI systems.
    • Procedures to ensure the accuracy, quality, and legality of AI systems.
    • Defining responsibilities and consequences for non-compliance with policies.

  6. Stay Informed on AI Regulation: To effectively prepare for the AI Act’s implementation, it’s essential to stay updated on regulatory developments and practices related to AI. Monitor updates from supervisory authorities, guidelines, and best practices to adjust your compliance strategy proactively.

Consequences of Violating the AI Act

The penalties for non-compliance with the AI Act can be substantial, potentially exceeding those for GDPR violations.

Violations of the AI Act can result in severe fines, with the most stringent penalties applying to breaches of the prohibitions on unacceptable-risk AI practices. These fines can reach up to EUR 35 million or 7% of global annual turnover, whichever is higher.

For non-compliance with other provisions of the Act, fines may amount to EUR 15 million or 3% of global annual turnover, whichever is greater. The short sketch below shows how these caps combine in practice.
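
A minimal sketch of the “whichever is higher” rule, assuming a hypothetical company turnover; the function name and structure are our own:

```python
def max_fine(turnover_eur: float, violation: str) -> float:
    """Upper bound on AI Act fines: the higher of a fixed cap or a
    percentage of worldwide annual turnover (simplified sketch)."""
    caps = {
        "prohibited_practices": (35_000_000, 0.07),
        "other_obligations": (15_000_000, 0.03),
    }
    fixed, pct = caps[violation]
    return max(fixed, pct * turnover_eur)

# A company with EUR 600M global turnover: 7% = EUR 42M > the EUR 35M cap.
print(max_fine(600_000_000, "prohibited_practices"))  # 42000000.0
```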

With member states, including Spain, already establishing supervisory bodies and the EU AI Office up and running, authorities are actively preparing for the enforcement of the AI Act.

To avoid facing these significant penalties and reputational damage, it is crucial to ensure that your business practices related to AI development, testing, deployment, and use are fully compliant with the AI Act before its enforcement begins.