Decoding the Debate: To Regulate AI or Not? An Insight into the EU AI Act

In a landmark decision in June 2023, the European Parliament voted to adopt its negotiating position on the European Commission’s proposal for the EU Artificial Intelligence Act, a significant advance in the regulation of artificial intelligence (AI) within the EU. This step continues broader European efforts to protect individuals’ rights and personal data, in the tradition of the General Data Protection Regulation (GDPR). While recognizing the many advantages AI offers in daily life, EU representatives underscored the need for binding rules to address potential risks, including threats to privacy and security, a lack of transparency in AI systems, and bias and discrimination, all of which endanger fundamental rights protected by the EU Charter of Fundamental Rights and the European Convention on Human Rights.

As the final version of the EU AI Act is being negotiated, it is essential to explore AI regulations both within the EU and globally. This exploration involves understanding the adoption process of the EU AI Act, its scope, and the requirements it imposes on AI system providers and users. By examining AI governance alongside the EU’s commitment to data protection, we can better appreciate the complex regulatory framework aimed at balancing AI innovation with the protection of individual rights.

Global Perspectives on AI Regulation

There is growing recognition worldwide of the need to regulate AI. Until recently, the United States had no mandatory AI regulations, relying only on the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework as a voluntary guide for responsible AI development. On October 30, 2023, the U.S. President issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, requiring developers of the most powerful AI systems to share their safety test results and other critical information with the government. Meanwhile, China’s Cyberspace Administration has adopted interim measures regulating generative AI services, and the UK is developing a framework designed to encourage innovation while addressing AI-related risks.

Internationally, the Organisation for Economic Co-operation and Development (OECD) adopted a non-binding Recommendation on Artificial Intelligence in 2019, and UNESCO adopted its Recommendation on the Ethics of Artificial Intelligence in 2021. The Council of Europe is currently working on an international convention to regulate artificial intelligence.

On November 1, 2023, global leaders convened at Bletchley Park in the UK for the AI Safety Summit, where they signed the Bletchley Declaration on AI safety. The declaration highlights the “urgent need to understand and collectively manage potential risks through a new joint global effort.” Follow-up meetings have been scheduled in South Korea and France to continue this collaborative effort.

 

Scope of the EU AI Act

 

In recent years, setting rules for artificial intelligence has been a major focus for European Union representatives. While earlier documents such as the White Paper on Artificial Intelligence, Ethics Guidelines for Trustworthy AI, and Policy and Investment Recommendations were non-binding, policymakers have recognized the need for enforceable regulations due to the rapid advancement of AI technology. The European Commission proposed the EU AI Act to establish conditions for AI systems within the EU market, prevent market fragmentation, encourage the development of AI while safeguarding citizens’ rights, and ensure compliance with these regulations.

The Act seeks to impose risk-based conditions on AI systems, categorizing them according to the level of risk they pose to society. Systems posing an unacceptable risk will be prohibited outright, high-risk systems will face strict requirements and controls, and lower-risk systems will be subject to lighter transparency obligations.

Additionally, the Act aims to legally define AI systems, despite the lack of a universally accepted definition in the scientific community.

 

Who Will Be Affected by the EU AI Act?

 

The proposed regulations primarily target AI system providers within the EU and extend to those from third countries that either introduce AI systems to the EU market or use them within EU borders. This approach combines both territorial and extraterritorial applicability, similar to the GDPR.

To prevent evasion, the regulations will also cover providers and users of AI systems in third countries whenever the output produced by those systems is used within the EU. However, the draft regulation does not apply to AI systems developed or used exclusively for military purposes, nor to public authorities in third countries or international organizations that use AI systems under international agreements on law enforcement and judicial cooperation.
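
Read as a decision rule, the Act’s territorial scope can be sketched in code. The following Python snippet is a minimal illustration rather than the Act’s actual legal test: it assumes only the three triggers described above (establishment in the EU, placing a system on the EU market, and EU use of a system’s output) and deliberately ignores the exemptions.

    from dataclasses import dataclass

    @dataclass
    class AISystemContext:
        provider_in_eu: bool        # provider established within the EU
        placed_on_eu_market: bool   # system introduced to the EU market
        output_used_in_eu: bool     # system's output used within EU borders

    def draft_act_applies(ctx: AISystemContext) -> bool:
        """Simplified scope test mirroring the draft's triggers (exemptions not modeled)."""
        return ctx.provider_in_eu or ctx.placed_on_eu_market or ctx.output_used_in_eu

    # A third-country provider whose system's output is used in the EU is still covered:
    print(draft_act_applies(AISystemContext(False, False, True)))  # True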

 

Risk Assessment in Focus: What Category Does Your Business Fall Into?

 

The draft AI Act employs a risk-based approach, tailoring legal measures to the level of risk associated with different AI systems. To implement this strategy, the Act classifies AI systems into four categories based on their risk profiles, each of which carries its own regulatory measures:

  • Unacceptable risk: prohibited AI practices
  • High risk: regulated high-risk AI systems
  • Limited risk: transparency obligations
  • Low or minimal risk: no obligations
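
To make the tiering concrete, the sketch below models the four categories and their headline measures as a simple lookup in Python. It is purely illustrative rather than a compliance tool: the categories and measures follow the list above, while the example systems are hypothetical assignments chosen for demonstration.

    from enum import Enum

    class RiskLevel(Enum):
        UNACCEPTABLE = "prohibited AI practices"
        HIGH = "regulated high-risk AI system"
        LIMITED = "transparency obligations"
        MINIMAL = "no obligations"

    # Hypothetical examples mapped to the draft's categories (illustrative only).
    EXAMPLE_SYSTEMS = {
        "social scoring tool used by a public authority": RiskLevel.UNACCEPTABLE,
        "CV-screening software used in recruitment": RiskLevel.HIGH,
        "customer-service chatbot": RiskLevel.LIMITED,
        "spam filter": RiskLevel.MINIMAL,
    }

    for system, level in EXAMPLE_SYSTEMS.items():
        print(f"{system}: {level.name} -> {level.value}")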

 

1. Unacceptable Risk

 

AI systems classified as presenting an unacceptable risk—meaning they pose a clear and significant threat to individuals’ rights and safety—will be banned from the EU market. This category includes:

  • AI systems that use harmful manipulative techniques, such as subliminal messaging;
  • AI systems targeting particularly vulnerable groups (e.g., individuals with physical or mental disabilities);
  • AI systems employed by public authorities, or on their behalf, for social scoring purposes;
  • Real-time remote biometric identification systems in publicly accessible areas for law enforcement, with exceptions for limited cases such as post-incident identification for serious crimes, contingent upon court approval.

 

2. High Risk

 

AI systems that could affect people’s rights but present a lower level of risk than those in the unacceptable category are deemed high-risk. Examples include AI systems used in products regulated under EU safety legislation (e.g., toys, aviation, automobiles, medical devices, elevators). This category also includes eight specific areas, which the European Commission may update as needed. Currently, these areas are:

  • Biometric identification and categorization of individuals;
  • Management and operation of critical infrastructure;
  • Education and vocational training;
  • Employment, worker management, and access to self-employment;
  • Access to and enjoyment of essential private and public services;
  • Law enforcement;
  • Migration, asylum, and border control management;
  • Administration of justice and democratic processes.

 

High-risk AI systems will be subject to the following requirements:

  • Mandatory registration in an EU database for providers subject to EU rules, or self-assessment of compliance for providers not governed by EU rules.
  • Compliance with regulations on cybersecurity, data protection, and risk management.
  • Non-EU providers must designate a representative within the EU to ensure compliance with the AI Act.

 

3. Limited Risk

AI systems with limited risk must adhere to basic transparency standards to allow users to make informed decisions. Users should be notified when interacting with AI, including systems that generate or modify content such as deepfakes.

 

4. Low or Minimal Risk

AI systems classified as low or minimal risk can be developed and used in the EU without additional legal obligations. However, the proposed AI Act suggests creating codes of conduct to encourage providers of non-high-risk AI systems to voluntarily adopt requirements similar to those for high-risk systems [1].

 

Stalemate Over Foundation Models: France, Germany, and Italy Challenge EU’s Draft AI Legislation

 

France, Germany, and Italy are at the center of a dispute over the regulation of “foundation models”: advanced AI systems known for their broad capabilities and wide range of applications. These countries argue that stringent rules for such models could hinder Europe’s progress in AI technology, and they propose a framework that promotes innovation and competition, supporting European firms such as Mistral in France and Aleph Alpha in Germany. Their approach relies on self-regulation through company commitments and codes of conduct. Critics warn, however, that this could leave powerful and potentially dangerous AI systems governed by unenforceable rules. The stark difference in views has created a deadlock that threatens the overall negotiation process for the Artificial Intelligence Act.

 

Governance, Enforcement, and Sanctions

 

While negotiations continue regarding the mandatory nature of the AI Act’s provisions and their implementation, the draft proposes the following measures:

  • Member States must appoint competent authorities, including a national supervisory authority, to enforce the regulations.
  • The European Artificial Intelligence Board will be established at the EU level.
  • National market surveillance authorities will monitor compliance with obligations for high-risk AI systems, with access to confidential information. They will enforce corrective measures (e.g., prohibition, restriction, withdrawal, or recall of AI systems) for non-compliance, with Member States intervening in case of persistent issues.
  • Administrative fines of up to €30 million or 6% of total worldwide annual turnover, whichever is higher, may be imposed for violations, with enforcement responsibilities falling to Member States (an illustrative calculation follows this list).
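
For a sense of scale, here is a minimal sketch of how that fine ceiling works, assuming the draft’s “whichever is higher” rule; the turnover figure is hypothetical.

    def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
        """Ceiling on administrative fines under the draft AI Act:
        the higher of EUR 30 million or 6% of worldwide annual turnover."""
        return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)

    # A hypothetical company with EUR 2 billion in annual turnover:
    # 6% of turnover is EUR 120 million, which exceeds the EUR 30 million floor.
    print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 120,000,000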

 

Upcoming discussions on December 6, 2023, will focus on resolving key issues, including:

  • Exemptions for law enforcement, particularly regarding facial recognition technologies used by national authorities;
  • Regulation of foundation models; and
  • Governance and enforcement challenges [2].

 

Despite existing disagreements, enacting binding AI legislation remains crucial to protect citizens’ fundamental rights while fostering technological advancement within the EU and beyond.

 

_______________

[1] European Parliament: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

[2] Euronews: https://www.euronews.com/next/2023/10/23/eu-ai-act-nearing-agreement-despite-three-key-roadblocks-co-rapporteur