The European Commission has highlighted the importance of a human-centric approach to the deployment of Artificial Intelligence (AI) systems as a condition of their safe and ethical use. This approach requires that AI technologies be designed to benefit humanity, upholding human rights and dignity through sustained human oversight. Specifically, the proposal mandates that AI systems be designed to allow effective human control or intervention, so that the system’s decisions and outputs can be reviewed by a human. This implies that AI systems should not function as “black boxes” but should instead be transparent and understandable, allowing an average person to comprehend how the system operates and how specific outputs are generated.
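To make the requirement concrete, the following is a minimal sketch of what such a reviewable, human-in-the-loop design could look like in practice. Nothing in it comes from the Act itself: the names, the confidence threshold, and the workflow are hypothetical, and a real high-risk system would additionally need audit logging, authenticated reviewers, and a far richer review interface.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    APPROVE = "approve"
    REJECT = "reject"


@dataclass
class ModelOutput:
    """Hypothetical output of a high-risk AI system.

    Alongside the recommendation itself, the system exposes its
    confidence and a human-readable rationale, so that an overseer
    can actually review the decision rather than face a black box.
    """
    recommendation: Decision
    confidence: float
    rationale: str


def human_review(output: ModelOutput) -> Decision:
    # Placeholder for a real review workflow (queue, UI, audit trail).
    print(f"Recommendation: {output.recommendation.value} "
          f"(confidence {output.confidence:.2f})")
    print(f"Rationale: {output.rationale}")
    verdict = input("Overseer decision [approve/reject]: ").strip().lower()
    return Decision.APPROVE if verdict == "approve" else Decision.REJECT


def review_gate(output: ModelOutput, threshold: float = 0.90) -> Decision:
    """Route consequential outputs through a human before they take effect.

    Low-confidence outputs and all adverse (rejecting) recommendations
    are escalated to a human overseer, who may override the system.
    """
    if output.confidence < threshold or output.recommendation is Decision.REJECT:
        return human_review(output)
    return output.recommendation
```

The essential point of the sketch is structural: the system’s recommendation is advisory until a human either lets it stand or overrides it, and the exposed rationale is what makes that review meaningful rather than a rubber stamp.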
However, the Act’s provisions may create a legal grey area by shifting responsibility from the users of AI systems (acting as overseers) to the designers of those systems. This ambiguity could invite legal challenges, as human overseers might argue that the regulations were never intended to apply to them.
Moreover, the Act offers little guidance on the specific responsibilities of human overseers. Despite its emphasis on human oversight as a means of mitigating the risks of harmful algorithms, it leaves unclear what effective human oversight actually requires. Ideally, pairing algorithms with human oversight would balance the accuracy and consistency of algorithmic decision-making against the nuanced judgment of human intervention. The proposal intends human oversight to minimize or prevent infringements of fundamental rights by high-risk AI systems, yet the absence of clear directives on the overseer’s role and responsibilities undermines this human-centric approach.
For instance, the proposal does not specify when an individual affected by a high-risk AI system has the right to request human intervention. Should human oversight be initiated by the overseer alone, or also at the request of the affected individual? Nor is it clear when a human overseer is obliged to step in to protect citizens from the risks an AI system poses. Although some responsibility is placed on human overseers to possess the necessary competence and authority, the Act does not address their role in interpreting the system’s outputs or interrupting its operation when necessary.
In summary, while human oversight is intended to act as a safeguard against the negative impacts of AI systems, the current framework may not deliver effective protection. If human intervention is to safeguard rights in practice, the proposal would benefit from more explicit requirements that AI systems be understandable and transparent by design. This would help make these safeguards both meaningful and enforceable, and would prevent AI systems from operating as opaque “black boxes.”