Navigating the Impact of the AI Act on European Industry: Striking a Balance between Legal Certainty and Innovation
The advent of Artificial Intelligence (AI) brings promise, as well as peril, to industries across Europe. As companies rush to adopt AI technologies to gain a competitive edge, concerns about legal certainty, hindered innovation, and proper implementation loom large.
In this three-part series, we look at the history of the EU AI Act, the rationale behind it, and the measures put in place to ensure its effective implementation.
We then explore the challenges companies face in navigating this complex regulatory landscape, and how the AI Act could help or hinder European industries.
The European Union's AI Act: A Regulatory Milestone
Background and Rationale
Artificial Intelligence could contribute more than US$15 trillion to the global economy over the next 10 years, yet it remains largely unregulated, with attendant potential for security and data breaches. The EU AI Act aims to establish precise guidelines and responsibilities for those who develop and deploy AI systems, and to ban AI applications that pose unacceptable risks.
It is a comprehensive legal framework that addresses common concerns about potential risks associated with AI, such as privacy infringements, unethical practices, discrimination, and threats to fundamental rights. Simultaneously, it aims to alleviate administrative and financial pressures on businesses, particularly small and medium-sized enterprises (SMEs).
A brief look at the timeline: political agreement on the European Union Artificial Intelligence Act (EU AI Act) was reached by the European Parliament and the Council of the European Union on December 9, 2023, more than two years after the Act was first proposed, and after three days of intense negotiations to incorporate human oversight of AI (accounting for the advent of OpenAI’s ChatGPT and other generative AI systems).
Subsequently, on February 2, 2024, the Act received unanimous endorsement from EU member states, overcoming earlier delays and debates.
Next steps involve formal adoption and implementation, including the establishment of an AI Office, regulatory sandboxes, and an expert advisory group to ensure alignment with other EU regulations. With this, the EU becomes one of the first jurisdictions in the world to comprehensively regulate AI.
Key Provisions of the EU AI Act
The Act follows a “risk-based” approach, categorising AI systems according to the risks they pose (a simple illustrative triage of these tiers is sketched after the list):
- Unacceptable Risk: This category includes AI systems such as social scoring, real-time biometric identification, biometric categorisation, and cognitive manipulation. Such systems will be banned outright due to their high potential for harm. The Act carves out narrow exceptions, notably for law-enforcement use of real-time biometric identification, permitted only in cases of very serious crimes and subject to judicial approval.
- High Risk: Includes AI systems used in critical domains such as medical devices and education, as well as systems falling under the EU’s product safety legislation. These must meet strict requirements, including high-quality data sets, human oversight, logging, and detailed documentation.
- Specific Transparency Risk: Users must be informed when they interact with AI systems such as chatbots. Providers must label deep fakes and other AI-generated content, notify users when biometric categorisation or emotion recognition systems are in use, and make synthetic content detectable in a machine-readable format.
- Minimal Risk: The majority of commonly used AI applications fall into this category. No specific limitations are placed on their use, but voluntary codes of conduct are encouraged.
- General Purpose and Generative AI: This category encompasses systems like OpenAI’s ChatGPT. Transparency obligations apply to these systems, including adherence to EU copyright law and the publication of summaries of the content used for training.
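To make the tiering concrete, here is a minimal, illustrative Python sketch of how a compliance team might begin triaging an internal AI portfolio against these categories. This is not a legal assessment tool: the `AISystem` record, its attribute names, and the decision order are simplifying assumptions of our own, and actual classification under the Act turns on detailed statutory definitions and legal advice.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    TRANSPARENCY = "specific transparency"
    MINIMAL = "minimal"


@dataclass
class AISystem:
    """Hypothetical record of the facts a triage team might collect per system."""
    name: str
    social_scoring: bool = False
    realtime_biometric_id: bool = False
    cognitive_manipulation: bool = False
    product_safety_component: bool = False   # falls under EU product safety legislation
    used_in_health_or_education: bool = False
    interacts_with_users: bool = False       # e.g. chatbots
    generates_synthetic_content: bool = False  # e.g. deep fakes


def classify(system: AISystem) -> RiskTier:
    # Banned practices take precedence over every other tier.
    if (system.social_scoring
            or system.realtime_biometric_id
            or system.cognitive_manipulation):
        return RiskTier.UNACCEPTABLE
    # Critical domains and product-safety-regulated systems are high risk.
    if system.product_safety_component or system.used_in_health_or_education:
        return RiskTier.HIGH
    # Chatbots and synthetic content carry disclosure and labelling duties.
    if system.interacts_with_users or system.generates_synthetic_content:
        return RiskTier.TRANSPARENCY
    # Everything else defaults to the lightest tier.
    return RiskTier.MINIMAL


if __name__ == "__main__":
    bot = AISystem(name="customer-support-chatbot", interacts_with_users=True)
    print(bot.name, "->", classify(bot).value)
    # customer-support-chatbot -> specific transparency
```

The one design point the Act itself implies is the ordering: banned practices are checked first, then high-risk criteria, then transparency duties, with everything else defaulting to minimal risk.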