Navigating the Impact of the AI Act on European Industry: Striking a Balance between Legal Certainty and Innovation
Enforcing the EU AI Act involves a multi-faceted approach that integrates regulatory oversight, industry self-regulation, and technological solutions. It is crucial that AI law on paper translates into AI law in action.
In this three-part series, we look at the history of the EU AI Act, the rationale behind it, and the measures put in place to ensure its effective implementation.
We then explore the challenges companies face in navigating this complex regulatory landscape, and how the AI Act could help or hinder European industries.
Next Steps for the EU AI Act: Governance Structure, Enforcement, and Industry Self-Regulation Mechanisms
“Even though companies may have their own frameworks, adhering to established standards and regulations is essential to ensure their approach is universally applicable and accepted.” - Dr. Andreas Hauser, CEO, AIQURIS.
At the European Union level, the European AI Office, together with the European AI Board comprising representatives from each member state, will administer and enforce the Act. At the national level, designated national competent authorities in the 27 member states will ensure that regulated entities follow the principles of the AI Act.
Moreover, “regulatory sandboxes” and “real-world testing” have been included to support the growth of small and medium-sized enterprises (SMEs) and start-ups. These offer a controlled setting in which AI systems can be developed, trained, and tested before they are placed on the market.
Under the AI regime, citizens have the right to seek redress and to receive explanations of decisions based on high-risk AI systems that affect their rights. Fines for violations are tiered: up to 7.5 million Euros or 1% of global annual turnover for supplying incorrect information to authorities, up to 15 million Euros or 3% for breaches of most other obligations, and up to 35 million Euros or 7% for prohibited AI practices, whichever amount is higher in each case. SMEs and start-ups benefit from capped fines: for them, whichever amount is lower applies.
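To make the tiered caps concrete, the following is a minimal sketch of the fine arithmetic as described above. The tier labels, function name, and example turnover figures are illustrative assumptions for this article, not an official calculator, and actual fines would be set by the competent authority within these maximums.

```python
# Illustrative sketch of the AI Act's tiered maximum-fine logic.
# Tier values reflect the published caps; names and structure are assumptions.
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # prohibited AI practices
    "other_obligation": (15_000_000, 0.03),       # most other breaches
    "incorrect_information": (7_500_000, 0.01),   # misleading info to authorities
}

def max_fine(tier: str, annual_turnover_eur: float, is_sme: bool) -> float:
    """Maximum possible fine for a violation tier.

    For most companies the cap is whichever of the fixed amount or the
    turnover percentage is HIGHER; for SMEs and start-ups it is whichever
    is LOWER.
    """
    fixed_cap, pct = FINE_TIERS[tier]
    turnover_cap = pct * annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A large provider with EUR 2bn turnover committing a prohibited practice:
print(max_fine("prohibited_practice", 2_000_000_000, is_sme=False))  # 140000000.0
# The same violation by an SME with EUR 10m turnover:
print(max_fine("prohibited_practice", 10_000_000, is_sme=True))      # 700000.0
```

As the example shows, the turnover-linked cap is what gives the regime teeth against large providers, while the SME rule keeps fines proportionate for smaller companies.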
Industry self-regulation mechanisms would complement regulatory efforts by fostering a culture of responsible AI development and deployment. Trade associations and industry consortia could establish voluntary codes of conduct or best practices that companies can adopt to demonstrate their commitment to ethical and trustworthy AI. These initiatives could include guidelines on data privacy, transparency, accountability, and fairness in AI algorithms. By voluntarily adhering to these standards, companies can not only mitigate the risk of regulatory penalties but also enhance their reputation and build trust with consumers. Moreover, industry self-regulation can facilitate knowledge sharing and collaboration among companies, enabling them to collectively address emerging challenges and ethical dilemmas in AI development.
In addition to regulatory oversight and industry self-regulation, technological solutions can play a pivotal role in enforcing the EU AI Act. Third-party AI-powered tools and systems can be deployed to monitor and analyse AI applications for compliance with regulatory requirements in real time. For instance, companies can leverage AI-based auditing software to conduct continuous assessments of their AI systems, identifying potential biases or risks that may arise during operation. By integrating such technological solutions into their AI governance frameworks, companies can proactively detect and address compliance issues, thereby reducing the likelihood of regulatory sanctions and enhancing trust in AI-driven technologies.
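What might such a continuous bias check look like in practice? Below is a minimal sketch of the kind of automated audit an auditing tool might run on a model's predictions. The metric chosen here (demographic parity gap) and the 10% review threshold are illustrative assumptions for this article, not requirements taken from the AI Act itself.

```python
# Minimal sketch of an automated bias check that an AI auditing tool
# might run continuously on model outputs. Metric and threshold are
# illustrative assumptions, not AI Act requirements.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates across groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

def audit(predictions, groups, threshold=0.10):
    """Flag the model for human review if the parity gap exceeds the threshold."""
    gap = demographic_parity_gap(predictions, groups)
    return {"parity_gap": round(gap, 3), "flagged": gap > threshold}

# Example: loan-approval predictions grouped by a protected attribute
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(audit(preds, groups))  # {'parity_gap': 0.2, 'flagged': True}
```

A production auditing system would track many such metrics over time and across data slices, but the principle is the same: quantify a risk, compare it against a governance threshold, and escalate to human oversight when the threshold is breached.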