Published June 6th, 2025

Regulating AI Before It's Too Late: Ethics, Risks, and the Global Governance Imperative

A deep dive into how ethics, risk, and global collaboration can steer AI toward responsible development with Raju Chellam.

By AIQURIS

Artificial Intelligence (AI) is accelerating faster than most regulatory frameworks can adapt. Its capabilities are redefining industries, economies, and even how we work, learn, and make decisions. Since 2010, the training computation of AI systems has doubled approximately every six months, underscoring the rapid pace of advancement [1]. But with this acceleration comes a sobering truth: the need for strong governance, ethical safeguards, and global cooperation has never been more urgent.

In a recent podcast, Dr. Martin Saerbeck, CTO and Co-founder of AIQURIS, and Raju Chellam, Editor-in-Chief of the AI Ethics & Governance Body of Knowledge, explored how we can balance innovation with existential risk. They addressed the limitations of current governance frameworks, the ethical dilemmas posed by generative AI, and the importance of global coordination to mitigate harm.

The Global Ethics Gap in AI Governance

Despite rising awareness, there remains a critical ethics gap in how nations regulate and monitor AI systems. Some jurisdictions have moved swiftly, as with the EU AI Act, while many others are still developing baseline frameworks and struggle to define enforcement mechanisms. Singapore has emerged as a leader with bold initiatives such as the Digital Enterprise Blueprint (DEB) and GPT-Legal, equipping businesses with the infrastructure to adopt AI responsibly. Spearheaded by Senior Minister of State Tan Kiat How, these frameworks prioritise ethical AI integration, cloud adoption, cybersecurity, and workforce upskilling, ensuring governance evolves alongside innovation [2].

As Raju Chellam noted in the podcast, “The hard part of AI governance is not defining principles. Most frameworks have the same list: fairness, transparency, accountability. The hard part is enforcement.”

Ethical AI must move beyond principles into practice. This means integrating impact assessments, audit mechanisms, and real-time risk tracking into every stage of AI development.

According to Singapore’s Minister for Digital Development and Information, Josephine Teo, “At some point, global frameworks are going to be critical. If you talk to businesses operating in multiple jurisdictions and in the digital domain, the porosity means that you are in many markets and have to deal with a different set of rules for each market that you go to. The lack of interoperability is a great impediment to business expansion. It is also difficult to ensure the citizens of these jurisdictions are protected to a comparable degree.” [3]

Existential Risk Isn’t Science Fiction

One of the most striking aspects of the podcast was the serious concern over existential risk. With the rise of foundation models, large language models (LLMs), and multi-agent systems, there is an increased possibility of cascading failures or manipulative behaviour emerging at scale, primarily because these systems operate with high autonomy, make opaque decisions, and are often trained on biased or misleading datasets. Watch the podcast for the entire conversation.

Raju Chellam shared insights about how deepfakes, automated abuse, and weaponised disinformation are already rampant. He emphasised that the risk isn’t just theoretical; it’s already here, especially when it comes to systems that amplify harm or evade oversight.

These risks demand systems built not just for accuracy but for accountability and resilience, qualities AIQURIS actively supports.

Why Current Governance Structures Fall Short

Many organisations depend on voluntary compliance or lean too heavily on tech-driven solutions like "red-teaming" and post-hoc audits. But this approach is reactive, not preventive. Worse, tech leaders often assume good intentions will be enough.

“We are trying to fix a governance problem with more technology,” Raju Chellam warned in the podcast.

AIQURIS addresses this by supporting end-to-end AI governance, from generating AI adoption agreements and regulatory requirements to tracking mitigation efforts in real time. This ensures AI systems don’t just meet compliance requirements but evolve with governance best practices. The platform provides a proactive framework, including real-time compliance tracking, AI risk profiling, and automated, audit-grade documentation aligned with international standards such as ISO/IEC 42001 and the NIST AI RMF.

Spotlight: AIQURIS AI Use Case Risk Profiling

AIQURIS enables organisations to classify and manage AI risks across six critical pillars: Safety, Security, Legal, Ethics, Performance, and Sustainability. This expert-led process structures AI risk exposure, aligning each use case with evolving regulatory demands and internal governance.

Risk-based profiling is essential in a landscape where AI adoption often outpaces ethical and operational readiness. It equips enterprises to decide which systems to scale, which to delay, and where targeted safeguards must be introduced.

The Call for a Global AI Governance Body

Both experts echoed the call for a global AI oversight organisation. Dr Saerbeck pointed out that without cross-border cooperation, even the most well-intentioned national frameworks will fall short.

Raju and Martin also discussed new models of stakeholder collaboration by bringing together ethicists, engineers, and impacted communities so that AI regulation is not just top-down, but co-created.

This aligns with global movements calling for an AI equivalent to the International Atomic Energy Agency (IAEA), a body that can monitor, enforce, and standardise how AI is governed at scale.

Final Thoughts

As AI’s capabilities expand, so does the responsibility of those who build, deploy, and regulate it. The discussion between Dr. Saerbeck and Raju Chellam is a powerful reminder that AI governance is not just about managing risk; it’s about safeguarding humanity’s future.

AIQURIS stands at the forefront of this mission, offering a complete AI governance stack that includes use case risk profiling, compliance requirement generation, governance maturity assessments, mitigation tracking, AI adoption agreements, and real-time monitoring. It’s not just about oversight; it’s about future-proofing AI. Contact us to learn how we can help you.

Watch the full podcast on YouTube today

Balancing Innovation & Regulation: Navigating the Ethics of AI w/ Raju Chellam and Martin Saerbeck

  1. Our World In Data
  2. Open Gov
  3. CNBC Africa
