The Fragmented World of AI Governance
13 frameworks, one regulatory floor
The organizations that will lead in AI governance over the next five years are not the ones waiting for a dominant standard to emerge. They are the ones building governance capability now: while the space is still fragmented, while the frameworks are still evolving, while the window to get ahead of regulatory pressure is still open.
This is the first in a ten-part series examining the state of AI governance: what the frameworks actually say, what the EU AI Act actually requires, and what organizations that take this seriously are already doing.
The Framework Landscape
Open any directory of AI governance frameworks and you will find a landscape that looks, at first glance, like chaos. AIGN, AI SAFE² / AISM, AgentID, Microsoft Responsible AI Maturity Model, ARC Framework, Agentic Governance Framework v2.1, Singapore IMDA, MI9, AI Gateway, GaaS, WEF, Microsoft Frontier Governance Framework, Digital Applied. Thirteen frameworks and counting, each with different authors, different scopes, different maturity claims, and different definitions of what “governance” actually means.
This is the surface objection we hear most often: *how can we build governance when we cannot even agree on which framework to use?*
But this framing misunderstands what the frameworks are, and what the EU AI Act is.
The EU AI Act is not a governance framework. It is a legal floor. It defines what is mandatory, what is prohibited, what is enforceable, and what the consequences of non-compliance are. The governance frameworks (AIGN, AISM, NIST AI RMF, ISO 42001) are implementation substrates. They help organizations organize their thinking, structure their controls, and demonstrate due diligence. None of them replaces the EU AI Act. All of them can be positioned against it.
Understanding this distinction is itself a competitive advantage. Most organizations are either ignoring governance frameworks entirely (treating the EU AI Act as the only game in town) or treating governance frameworks as if one of them will become the SOC 2 equivalent for AI. Neither position is correct.
What the Frameworks Actually Measure
The thirteen frameworks are not interchangeable. They differ across six dimensions that matter for organizations making decisions about where to invest governance resources.
Maturity: Early (draft or in-progress), Structured (formal but voluntary), Operational (active tooling or product), or Mature (broad adoption with formal certification path). AIGN and Microsoft RAI Maturity Model sit at the operational or mature end. Singapore IMDA and the WEF framework sit at the early end despite their institutional backing.
Scope: Policy-only frameworks describe what should be documented. Technical control frameworks specify how controls should work. Certification frameworks add a trust label or formal certifiable path. Full lifecycle frameworks cover design through run through retire. AIGN and AISM target full lifecycle; AI Gateway targets technical controls with a commercial product.
Agentic AI Specificity: Generic frameworks address AI systems in general. Explicit frameworks add specific provisions for AI agents without redesigning the framework around them. Agentic-native frameworks are designed from the ground up for autonomous reasoning, planning, and action. Singapore IMDA and MI9 are agentic-native. Microsoft RAI Maturity Model is explicit. Most older frameworks are generic.
Regulatory Alignment: Some frameworks are mapped to the EU AI Act and ISO 42001 as reference points. Some are benchmarked against multiple standards. AIGN explicitly maps to EU AI Act, GDPR, NIS2, and DORA. AISM maps to NIST AI RMF, ISO 42001, EU AI Act, CSA AICM, NIST CSF 2.0, MITRE ATLAS, and OWASP LLM. Other frameworks make no regulatory claim at all.
Runtime Governance: This is the most significant dimension of differentiation and the least understood. Most governance frameworks operate at design time or policy time: they tell you what controls to put in place before a system goes live. MI9 is the only framework with formal runtime governance: it operates continuously during live execution using FSM-based temporal conformance and statistical goal-conditioned drift detection. The gap between design-time governance and runtime governance is where most agentic AI failures occur.
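To make the design-time vs. runtime distinction concrete, here is a minimal sketch of an FSM-style temporal conformance check. This is an illustration of the general technique only, not MI9's actual implementation; all state names, event names, and transitions below are hypothetical.

```python
# Minimal FSM-based runtime conformance check (illustrative sketch;
# states, events, and transitions are hypothetical, not MI9's).
ALLOWED = {
    ("idle", "plan"): "planning",
    ("planning", "act"): "acting",
    ("acting", "observe"): "idle",
}

def check_trace(events, state="idle"):
    """Return the first non-conformant (state, event) pair, or None if the trace conforms."""
    for event in events:
        nxt = ALLOWED.get((state, event))
        if nxt is None:
            return (state, event)  # violation: event not permitted in this state
        state = nxt
    return None

print(check_trace(["plan", "act", "observe"]))  # conformant agent trace
print(check_trace(["plan", "plan"]))            # replanning without acting: flagged
```

The point of the sketch is the *when*, not the *what*: the check runs against the live event stream as the agent executes, rather than against documentation written before deployment.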
Certification: None (self-assessment only), trust label (AIGN offers this), or formal certifiable (ISO 42001 is the closest to a certifiable path for AI governance). Most frameworks offer neither.
The crosswalk that most organizations find useful is between the EU AI Act, NIST AI RMF, and ISO 42001. These three are not competitors: they are complementary. ISO 42001 provides the management system backbone (certifiable). NIST AI RMF provides the operational risk methodology inside those management system clauses. The EU AI Act provides the legal overlay for systems in scope. Approximately 70–80% of controls serve all three frameworks simultaneously; evidence collected once maps to all three.
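The "evidence collected once" idea can be sketched as a simple crosswalk structure. The control names and mappings below are hypothetical illustrations, not an official mapping between the three frameworks.

```python
# Hypothetical control-to-framework crosswalk (illustrative names only).
CROSSWALK = {
    "risk_register":   {"ISO 42001", "NIST AI RMF", "EU AI Act"},
    "model_inventory": {"ISO 42001", "NIST AI RMF", "EU AI Act"},
    "human_oversight": {"NIST AI RMF", "EU AI Act"},
    "mgmt_review":     {"ISO 42001"},
}

def shared_controls(crosswalk):
    """Controls whose evidence serves all three frameworks at once."""
    return [control for control, frameworks in crosswalk.items()
            if len(frameworks) == 3]

print(shared_controls(CROSSWALK))  # evidence here only needs collecting once
```

In practice the crosswalk lives in a GRC tool or spreadsheet rather than code, but the structure is the same: one control, many framework citations.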
Why the Fragmentation Is a Signal, Not a Problem
The proliferation of frameworks is not a sign that AI governance is immature. It is a sign that the problem is large enough that many serious organizations are trying to solve it from different angles. The EU AI Act’s existence changes the terms of the conversation.
With the EU AI Act as the regulatory floor, the question is no longer “which framework should we adopt?” It is “which framework helps us demonstrate compliance with our binding obligations in the most operationally efficient way?”
Organizations that anchor to the EU AI Act first, then use governance frameworks as implementation tools, find that the framework selection question becomes much more tractable. The answer is usually: use ISO 42001 as the structural backbone, use NIST AI RMF as the operational risk methodology, and use the EU AI Act’s specific obligations as the checklist that determines what else must be added.
This is the approach that organizations with existing ISO certifications typically take: ISO 42001 aligns closely with Article 17 of the EU AI Act (the quality management system requirement), although certification alone does not establish compliance. If you are already ISO 42001 certified, you have the structural foundation. The remaining work is the gap analysis: what do your obligations under the EU AI Act require that your current ISO 42001 implementation does not yet cover?
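That gap analysis is, at its core, a set difference. A minimal sketch, with obligation and control names that are hypothetical shorthand rather than the Act's actual taxonomy:

```python
# Hypothetical gap analysis: EU AI Act obligations vs. controls an
# existing ISO 42001 implementation already covers. Names are shorthand.
AI_ACT_OBLIGATIONS = {"risk_mgmt", "data_governance", "logging",
                      "human_oversight", "qms"}
ISO42001_COVERED = {"risk_mgmt", "qms", "logging"}

# Everything the Act requires that the current implementation does not cover.
gaps = sorted(AI_ACT_OBLIGATIONS - ISO42001_COVERED)
print(gaps)
```

The real exercise is of course richer (each gap carries an article reference, an owner, and a deadline), but the logic is exactly this subtraction.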
What You Cannot Do Without Understanding the Landscape
Organizations that treat AI governance as a compliance checkbox (implement a policy, file it, move on) will find themselves in a difficult position when enforcement accelerates in August 2026. The EU AI Act’s enforcement powers become fully operational at that point, and the national authorities responsible for high-risk AI system obligations will begin examining organizations in their jurisdictions.
More importantly, organizations that do not understand the framework landscape cannot have an informed conversation with their board, their legal team, or their technical leadership about where they actually stand. Governance requires a shared vocabulary. The vocabulary is not “are we compliant?”; it is “where are we on maturity, what is our scope, what agentic specificity do we have, what regulatory alignment have we achieved, and do we have runtime governance or only design-time controls?”
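That shared vocabulary can even be written down as a data structure, which forces each question to get an explicit answer. The fields mirror the six dimensions discussed above; the enum values and the example record are hypothetical.

```python
from dataclasses import dataclass

# A hypothetical self-assessment record over the six dimensions.
# Field values are illustrative, not a formal taxonomy.
@dataclass
class GovernanceAssessment:
    maturity: str               # "early" | "structured" | "operational" | "mature"
    scope: str                  # "policy" | "technical" | "certification" | "lifecycle"
    agentic_specificity: str    # "generic" | "explicit" | "agentic-native"
    regulatory_alignment: list  # e.g. ["EU AI Act", "ISO 42001"]
    runtime_governance: bool    # continuous runtime controls, or design-time only
    certification: str          # "none" | "trust-label" | "certifiable"

# A plausible starting point for many organizations today.
current = GovernanceAssessment(
    maturity="structured",
    scope="policy",
    agentic_specificity="generic",
    regulatory_alignment=["EU AI Act"],
    runtime_governance=False,
    certification="none",
)
```

A record like this is something a board, a legal team, and a technical lead can all read and argue about, which is the point of having the vocabulary in the first place.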
The frameworks exist to answer those questions systematically. The EU AI Act exists to make the answers matter legally.
The Right First Question
The question to ask is not “which framework should we use?” It is “what is our current state across the dimensions that matter, and what do we need to build to meet our binding obligations?”
Answering that question correctly requires understanding what each of the six dimensions actually means in your organization; not in the abstract, but in the specific context of what you are building, how your agents operate, what data you access, and who your deployers are.
That is where the real work begins. And it begins with an honest assessment, not a framework selection.
Next: Why the EU AI Act Applies to You — Even If Your HQ Is Not in the EU
Download our free EU AI Act vs. NIST vs. ISO 42001 Quick Reference Decision Guide