Tier 1: Core Governance Standards

These establish the backbone of the LLM/AI Governance Ecosystem.

  • ISO/IEC 42001

  • NIST AI-100-1 (AI Risk Management Framework)

  • NIST AI-600-1 (Gen AI Risk Management Profile)

  • ISO/IEC 23894 (AI risk management)

  • ISO/IEC 23053 (framework for AI systems using ML)

  • ISO/IEC 38505 (Data Governance)

  • ISO 8000 (Information Quality)

  • ISO/IEC 27001 / 27701 (AI-adjacent governance)


Outcome: A complete management-system and risk spine.

ISO/IEC 42001: AI Management System

Classification
International Standard

 

Issuing Body
International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC)

 

Summary
ISO/IEC 42001 is the first international management system standard dedicated specifically to artificial intelligence. It establishes requirements for creating, implementing, maintaining, and continually improving an AI Management System that governs AI technologies across their lifecycle. Rather than prescribing technical model design, the standard focuses on organizational accountability, risk management, documented controls, and continuous oversight. It applies to any organization that develops, deploys, or uses AI systems and is designed to balance innovation with trust, ethics, regulatory alignment, and stakeholder confidence.

 

Primary Governance Focus

  • Organizational AI governance and accountability

  • Lifecycle risk and impact management

  • Documented controls and audit readiness

  • Continuous monitoring and improvement

 

Read Full Analysis (PDF)

ISO/IEC 23053: Framework for AI Systems Using Machine Learning

Classification
International Standard

 

Issuing Body
International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC)

 

Summary
ISO/IEC 23053 establishes a framework for describing artificial intelligence systems that use machine learning, covering the system components and their functions across the machine learning pipeline, from data acquisition and model engineering through deployment, operation, and continuous improvement. The standard provides a common, process-oriented vocabulary that enables organizations to manage ML-based AI systems systematically across their full lifecycle. Rather than focusing on ethical principles or risk controls in isolation, ISO/IEC 23053 supplies the structural backbone necessary to embed governance, risk management, and accountability consistently at each stage of AI system evolution.

 

Primary Governance Focus

  • End-to-end AI lifecycle process discipline

  • Integration of governance across lifecycle stages

  • Consistent handoffs between design, deployment, and operations

  • Traceability and process accountability


Read Full Analysis (PDF)

ISO/IEC 27001 & 27701: Information Security & Privacy Management

Classification
International Standards

 

Issuing Body
International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC)

 

Summary
ISO/IEC 27001 and ISO/IEC 27701 establish internationally recognized management system standards for information security and privacy governance. ISO/IEC 27001 defines requirements for implementing and maintaining an Information Security Management System, while ISO/IEC 27701 extends this framework to include privacy information management. In the context of artificial intelligence, these standards provide the trust boundary controls necessary to safeguard data, protect personal information, and enforce accountability across AI-enabled systems and processes.

 

Primary Governance Focus

  • Information security governance and controls

  • Privacy and personal data protection

  • Risk-based security and privacy management

  • Auditability and compliance readiness


Read Full Analysis (PDF)

NIST AI-100-1 (AI Risk Management Framework)

Classification
National Framework (Risk Management)

 

Issuing Body
National Institute of Standards and Technology (NIST), United States

 

Summary
The NIST AI Risk Management Framework (AI RMF) provides a structured approach for identifying, assessing, managing, and governing risks associated with artificial intelligence systems. It is designed to help organizations embed trustworthy AI characteristics such as fairness, reliability, transparency, and accountability throughout the AI lifecycle. Unlike prescriptive standards, the AI RMF offers flexible, outcome-oriented guidance applicable across sectors and organizational sizes. It complements formal management system standards by providing granular risk practices that can be operationalized within broader AI governance structures.

 

Primary Governance Focus

  • AI risk identification and categorization

  • Lifecycle-based risk measurement and monitoring

  • Governance and accountability mechanisms

  • Trustworthy AI outcomes and controls

 

Read Full Analysis (PDF)

ISO/IEC 23894: AI Risk Management

Classification
International Standard

 

Issuing Body
International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC)

 

Summary
ISO/IEC 23894 provides structured guidance for identifying, assessing, treating, and monitoring risks specific to artificial intelligence systems across their lifecycle. It extends general risk management principles into AI-specific contexts, addressing technical, organizational, and societal risks associated with AI design, deployment, and use. The standard is intended to be used alongside AI management systems and governance standards, enabling organizations to operationalize AI risk management in a consistent and auditable manner. It applies across sectors and AI system types, supporting scalable and context-aware risk governance.

 

Primary Governance Focus

  • AI-specific risk identification and treatment

  • Lifecycle-aligned risk controls

  • Integration with organizational risk management

  • Continuous monitoring and escalation

 

Read Full Analysis (PDF)

ISO 8000: Data and Information Quality

Classification
International Standard

 

Issuing Body
International Organization for Standardization (ISO)

 

Summary
ISO 8000 is a family of international standards focused on ensuring the quality, integrity, and governance of data and information across organizational systems. It establishes requirements and principles for defining, measuring, managing, and maintaining information quality throughout its lifecycle. In the context of artificial intelligence, ISO 8000 provides the foundational governance framework for the data and information that AI systems rely on and produce, making it a critical enabler of trustworthy, auditable, and ethically governed AI.

 

Primary Governance Focus

  • Data and information quality governance

  • Information lifecycle management

  • Traceability, accuracy, and completeness controls

  • Integration of information quality into enterprise governance


Read Full Analysis (PDF)

ISO/IEC 38505: Governance of Data

Classification
International Standard

 

Issuing Body
International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC)

 

Summary
ISO/IEC 38505 provides guiding principles and a governance model for governing data as a strategic organizational asset. It applies the governance principles and model of ISO/IEC 38500 (IT governance) specifically to data, helping governing bodies direct, monitor, and evaluate how data is used, protected, and leveraged across the enterprise. The standard applies to organizations of any size or sector and is foundational for aligning data governance with organizational strategy, risk management, and regulatory compliance.

 

Primary Governance Focus

  • Strategic alignment of data governance with organizational goals

  • Accountability and decision rights for data use

  • Risk management and conformance to legal obligations

  • Effective, ethical, and value-driven use of data


Read Full Analysis (PDF)

NIST AI-600-1 (Generative AI Profile)

Classification
Governmental Risk Management Profile (Companion to NIST AI RMF)

 

Issuing Body
National Institute of Standards and Technology (NIST), U.S. Department of Commerce

 

Summary
NIST AI-600-1 is a companion profile to the NIST AI Risk Management Framework (AI RMF 1.0) that provides targeted guidance on risks unique to generative artificial intelligence (GenAI) and suggested actions for managing them. Designed for voluntary, cross-sector use, the GenAI Profile helps organizations integrate trustworthiness principles into the design, deployment, use, and evaluation of GenAI systems. It identifies risks unique to or exacerbated by generative AI and maps suggested actions to the AI RMF’s four functions: Govern, Map, Measure, and Manage.

 

Primary Governance Focus

  • GenAI-specific risk identification and characterization

  • Tailored risk measurement, mitigation, and documentation

  • Alignment with organizational risk priorities and AI RMF functions

  • Lifecycle and governance adaptation for GenAI systems

 

Read Full Analysis (PDF)

Tier 2: Ethical and Normative Guidelines

These instruments define normative expectations, values, and societal objectives that Tier 1 standards operationalize through governance, risk, lifecycle, and control mechanisms.

  • OECD AI Principles

  • UNESCO Recommendation on the Ethics of AI

  • IEEE Ethically Aligned Design


Outcome: A clear mapping from ethics to governance controls.

OECD AI Principles

Classification
International Ethical Guideline

 

Issuing Body
Organisation for Economic Co-operation and Development (OECD)

 

Summary
The OECD AI Principles establish internationally endorsed ethical guidelines for the responsible development and use of artificial intelligence. Adopted by OECD member states and additional partner countries, the principles emphasize human-centered values, fairness, transparency, robustness, and accountability. While not a technical or compliance standard, the OECD AI Principles provide a normative foundation that informs national policy, regulatory frameworks, and organizational AI governance strategies worldwide.

 

Primary Governance Focus

  • Human-centered and rights-respecting AI

  • Fairness, transparency, and accountability

  • Responsible innovation and risk awareness

  • Institutional and societal trust in AI


Read Full Analysis (PDF)

UNESCO Recommendation on the Ethics of Artificial Intelligence

Classification
International Ethical Guideline

 

Issuing Body
United Nations Educational, Scientific and Cultural Organization (UNESCO)

 

Summary
The UNESCO Recommendation on the Ethics of Artificial Intelligence is a globally adopted normative framework that establishes ethical principles and policy guidance for the responsible development and use of AI. Endorsed by all UNESCO Member States, it emphasizes human rights, human dignity, inclusion, sustainability, and global equity. Unlike technical standards, the Recommendation provides a comprehensive ethical and societal lens intended to guide national policies, institutional governance, and international cooperation in AI.

 

Primary Governance Focus

  • Human rights and human dignity in AI

  • Equity, inclusion, and social impact

  • Transparency, accountability, and oversight

  • Global responsibility and sustainability


Read Full Analysis (PDF)

IEEE Ethically Aligned Design (EAD)

Classification
International Ethical Guideline

 

Issuing Body
IEEE Standards Association (IEEE SA), through the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

 

Summary
IEEE Ethically Aligned Design (EAD) is a normative and engineering-oriented guidance framework that promotes the design and deployment of autonomous and intelligent systems that prioritize human well-being, human rights, and accountable governance. EAD provides principles and recommendations intended to move ethics from abstract commitments into practical considerations for system design, data agency, transparency, and accountability. It is not a certification standard, but it has influenced related IEEE ethics standards efforts and provides a structured ethical reference for organizations implementing trustworthy AI programs.

 

Primary Governance Focus

  • Human well-being and rights-centered AI

  • Data agency and responsible data stewardship

  • Transparency, accountability, and governance design

  • Ethical systems engineering and organizational adoption


Read Full Analysis (PDF)

Tier 3: Regulatory Instruments

These instruments introduce legal obligations and enforcement mechanisms.

  • EU AI Act

  • U.S. Executive Orders and federal guidance


Outcome: Governance-to-compliance translation

EU AI Act (Regulation (EU) 2024/1689)

Classification
Regulatory Framework (Binding EU Regulation)

 

Issuing Body
European Union (European Parliament and Council)

 

Summary
The EU AI Act establishes a harmonised, risk-based legal framework governing the placing on the market, putting into service, and use of AI systems in the EU. It prohibits certain AI practices, sets requirements for high-risk AI systems, introduces transparency obligations for specific AI uses, and creates rules for general-purpose AI models. The Act applies broadly, including to providers and deployers outside the EU when AI outputs are used in the Union.


Primary Governance Focus

  • Risk-tier classification and prohibited practices

  • High-risk system requirements and conformity obligations

  • Transparency duties and documentation traceability

  • Governance bodies, enforcement, and penalties readiness
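The Act's risk-based structure can be illustrated as a simple triage over four tiers. In the sketch below, the tier names follow the Regulation's four-level structure, but the example use cases and the lookup logic are simplified, hypothetical assumptions for illustration only, not a substitute for legal classification under the Act's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified view of the EU AI Act's risk-based tiers."""
    PROHIBITED = "prohibited practice"         # banned outright
    HIGH_RISK = "high-risk system"             # conformity obligations apply
    TRANSPARENCY = "transparency obligations"  # disclosure duties (e.g. chatbots)
    MINIMAL = "minimal risk"                   # no specific obligations

# Hypothetical, illustrative mapping of use cases to tiers; real
# classification requires legal analysis of the Regulation.
EXAMPLE_TIERS = {
    "social scoring by public authorities": RiskTier.PROHIBITED,
    "cv screening for recruitment": RiskTier.HIGH_RISK,
    "customer service chatbot": RiskTier.TRANSPARENCY,
    "spam filtering": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case; default to MINIMAL when unlisted (illustrative only)."""
    return EXAMPLE_TIERS.get(use_case.lower(), RiskTier.MINIMAL)

print(classify("CV screening for recruitment").value)  # high-risk system
```

The point of the sketch is the tiered, obligation-bearing structure itself: compliance work begins by locating each system in a tier, because the applicable duties follow from that placement.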


Read Full Analysis (PDF)

U.S. Executive Order Framework on AI Policy (EO 14179 and the Dec. 11, 2025 federal-preemption order)

Classification
Regulatory Instrument (Executive Orders, binding within the U.S. Executive Branch)

 

Issuing Body
The White House (Executive Office of the President, United States)

 

Summary
This U.S. executive order framework reshapes federal AI governance by revoking Executive Order 14110 and directing the Executive Branch to reduce barriers to AI innovation while establishing a national policy posture that seeks to prevent fragmented state-by-state AI regulation. EO 14179 (January 23, 2025) initiates the revocation and review of actions taken under EO 14110 and directs alignment of federal guidance with the new policy direction. A subsequent executive order (December 11, 2025) establishes a federal policy stance favoring a minimally burdensome national AI framework and creates a litigation task force to challenge state AI laws deemed inconsistent with that policy.

 

Primary Governance Focus

  • Federal policy direction and revocation of prior AI governance actions

  • Preemption posture toward state AI regulation and enforcement strategy

  • Executive-branch governance alignment and policy harmonization

  • Regulatory risk, compliance volatility, and institutional readiness


Read Full Analysis (PDF)

Tier 4: Implementation, Capability, & Reference Frameworks

Supportive frameworks, maturity models, and practical guides that help organizations operationalize the higher-tier standards and obligations.

  • DAMA-DMBOK

  • DCAM


These frameworks carry no formal normative authority, but they serve as pragmatic bridges from Tier 1 standards and Tier 3 obligations to meaningful implementation.

DAMA-DMBOK: Data Management Body of Knowledge

Classification
Global Data Management Framework

 

Issuing Body
DAMA International

 

Summary
The DAMA-DMBOK (Data Management Body of Knowledge) is a globally recognized framework of best practices for enterprise data management and governance. It defines core principles, terminology, roles, and processes across multiple data management domains, including governance, quality, architecture, and lifecycle functions. The framework serves as a foundational reference for organizations seeking to structure and operationalize data management discipline, ensuring alignment with strategy, compliance, and business value realization. DAMA-DMBOK complements formal standards by providing a comprehensive body of knowledge rather than prescriptive requirements.

 

Primary Governance Focus

  • Enterprise-wide data governance and stewardship

  • Knowledge areas spanning the lifecycle and quality

  • Roles, responsibilities, and standardized terminology

  • Linking data practices to organizational strategy


Read Full Analysis (PDF)

DCAM: Data Management Capability Assessment Model

Classification
Industry Benchmark / Capability Maturity Framework

 

Issuing Body
EDM Council

 

Summary
DCAM (Data Management Capability Assessment Model) is a best-practice framework for assessing, benchmarking, and enhancing an organization's data management capabilities. It provides a structured model to evaluate maturity across data management domains, particularly governance, quality, privacy, and architecture, and is frequently employed by regulated industries to align data practices with strategic value, compliance, and risk management. DCAM offers a capability and maturity lens rather than normative requirements, enabling organizations to map their current state, identify gaps, and plan improvement roadmaps for data governance and management.

 

Primary Governance Focus

  • Capability maturity assessment

  • Structured governance and quality evaluation

  • Alignment of data practices with business objectives

  • Practical improvement pathways
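The current-state/target-state gap analysis described above can be sketched in a few lines. The domain names echo the summary on this page; the five-point scale and the scores themselves are hypothetical assumptions for illustration, not DCAM's actual scoring model.

```python
# Hypothetical maturity scores on a 1-5 scale (DCAM's real scoring
# model is more granular; these values are illustrative only).
current = {"governance": 2, "quality": 3, "privacy": 2, "architecture": 4}
target = {"governance": 4, "quality": 4, "privacy": 4, "architecture": 4}

def maturity_gaps(current: dict, target: dict) -> dict:
    """Return per-domain gaps, largest first, to help prioritize a roadmap."""
    gaps = {domain: target[domain] - current[domain] for domain in target}
    return dict(sorted(gaps.items(), key=lambda kv: kv[1], reverse=True))

print(maturity_gaps(current, target))
# governance and privacy show the widest gaps (2 levels each)
```

Sorting by gap size is the design point: the assessment output doubles as a prioritized improvement backlog rather than a static scorecard.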


Read Full Analysis (PDF)

AI Governance Cross-Walk Matrix: Standards, Regulations & Guidelines
A comprehensive mapping of standards, regulations, and guidelines to key AI governance and lifecycle functions.
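As a minimal sketch, such a cross-walk can be represented as a mapping from instruments to the governance functions they address. The entries below are distilled from the "Primary Governance Focus" bullets on this page and are a small illustrative subset, not the full matrix.

```python
# Illustrative cross-walk: instrument -> governance functions it addresses.
# A hypothetical subset distilled from this page's summaries.
CROSSWALK = {
    "ISO/IEC 42001": {"accountability", "lifecycle risk", "audit readiness"},
    "NIST AI RMF": {"risk identification", "lifecycle risk", "accountability"},
    "ISO/IEC 23894": {"risk identification", "lifecycle risk"},
    "EU AI Act": {"risk classification", "transparency", "enforcement"},
    "ISO 8000": {"data quality", "traceability"},
}

def instruments_covering(function: str) -> list[str]:
    """List the instruments that address a given governance function."""
    return sorted(name for name, funcs in CROSSWALK.items() if function in funcs)

print(instruments_covering("lifecycle risk"))
# ['ISO/IEC 23894', 'ISO/IEC 42001', 'NIST AI RMF']
```

Queried in this direction, the matrix answers the practical question a governance team actually asks: for a given function or obligation, which standards supply the relevant controls, and where does coverage overlap or thin out.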
