AI governance — DMDU approach

The AI system under management is itself adaptive — governance must be too

ISO 42001 gives us the AI control structure. ISO 38507 provides board-level governance principles. ISO 23894 covers AI risk management. The EU AI Act creates regulatory obligations. The NIST AI RMF provides the US framework. Here's how they connect through DMDU (Decision Making under Deep Uncertainty).

38507 Board governance

ISO/IEC 38507 — Governance implications of AI

Principles for boards: accountability, transparency, predictability, sustainability. Maps to our tier 2 cluster selection and ABM outcome metrics — board-level questions become scenario discovery targets.

23894 AI risk management

ISO/IEC 23894 — AI risk management

Extends ISO 31000 to AI-specific risks: emergent behaviour, data dependency, opacity. Our scenario discovery replaces the AI risk matrix — with even stronger justification than for information security.

EU AI Regulation

EU AI Act — Regulation 2024/1689

Risk classification (unacceptable, high-risk, limited, minimal), conformity assessment, transparency obligations. Creates compliance dependency couplings and defines the failure consequence space.

NIST Framework

NIST AI RMF 1.0 — AI Risk Management Framework

Govern–Map–Measure–Manage structure with profiles and tiers. Cross-maps to 42001 domains and provides the US regulatory alignment pathway.

ISO/IEC 38507

Board-level AI governance — from principles to testable scenarios

ISO 38507 defines governance principles for boards overseeing AI: accountability, transparency, predictability, and sustainability. These are principle-level objectives, not operational controls. Our DMDU approach makes them operationally testable.

Each principle maps to specific 42001 controls and ABM outcome metrics:

| 38507 principle | 42001 controls | ABM outcome metric | Scenario discovery question |
| --- | --- | --- | --- |
| Accountability | A.3.2 (roles), A.5.5 (documentation), A.10.2 (responsibilities) | Accountability chain completeness | Under what conditions does the accountability chain break? (staff turnover × documentation lag × supplier opacity) |
| Transparency | A.8.2 (documentation), A.8.3 (communication), A.8.4 (reporting) | Transparency control effectiveness | When does transparency become window-dressing? (complexity × time pressure × audience capability) |
| Predictability | A.6.4 (testing), A.6.5 (V&V), A.6.8 (monitoring) | Prediction reliability under distribution shift | How much distribution shift before the monitoring system fails to detect degradation? |
| Sustainability | A.3.4 (risk mgmt), A.5.2 (impact assessment), A.9.3 (oversight) | Long-term system-effectiveness trajectory | Is this AI system sustainable under 3-year budget/talent/regulatory scenarios? |

Board-level AI governance becomes testable when principles map to ABM parameters: "Are we accountable?" becomes "Under what staff-turnover and documentation-lag conditions does our accountability chain break?"
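As a concrete illustration, the accountability question can be posed as a parameter sweep. The failure model below, its weights, and its threshold are illustrative assumptions for the sketch, not calibrated values from the ABM:

```python
from itertools import product

def accountability_chain_intact(staff_turnover, documentation_lag, supplier_opacity):
    # Toy failure model (assumed weights and threshold, not calibrated):
    # the chain holds while combined stress stays below 0.45.
    stress = 0.5 * staff_turnover + 0.3 * documentation_lag + 0.2 * supplier_opacity
    return stress < 0.45

# Coarse sweep over the three uncertain parameters, each normalised to [0, 1].
grid = [0.0, 0.25, 0.5, 0.75, 1.0]
failures = [(t, d, s) for t, d, s in product(grid, grid, grid)
            if not accountability_chain_intact(t, d, s)]

# Scenario discovery then asks: which region of (turnover, lag, opacity)
# space do these failure points occupy?
print(f"{len(failures)} of {len(grid)**3} combinations break the chain")
```

The board question "are we accountable?" becomes a computable boundary in this parameter space rather than a yes/no attestation.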
ISO/IEC 23894

AI risk management — scenario discovery replaces the AI risk matrix

ISO 23894 extends ISO 31000 risk management to AI. It identifies AI-specific risk characteristics: emergent behaviour, data dependency, opacity, and adversarial vulnerability. Traditional implementation uses a risk register with likelihood and impact estimates. Our approach replaces every step:

| 23894 step | Traditional approach | DMDU approach | Tool |
| --- | --- | --- | --- |
| Risk identification | Brainstorm AI-specific threats | Map coupling pathways: data dependency chains, lifecycle sequencing gaps, governance tensions | AI coupling → |
| Risk analysis | Estimate likelihood × impact | Simulate AI control degradation: how does training data quality failure cascade through model performance into deployment risk into stakeholder harm? | ABM simulation → |
| Risk evaluation | Rank risks, compare to appetite | Scenario discovery (PRIM/CART): identify which parameter combinations (data quality × model complexity × oversight staffing) produce AI system failures | BehaviorSpace → |
| Risk treatment | Select 42001 controls | Design control configurations robust across the AI risk scenario space — test mitigations in the ABM before implementation | AI coupling → |

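The risk-analysis step, degradation cascading from data quality through model performance into harm, can be sketched as a minimal difference-equation model. All coefficients here are illustrative assumptions, not ABM parameters:

```python
def cascade(data_quality, steps=12):
    # Toy coupling chain (assumed coefficients): poor data quality erodes
    # model performance, which raises deployment risk, which accrues harm.
    model_perf, deploy_risk, harm = 0.9, 0.1, 0.0
    trace = []
    for _ in range(steps):
        model_perf = max(0.0, model_perf - 0.15 * (1.0 - data_quality))
        deploy_risk = min(1.0, deploy_risk + 0.2 * (1.0 - model_perf))
        harm = min(1.0, harm + 0.1 * deploy_risk)
        trace.append((model_perf, deploy_risk, harm))
    return trace

good = cascade(data_quality=0.95)   # well-curated pipeline
bad = cascade(data_quality=0.40)    # upstream quality failure
```

Running both trajectories shows the compound effect: the same twelve steps accumulate far more harm when data quality fails, which a single likelihood × impact estimate cannot express.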
Traditional AI risk (23894)

Enumerate → estimate → rank

Lists AI-specific threats (bias, hallucination, adversarial attack), assigns probabilities, ranks by expected harm. Misses compound failures: data quality decline + model drift + oversight gap occurring simultaneously. Cannot capture emergent bias that appears only at scale.

DMDU AI risk

Map → explore → discover tipping points

Maps AI control coupling structure, explores computationally across the parameter space, discovers which combinations produce system-level failures. The "data quality threshold × deployment scale × monitoring sensitivity" tipping point is computable, not guessable.

EU AI Act

Regulatory compliance under deep uncertainty

The EU AI Act (Regulation 2024/1689) creates a risk-based regulatory framework with four tiers. Each tier maps to our AIMS coupling model with different control activation patterns:

| Tier | Scope | AIMS model mapping |
| --- | --- | --- |
| Unacceptable | Prohibited AI practices: social scoring, real-time biometric surveillance, subliminal manipulation | Scope exclusion — not modellable |
| High-risk | Annex III systems: healthcare, credit, recruitment, law enforcement, critical infrastructure | Full 42001 + high-risk cluster weights |
| Limited risk | Transparency obligations: chatbots, deepfakes, emotion recognition | A.8.2, A.8.3, A.8.4 amplified |
| Minimal risk | Most AI systems: spam filters, games, inventory optimisation | Baseline controls, enterprise cluster |

The EU AI Act creates three additional coupling dynamics that extend our model:

| Coupling | Mechanism | 42001 controls affected |
| --- | --- | --- |
| Compliance dependency | High-risk classification triggers mandatory conformity assessment, technical documentation, and post-market monitoring — creating a cascade of control activation that doesn't exist for lower-risk systems | A.5.2–A.5.5, A.6.4–A.6.6, A.8.2–A.8.4 |
| Risk classification cascade | An AI system reclassified from limited to high-risk (due to new use case, regulatory interpretation change, or incident) triggers retroactive compliance obligations across the entire control set | All controls — reclassification is a system-wide shock event in the ABM |
| Supply chain propagation | Provider obligations propagate through the AI value chain: providers → deployers → distributors. Each party inherits compliance obligations based on their role | A.10.2–A.10.5 |

The EU AI Act's phased implementation (2024–2027) is itself a source of deep uncertainty. Organisations building AIMS today are designing for regulatory requirements that have not yet been fully specified — the definitional case for DMDU over compliance checklists.
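The reclassification cascade can be sketched as a shock event applied to the control state. The control IDs, baseline scores, and shock size below are illustrative assumptions, not the Act's actual Annex mapping:

```python
# Obligations that become mandatory under a high-risk classification
# (illustrative subset for the sketch).
HIGH_RISK_OBLIGATIONS = {"A.5.2", "A.5.3", "A.6.4", "A.6.5", "A.8.2", "A.8.4"}

def reclassification_shock(effectiveness, shock=0.3):
    # Every active control is now judged against the stricter high-risk
    # target, so measured effectiveness drops system-wide at once ...
    after = {c: max(0.0, e - shock) for c, e in effectiveness.items()}
    # ... and previously dormant obligations activate at zero effectiveness.
    newly_mandatory = HIGH_RISK_OBLIGATIONS - set(effectiveness)
    after.update({c: 0.0 for c in newly_mandatory})
    return after

before = {"A.5.2": 0.8, "A.8.2": 0.7}   # limited-risk baseline (toy values)
after = reclassification_shock(before)
```

In the ABM this is the "system-wide shock event" of the table above: one regulatory decision simultaneously degrades every measured control and expands the control set that must recover.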
NIST AI RMF 1.0

Govern–Map–Measure–Manage cross-mapping

The NIST AI Risk Management Framework structures AI governance around four core functions. Each maps to specific ISO 42001 controls and DMDU analytical tools:

| NIST function | Purpose | 42001 mapping | DMDU application |
| --- | --- | --- | --- |
| GOVERN | Culture, policies, roles, accountability structures for AI risk | A.2.2–A.2.3 (policy), A.3.2–A.3.4 (organisation), A.9.2 (responsible use) | Governance controls form the foundation tier in the coupling model; their degradation cascades through all downstream controls |
| MAP | Context, scope, risks associated with AI systems | A.4.2–A.4.6 (resources), A.5.2–A.5.5 (impact assessment), A.7.2–A.7.3 (data quality/provenance) | Mapping produces the coupling discovery inputs: system inventory, data dependencies, and impact assessment findings that parameterise the ABM |
| MEASURE | Quantify and track AI risks, performance, trustworthiness | A.6.4–A.6.5 (testing/V&V), A.6.8 (monitoring), A.9.5 (use monitoring) | Measurement controls feed the ABM's adaptive threat and drift mechanisms — measurement quality determines how quickly degradation is detected |
| MANAGE | Prioritise, respond to, and recover from AI risks | A.3.4 (risk management), A.6.6–A.6.7 (release/deployment), A.10.5 (notifications) | Management controls are the ABM's investment and audit mechanisms — their effectiveness determines whether the system recovers from shocks or spirals into failure |

The NIST framework's "profiles" concept (current profile vs target profile with gaps) maps directly to our client calibration tier: the client profile captures the current state, the ABM simulates trajectories toward the target, and scenario discovery identifies conditions that prevent the organisation from reaching its target profile.
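That current-profile vs target-profile calibration can be sketched as a gap computation plus a toy trajectory check. The function scores, growth rate, and drift rate below are illustrative assumptions, not real client calibration data:

```python
def profile_gaps(current, target):
    # NIST-style gap analysis: where, and by how much, the current
    # profile falls short of the target profile.
    return {f: target[f] - current.get(f, 0.0)
            for f in target if target[f] > current.get(f, 0.0)}

def reaches_target(start, goal, growth, drift, months=36):
    # Toy monthly trajectory: steady improvement minus degradation drift.
    level = start
    for _ in range(months):
        level = min(1.0, level + growth - drift)
        if level >= goal:
            return True
    return False

current = {"GOVERN": 0.6, "MAP": 0.4, "MEASURE": 0.3, "MANAGE": 0.5}
target = {"GOVERN": 0.8, "MAP": 0.7, "MEASURE": 0.7, "MANAGE": 0.6}
gaps = profile_gaps(current, target)

# Scenario discovery asks: at what drift rate does MEASURE stop converging?
on_track = reaches_target(current["MEASURE"], target["MEASURE"],
                          growth=0.03, drift=0.01)
stalled = reaches_target(current["MEASURE"], target["MEASURE"],
                         growth=0.03, drift=0.03)
```

The interesting output is not either single trajectory but the drift threshold between them: the condition that prevents the organisation from reaching its target profile.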

Integration

One model, four AI governance lenses

| Standard | What it adds | Primary tool | Key coupling types |
| --- | --- | --- | --- |
| ISO 42001 | The base: 38 AI controls, 6 coupling types, 4 AI sector clusters | AI coupling → | All 6 types |
| ISO 38507 | Board governance principles → scenario discovery targets for the ABM | ABM simulation → | governance-tension, info-flow |
| ISO 23894 | AI risk methodology → scenario discovery replaces the AI risk matrix | BehaviorSpace → | risk-amplification, data-dependency |
| EU AI Act | Regulatory couplings: compliance dependency, risk classification cascade, supply chain propagation | AI coupling → | lifecycle-seq, risk-amplification |
| NIST AI RMF | Govern–Map–Measure–Manage → maps to coupling discovery → ABM → scenario discovery → recommendations | AI coupling → | info-flow, lifecycle-seq |

ISMS × PIMS × AIMS

Three management systems, one coupling model

The complete platform now covers three management systems with shared architecture:

| System | Standard | Controls | Coupling types | Clusters |
| --- | --- | --- | --- | --- |
| ISMS | ISO 27001 | 93 | 4 | 4 sectors |
| PIMS | ISO 27701 | 47 | 9 | 4 privacy roles |
| AIMS | ISO 42001 | 38 | 6 | 4 AI deployment |

Cross-references between the three systems (21 AIMS→ISMS links, 41 PIMS→ISMS links) create bridge controls where investment simultaneously strengthens multiple management systems. These bridge controls are the highest-leverage investments for organisations that need integrated GRC.
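A sketch of how bridge controls fall out of the cross-reference tables. The control IDs below are illustrative placeholders, not the real 21-link and 41-link mappings:

```python
# Hypothetical cross-reference links: management-system control -> ISMS control.
aims_to_isms = {"A.6.8": "8.16", "A.3.4": "6.1", "A.8.2": "5.31"}
pims_to_isms = {"7.2.1": "5.31", "8.4.2": "8.16", "7.3.5": "5.34"}

# A bridge control is an ISMS control referenced from both AIMS and PIMS:
# one investment there strengthens all three management systems at once.
bridges = sorted(set(aims_to_isms.values()) & set(pims_to_isms.values()))
print(bridges)
```

Ranking these shared targets by how many couplings pass through them is what surfaces the highest-leverage investments for integrated GRC.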

Explore the AI governance tools

Start with the AI coupling discovery to see how the 38 controls interact, or explore the ISMS and PIMS tools that underpin every AI management system.

AI coupling → 27001 ISMS → 27701 PIMS →