ISO 42001 gives us the AI control structure. ISO 38507 provides board-level governance principles. ISO 23894 covers AI risk management. The EU AI Act creates regulatory obligations. NIST AI RMF provides the US framework. Here's how they connect through DMDU.
- **ISO 38507** principles for boards: accountability, transparency, predictability, sustainability. Maps to our tier 2 cluster selection and ABM outcome metrics: board-level questions become scenario discovery targets.
- **ISO 23894** extends ISO 31000 to AI-specific risks: emergent behaviour, data dependency, opacity. Our scenario discovery replaces the AI risk matrix, with even stronger justification than for information security.
- **EU AI Act** risk classification (unacceptable, high-risk, limited, minimal), conformity assessment, transparency obligations. Creates compliance dependency couplings and defines the failure consequence space.
- **NIST AI RMF** Govern–Map–Measure–Manage structure with profiles and tiers. Cross-maps to 42001 domains and provides the US regulatory alignment pathway.
ISO 38507 defines governance principles for boards overseeing AI: accountability, transparency, predictability, and sustainability. These are principle-level objectives, not operational controls. Our DMDU approach makes them operationally testable.
Each principle maps to specific 42001 controls and ABM outcome metrics:
| 38507 principle | 42001 controls | ABM outcome metric | Scenario discovery question |
|---|---|---|---|
| Accountability | A.3.2 (roles), A.5.5 (documentation), A.10.2 (responsibilities) | Accountability chain completeness | Under what conditions does the accountability chain break? (staff turnover × documentation lag × supplier opacity) |
| Transparency | A.8.2 (documentation), A.8.3 (communication), A.8.4 (reporting) | Transparency control effectiveness | When does transparency become window-dressing? (complexity × time pressure × audience capability) |
| Predictability | A.6.4 (testing), A.6.5 (V&V), A.6.8 (monitoring) | Prediction reliability under distribution shift | How much distribution shift before the monitoring system fails to detect degradation? |
| Sustainability | A.3.4 (risk mgmt), A.5.2 (impact assessment), A.9.3 (oversight) | Long-term system-effectiveness trajectory | Is this AI system sustainable under 3-year budget/talent/regulatory scenarios? |
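The first scenario-discovery question in the table (when does the accountability chain break?) can be sketched as a parameter sweep. This is an illustrative toy, not the platform's calibrated model: the `chain_intact` rule, its coefficients, and the breaking-point threshold are all invented for demonstration.

```python
# Toy sweep over the three accountability-chain stressors named in the
# table. The stress formula and the threshold of 1.0 are assumptions.
import itertools

def chain_intact(turnover, doc_lag_weeks, supplier_opacity):
    """Stylised rule: the chain survives unless pressures compound.

    turnover         -- annual staff turnover fraction (0..1)
    doc_lag_weeks    -- lag between role changes and documentation
    supplier_opacity -- 0 (fully transparent) .. 1 (opaque)
    """
    stress = 2.0 * turnover + 0.05 * doc_lag_weeks + supplier_opacity
    return stress < 1.0  # assumed breaking point

def discover_failures(grid):
    """Return every parameter combination where the chain breaks."""
    return [combo for combo in itertools.product(*grid)
            if not chain_intact(*combo)]

grid = ([0.05, 0.20, 0.40],   # turnover
        [1, 4, 12],           # documentation lag, weeks
        [0.1, 0.5, 0.9])      # supplier opacity
failures = discover_failures(grid)
```

In the full platform the boolean rule is replaced by ABM runs, but the shape of the question is the same: enumerate the space, flag the breaks.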
ISO 23894 extends ISO 31000 risk management to AI. It identifies AI-specific risk characteristics: emergent behaviour, data dependency, opacity, and adversarial vulnerability. Traditional implementation uses a risk register with likelihood and impact estimates. Our approach replaces every step:
| 23894 step | Traditional approach | DMDU approach | Tool |
|---|---|---|---|
| Risk identification | Brainstorm AI-specific threats | Map coupling pathways: data dependency chains, lifecycle sequencing gaps, governance tensions | AI coupling → |
| Risk analysis | Estimate likelihood × impact | Simulate AI control degradation: how does training data quality failure cascade through model performance into deployment risk into stakeholder harm? | ABM simulation → |
| Risk evaluation | Rank risks, compare to appetite | Scenario discovery (PRIM/CART): identify which parameter combinations (data quality × model complexity × oversight staffing) produce AI system failures | BehaviorSpace → |
| Risk treatment | Select 42001 controls | Design control configurations robust across the AI risk scenario space — test mitigations in the ABM before implementation | AI coupling → |
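The risk-analysis row's cascade can be illustrated with a stylised simulation. Everything here is an assumption standing in for the production ABM's richer dynamics: the one-step lag between data quality and model performance, the 0.7 service threshold, and the linear decay rate.

```python
# Minimal cascade sketch: a decline in training-data quality propagates
# through model performance into accumulating deployment risk.
def simulate_cascade(data_quality, steps=12, decay=0.05):
    """Step a stylised AI control system forward in time.

    data_quality degrades by `decay` each step; model performance
    tracks data quality with a one-step lag; deployment risk accrues
    whenever performance falls below an assumed 0.7 threshold.
    """
    performance, risk = data_quality, 0.0
    history = []
    for _ in range(steps):
        risk += max(0.0, 0.7 - performance)  # unmet threshold accrues risk
        performance = data_quality           # one-step lag
        data_quality = max(0.0, data_quality - decay)
        history.append((data_quality, performance, risk))
    return history

trace = simulate_cascade(data_quality=0.9)
final_risk = trace[-1][2]
```

Even this toy shows the point of simulation over likelihood × impact estimates: risk is zero for months, then compounds once the lagged performance crosses the threshold.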
**Traditional risk register:** lists AI-specific threats (bias, hallucination, adversarial attack), assigns probabilities, and ranks by expected harm. It misses compound failures: data quality decline + model drift + oversight gap occurring simultaneously. It cannot capture emergent bias that appears only at scale.

**DMDU replacement:** maps the AI control coupling structure, explores it computationally across the parameter space, and discovers which combinations produce system-level failures. The "data quality threshold × deployment scale × monitoring sensitivity" tipping point is computable, not guessable.
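That tipping point can be recovered with a bare-bones PRIM-style peel. A real analysis would use a mature implementation such as the one in the EMA Workbench; here the failure rule is an invented ground truth, and the 5% peeling fraction, 100-point support floor, and iteration cap are arbitrary demonstration choices.

```python
# Sample the (data quality, deployment scale, monitoring) space, label
# each point with an assumed failure rule, then greedily trim whichever
# box edge most increases failure density. The surviving box
# approximates the tipping region.
import random

random.seed(0)

def failure(dq, scale, monitor):
    # Invented ground truth: failures need low data quality AND weak
    # monitoring, regardless of deployment scale.
    return dq < 0.6 and monitor < 0.4

samples = [(random.random(), random.random(), random.random())
           for _ in range(2000)]
labels = [failure(*s) for s in samples]

def density(box, pts, lab):
    """Failure fraction and point count inside an axis-aligned box."""
    inside = [l for p, l in zip(pts, lab)
              if all(lo <= x <= hi for x, (lo, hi) in zip(p, box))]
    if not inside:
        return 0.0, 0
    return sum(inside) / len(inside), len(inside)

box = [(0.0, 1.0)] * 3
for _ in range(30):
    current, _ = density(box, samples, labels)
    best = None
    for dim in range(3):
        lo, hi = box[dim]
        step = 0.05 * (hi - lo)
        for cand in ((lo + step, hi), (lo, hi - step)):
            trial = list(box)
            trial[dim] = cand
            d, n = density(trial, samples, labels)
            if n >= 100 and (best is None or d > best[0]):
                best = (d, trial)
    if best is None or best[0] <= current:
        break  # no peel improves density: stop
    box = best[1]

base, _ = density([(0.0, 1.0)] * 3, samples, labels)
final_density, support = density(box, samples, labels)
```

The peel should shrink the box along the data-quality and monitoring axes while leaving deployment scale wide: exactly the kind of interaction structure a ranked risk register cannot express.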
The EU AI Act (Regulation 2024/1689) creates a risk-based regulatory framework with four tiers. Each tier maps to our AIMS coupling model with different control activation patterns:
- **Unacceptable** (prohibited practices): social scoring, real-time biometric surveillance, subliminal manipulation
- **High-risk** (Annex III systems): healthcare, credit, recruitment, law enforcement, critical infrastructure
- **Limited** (transparency obligations): chatbots, deepfakes, emotion recognition
- **Minimal** (most AI systems): spam filters, games, inventory optimisation
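The four tiers can be held as a simple lookup. This keyword table is a demonstration data structure only: actual EU AI Act classification turns on detailed legal criteria, not tag matching.

```python
# Toy tier lookup mirroring the four-tier list above. Not legal advice.
TIER_EXAMPLES = {
    "unacceptable": {"social scoring", "real-time biometric surveillance",
                     "subliminal manipulation"},
    "high-risk": {"healthcare", "credit", "recruitment",
                  "law enforcement", "critical infrastructure"},
    "limited": {"chatbot", "deepfake", "emotion recognition"},
}

def classify(use_case):
    """Return the risk tier for a use-case tag, defaulting to minimal."""
    for tier, examples in TIER_EXAMPLES.items():
        if use_case in examples:
            return tier
    return "minimal"
```

A structure like this is what the ABM consumes: the tier assignment, not the legal reasoning behind it, drives control activation.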
The EU AI Act creates three additional coupling dynamics that extend our model:
| Coupling | Mechanism | 42001 controls affected |
|---|---|---|
| Compliance dependency | High-risk classification triggers mandatory conformity assessment, technical documentation, and post-market monitoring — creating a cascade of control activation that doesn't exist for lower-risk systems | A.5.2–A.5.5, A.6.4–A.6.6, A.8.2–A.8.4 |
| Risk classification cascade | An AI system reclassified from limited to high-risk (due to new use case, regulatory interpretation change, or incident) triggers retroactive compliance obligations across the entire control set | All controls — reclassification is a system-wide shock event in the ABM |
| Supply chain propagation | Provider obligations propagate through the AI value chain: providers → deployers → distributors. Each party inherits compliance obligations based on their role | A.10.2–A.10.5 |
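The risk classification cascade in the table can be expressed as a set difference over tier-specific control activations. The activation sets below borrow control ranges from the table, but the mapping itself is a simplifying assumption, not the Act's actual obligation structure.

```python
# Sketch of reclassification as a shock event: moving from limited to
# high-risk activates a block of controls at once. Activation sets are
# illustrative, drawn from the control ranges in the table above.
ACTIVATION = {
    "minimal": set(),
    "limited": {"A.8.2", "A.8.3", "A.8.4"},
    "high-risk": {"A.5.2", "A.5.3", "A.5.4", "A.5.5",
                  "A.6.4", "A.6.5", "A.6.6",
                  "A.8.2", "A.8.3", "A.8.4"},
}

def reclassification_shock(old_tier, new_tier):
    """Controls that become newly mandatory when the tier changes."""
    return sorted(ACTIVATION[new_tier] - ACTIVATION[old_tier])

newly_required = reclassification_shock("limited", "high-risk")
```

In the ABM, the size of this delta set is what makes reclassification a system-wide shock rather than an incremental adjustment.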
The NIST AI Risk Management Framework structures AI governance around four core functions. Each maps to specific ISO 42001 controls and DMDU analytical tools:
| NIST function | Purpose | 42001 mapping | DMDU application |
|---|---|---|---|
| GOVERN | Culture, policies, roles, accountability structures for AI risk | A.2.2–A.2.3 (policy), A.3.2–A.3.4 (organisation), A.9.2 (responsible use) | Governance controls form the foundation tier in the coupling model; their degradation cascades through all downstream controls |
| MAP | Context, scope, risks associated with AI systems | A.4.2–A.4.6 (resources), A.5.2–A.5.5 (impact assessment), A.7.2–A.7.3 (data quality/provenance) | Mapping produces the coupling discovery inputs: system inventory, data dependencies, and impact assessment findings that parameterise the ABM |
| MEASURE | Quantify and track AI risks, performance, trustworthiness | A.6.4–A.6.5 (testing/V&V), A.6.8 (monitoring), A.9.5 (use monitoring) | Measurement controls feed the ABM's adaptive threat and drift mechanisms — measurement quality determines how quickly degradation is detected |
| MANAGE | Prioritise, respond to, and recover from AI risks | A.3.4 (risk management), A.6.6–A.6.7 (release/deployment), A.10.5 (notifications) | Management controls are the ABM's investment and audit mechanisms — their effectiveness determines whether the system recovers from shocks or spirals into failure |
The NIST framework's "profiles" concept (current profile vs target profile with gaps) maps directly to our client calibration tier: the client profile captures the current state, the ABM simulates trajectories toward the target, and scenario discovery identifies conditions that prevent the organisation from reaching its target profile.
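The profile-to-calibration mapping above can be sketched as a scored comparison. The per-function scores below are invented placeholders for a real current/target assessment.

```python
# Current vs target NIST profile, scored 0..1 per function. The gap
# vector, largest first, is what seeds the client calibration tier.
CURRENT = {"GOVERN": 0.6, "MAP": 0.4, "MEASURE": 0.3, "MANAGE": 0.5}
TARGET  = {"GOVERN": 0.8, "MAP": 0.7, "MEASURE": 0.7, "MANAGE": 0.7}

def profile_gaps(current, target):
    """Per-function gap, largest first: candidate scenario targets."""
    gaps = {f: round(target[f] - current[f], 2) for f in target}
    return sorted(gaps.items(), key=lambda kv: -kv[1])

gaps = profile_gaps(CURRENT, TARGET)
```

The largest gaps become the first scenario discovery questions: what conditions keep MEASURE from improving?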
| Standard | What it adds | Primary tool | Key coupling types |
|---|---|---|---|
| ISO 42001 | The base: 38 AI controls, 6 coupling types, 4 AI sector clusters | AI coupling → | All 6 types |
| ISO 38507 | Board governance principles → scenario discovery targets for the ABM | ABM simulation → | governance-tension, info-flow |
| ISO 23894 | AI risk methodology → scenario discovery replaces the AI risk matrix | BehaviorSpace → | risk-amplification, data-dependency |
| EU AI Act | Regulatory couplings: compliance dependency, risk classification cascade, supply chain propagation | AI coupling → | lifecycle-seq, risk-amplification |
| NIST AI RMF | Govern–Map–Measure–Manage → maps to coupling discovery → ABM → scenario discovery → recommendations | AI coupling → | info-flow, lifecycle-seq |
The complete platform now covers three management systems with shared architecture:
| System | Standard | Controls | Coupling types | Clusters |
|---|---|---|---|---|
| ISMS | ISO 27001 | 93 | 4 | 4 sectors |
| PIMS | ISO 27701 | 47 | 9 | 4 privacy roles |
| AIMS | ISO 42001 | 38 | 6 | 4 AI sectors |
Cross-references between the three systems (21 AIMS→ISMS links, 41 PIMS→ISMS links) create bridge controls where investment simultaneously strengthens multiple management systems. These bridge controls are the highest-leverage investments for organisations that need integrated GRC.
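Bridge-control identification reduces to counting cross-reference targets. The link maps below are invented three-entry examples, not the real 21 AIMS→ISMS and 41 PIMS→ISMS sets.

```python
# An ISMS control referenced from both AIMS and PIMS is a "bridge":
# one investment strengthens all three management systems.
from collections import Counter

aims_to_isms = {"A.5.1": "6.1.2", "A.6.8": "8.16", "A.10.2": "5.19"}  # hypothetical
pims_to_isms = {"B.7.2": "6.1.2", "B.8.4": "8.16", "B.6.1": "5.23"}   # hypothetical

def bridge_controls(*link_maps):
    """ISMS controls targeted by more than one management system."""
    hits = Counter(target for links in link_maps
                   for target in links.values())
    return sorted(c for c, n in hits.items() if n > 1)

bridges = bridge_controls(aims_to_isms, pims_to_isms)
```

Run over the real cross-reference sets, this yields the shortlist of highest-leverage investments for integrated GRC.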
Start with the AI coupling discovery to see how the 38 controls interact, or explore the ISMS and PIMS tools that underpin every AI management system.