# Core Interaction Engine — Logic Reference & Architecture

## Engine overview

The NetLogo model has a clean separation between **engine** (the interaction rules — universal, never changes) and **configuration** (which agents exist, how they're coupled, and what parameters apply — cluster-specific, client-adjustable).

The engine executes 10 phases per tick, in a fixed order. The order matters — each phase can alter state that subsequent phases read.

```
Per tick:
  1.  Natural decay          (agents degrade without investment)
  2.  Investment cycles      (periodic refresh restores effectiveness)
  3.  Workaround dynamics    (bypasses emerge and spread when controls weaken)
  4.  Coupling propagation   (the core: how controls affect each other)
  5.  Drift effect           (accumulated small gaps compound into failures)
  6.  Adaptive threat        (threat environment responds to system weakness)
  7.  External shock         (rare catastrophic events)
  8.  Staff turnover         (personnel changes reset human capital)
  9.  Audit cycle            (periodic detection and correction of drift)
  10. Clamp and update       (enforce bounds, compute metrics)
```

## Phase-by-phase logic

### Phase 1: Natural decay

Every control degrades every tick. The rate is:

```
effective_decay = base-decay-rate × category-decay-base × (1 + budget-pressure)
```

Where:

- `base-decay-rate` is a global slider (how fast things degrade in this organisation),
- `category-decay-base` is a per-agent property set at creation (people controls = 1.5, tech = 1.0, physical = 0.7, org = 0.5, reflecting that human awareness fades faster than policy documents become stale), and
- `budget-pressure` is a global slider (resource scarcity accelerates all degradation).

Decay also feeds the **drift accumulator** — a per-agent variable tracking accumulated procedural/configuration drift. Drift doesn't directly degrade effectiveness in this phase; it accumulates silently until Phase 5.
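Phase 1 can be sketched as follows. This is a minimal illustration, not the model's NetLogo code: the field names, the dict representation, and the fraction of decay routed into the drift accumulator (0.5 here) are assumptions — the text does not specify the split.

```python
# Sketch of Phase 1 (natural decay). Category bases follow the values in the
# text; the 0.5 decay-to-drift fraction is an assumption for illustration.
CATEGORY_DECAY_BASE = {"people": 1.5, "tech": 1.0, "physical": 0.7, "org": 0.5}

def apply_natural_decay(agent, base_decay_rate, budget_pressure):
    """Degrade one control and silently accumulate drift (consumed in Phase 5)."""
    effective_decay = (base_decay_rate
                       * CATEGORY_DECAY_BASE[agent["category"]]
                       * (1 + budget_pressure))
    agent["effectiveness"] = max(0.0, agent["effectiveness"] - effective_decay)
    # Drift does not degrade effectiveness here; it accumulates until Phase 5.
    agent["drift"] += effective_decay * 0.5
    return agent
```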

**Why this matters for DMDU:** Decay is the baseline force that all investment must overcome. Under high budget pressure, the system is running uphill. Scenario discovery will identify the budget-pressure threshold at which investment can no longer keep pace with decay.

### Phase 2: Investment cycles

Every control category receives periodic investment — training (people), patching (tech), policy review (org), inspection (physical). The cycle length is:

```
cycle = investment-cycle-length × category_multiplier
  where category_multiplier = 0.5 (tech), 1.0 (people/org), 1.5 (physical)
```

When an investment cycle fires, the effectiveness boost is:

```
boost = investment-effectiveness × (1 - workaround-level × 0.7) × (0.5 + management-attention × 0.5)
```

Three forces interact here: the investment's inherent quality (`investment-effectiveness` slider), the resistance from entrenched workarounds (deeply bypassed controls resist improvement — you can train people but if the workaround is embedded in workflow, the training won't stick), and management attention (investment without management backing is less effective).

Investment also partially resets drift: `drift = drift × (1 - investment-effectiveness)`.
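A sketch of the investment-cycle logic, combining the cycle test, the boost formula, and the drift reset. Treating the boost as additive, clamping effectiveness at 1.0, and firing on tick modulo cycle are assumptions; the formulas themselves follow the text.

```python
# Sketch of Phase 2 (investment cycles). Slider names mirror the text;
# the additive boost and the modulo firing rule are assumptions.
def apply_investment(agent, tick, investment_cycle_length, category_multiplier,
                     investment_effectiveness, management_attention):
    """Fire an investment when this agent's cycle comes due."""
    cycle = int(investment_cycle_length * category_multiplier)
    if tick % cycle != 0:
        return agent  # not this agent's investment tick
    boost = (investment_effectiveness
             * (1 - agent["workaround"] * 0.7)      # entrenched bypasses resist
             * (0.5 + management_attention * 0.5))  # management backing amplifies
    agent["effectiveness"] = min(1.0, agent["effectiveness"] + boost)
    agent["drift"] *= (1 - investment_effectiveness)  # partial drift reset
    return agent
```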

**Why this matters:** This is where the security-usability tradeoff lives. Frequent high-quality investment (short cycles, high effectiveness) keeps controls effective but costs resources. Infrequent weak investment (long cycles, low effectiveness) saves resources but allows decay and drift to compound. The optimal balance depends on the organisation's maturity and threat environment — which is exactly what DMDU explores.

### Phase 3: Workaround dynamics

This is the **social contagion** mechanism — the most distinctive feature of the model.

Workarounds emerge through two mechanisms:

**Self-adoption:** When a control's effectiveness drops below `failure-threshold`, the gap drives workaround adoption:
```
workaround_increase = (failure-threshold - effectiveness) × workaround-contagion-rate
```
The larger the gap, the faster workarounds develop. This models the real-world phenomenon: when a security control is too burdensome or broken, people find ways around it, proportional to how broken it is.

**Social contagion:** Agents adopt the workaround levels of their neighbours (controls they're coupled to):
```
if neighbour_workaround > my_workaround:
  my_workaround += contagion-rate × (neighbour_workaround - my_workaround)
```
This models the spread of "that's how we do things around here" — when one team starts bypassing a control, adjacent teams adopt the same practice through social networks.

**Workaround decay:** When effectiveness is well above the failure threshold, workarounds slowly recede. Good practices crowd out bypasses — but only slowly, and only when the control is working well.
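The three workaround mechanisms can be sketched together. The two adoption rules follow the formulas above; the decay rate (0.01 per tick) and the "well above threshold" margin (0.1) are assumptions, since the text does not quantify them.

```python
# Sketch of Phase 3 (workaround dynamics). The decay rate and margin are
# illustrative assumptions; the adoption formulas follow the text.
def update_workaround(agent, neighbour_workarounds, failure_threshold,
                      contagion_rate):
    w = agent["workaround"]
    eff = agent["effectiveness"]
    # Self-adoption: a broken control drives bypasses in proportion to the gap.
    if eff < failure_threshold:
        w += (failure_threshold - eff) * contagion_rate
    # Social contagion: pull toward higher workaround levels of coupled neighbours.
    for nw in neighbour_workarounds:
        if nw > w:
            w += contagion_rate * (nw - w)
    # Workaround decay: good practice slowly crowds out bypasses.
    if eff > failure_threshold + 0.1:
        w -= 0.01
    agent["workaround"] = min(1.0, max(0.0, w))
    return agent
```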

**Why this matters:** Workarounds are the mechanism by which controls that pass audit individually fail systemically. Each individual workaround is rational (the control is too hard to use properly), but aggregate workarounds undermine the ISMS. This is a classic emergence phenomenon — the ABM captures it, a checklist audit cannot.

### Phase 4: Coupling propagation

This is the **heart of the engine**. Four coupling types, each with distinct interaction rules.

All coupling effects are proportional to `normalised-weight × coupling-strength`:
```
impact_base = normalised_weight × coupling-strength
```
Where `normalised_weight` = coupling weight from cluster analysis / max coupling weight in the cluster, and `coupling-strength` is a global slider. This means the cluster analysis directly parameterises the interaction strength — high-weight couplings produce stronger effects.
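The normalisation step is small but worth making concrete — a sketch, with illustrative names:

```python
# Sketch of the coupling-weight normalisation that feeds every Phase 4 rule.
# Weights come from the cluster analysis; coupling_strength is the global slider.
def impact_base(weight, all_cluster_weights, coupling_strength):
    normalised = weight / max(all_cluster_weights)
    return normalised * coupling_strength
```

The heaviest coupling in the cluster always receives the full slider value; everything else scales down proportionally.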

#### Failure propagation (the cascade mechanism)

```
When source effectiveness < failure-threshold:
  gap = failure-threshold - source_effectiveness
  impact = gap × impact_base
  target.effectiveness -= impact
  if impact > cascade-detection-threshold: count as cascade event
```

This is the mechanism that produces cascading failure. A failed control pushes its failure-coupled neighbours toward failure. Those neighbours, now degraded, push *their* neighbours. The cascade propagates through the cluster's failure-propagation subgraph.

The cascade is **gap-proportional** — a control barely below threshold produces small cascades; a deeply failed control produces large ones. This creates the nonlinear dynamics that scenario discovery exploits.

#### Information flow (the drift accelerator)

```
When source effectiveness < failure-threshold:
  gap = failure-threshold - source_effectiveness
  target.drift_accumulator += gap × impact_base × 0.5
```

Information flow doesn't directly degrade effectiveness — it accelerates *drift*. When a source control fails (threat intelligence stops being current, policies become stale), the downstream controls don't immediately fail, but they start drifting faster. This models the real-world lag: stale threat intel doesn't break your monitoring overnight, but over weeks your monitoring rules become less relevant, your false negative rate creeps up, and eventually you miss something.

#### Resource competition (the scarcity amplifier)

```
When BOTH source and target effectiveness < failure-threshold:
  competition_loss = impact_base × budget-pressure
  target.effectiveness -= competition_loss
```

Resource competition only activates when both controls are already struggling. Two healthy controls sharing a budget don't compete meaningfully. Two failing controls competing for emergency remediation budget erode each other — the "triage death spiral" where limited resources get spread too thin to fix anything properly.

#### Temporal dependency (the blocking constraint)

```
When source effectiveness < failure-threshold × 0.8:
  block_factor = impact_base × (failure-threshold × 0.8 - source_effectiveness)
  target.effectiveness -= block_factor
```

Temporal dependencies only activate when the source is deeply degraded — below 80% of the failure threshold. This models prerequisites: you can't train people on policies that don't exist, you can't patch systems that aren't in your inventory, you can't do incident response without a plan. When the prerequisite is merely weak, the dependent control can still function (imperfectly). When the prerequisite is deeply broken, the dependent control is actively blocked.
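The four coupling rules can be sketched as one dispatch over link types. The activation conditions and formulas follow the four blocks above; the function shape, field names, and flooring at zero are illustrative assumptions.

```python
# Sketch of Phase 4 (coupling propagation), dispatching on the four link types.
# Returns True only when a failure-propagation impact counts as a cascade event.
def propagate(link_type, source, target, impact_base, failure_threshold,
              budget_pressure, cascade_detection_threshold):
    s, t = source["effectiveness"], target["effectiveness"]
    if link_type == "failure" and s < failure_threshold:
        impact = (failure_threshold - s) * impact_base  # gap-proportional cascade
        target["effectiveness"] = max(0.0, t - impact)
        return impact > cascade_detection_threshold
    if link_type == "information" and s < failure_threshold:
        # Stale inputs accelerate drift rather than degrading effectiveness directly.
        target["drift"] += (failure_threshold - s) * impact_base * 0.5
    if link_type == "resource" and s < failure_threshold and t < failure_threshold:
        # Only two struggling controls compete; scarcity amplifies the erosion.
        target["effectiveness"] = max(0.0, t - impact_base * budget_pressure)
    if link_type == "temporal" and s < failure_threshold * 0.8:
        # A deeply broken prerequisite actively blocks the dependent control.
        block = impact_base * (failure_threshold * 0.8 - s)
        target["effectiveness"] = max(0.0, t - block)
    return False
```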

### Phase 5: Drift effect (normalisation of deviance)

```
When drift_accumulator > 0.1:
  effectiveness -= drift_accumulator × base-decay-rate
```

Accumulated drift from Phases 1 and 4 now erodes effectiveness. This is the **normalisation of deviance** mechanism — small, individually insignificant deviations compound over time. The 0.1 threshold provides a small buffer (trivial drift is tolerable), but once drift exceeds it, the erosion accelerates quadratically (drift increases decay which increases drift).
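As a sketch (field names illustrative), the drift effect is a single guarded subtraction:

```python
# Sketch of Phase 5 (drift effect): accumulated drift only bites past the
# 0.1 buffer; below it, trivial drift is tolerated.
def apply_drift_effect(agent, base_decay_rate):
    if agent["drift"] > 0.1:
        agent["effectiveness"] = max(
            0.0, agent["effectiveness"] - agent["drift"] * base_decay_rate)
    return agent
```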

**Why this matters:** This is the mechanism that causes organisations to "feel secure" right up until a major incident. Drift is invisible in audits (which measure point-in-time state, not trajectory). Only the ABM captures the accumulation.

### Phase 6: Adaptive threat pressure

```
When system-effectiveness < failure-threshold:
  threat-level += (failure-threshold - system-effectiveness) × threat-adaptation-rate

When system-effectiveness > failure-threshold + 0.2:
  threat-level -= threat-adaptation-rate × 0.5 (but never below initial-threat-level)

Threat level erodes org and tech controls:
  effectiveness -= threat-level × base-decay-rate × threat-novelty
```

The threat environment responds to the system's weakness. This models adversarial adaptation: when an organisation's defences weaken, attackers (and regulatory scrutiny) intensify. When defences are strong, threat actors look for easier targets. The `threat-novelty` slider controls how much the current threats diverge from the threats the ISMS was designed for.
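The adaptation rules above can be sketched over a small global-state dict. The state keys and the returned per-tick erosion amount (to be applied to org and tech controls) are illustrative; the update rules, the `initial-threat-level` floor, and the 1.0 ceiling follow the text.

```python
# Sketch of Phase 6 (adaptive threat pressure) on global state.
def update_threat(state, failure_threshold, threat_adaptation_rate,
                  base_decay_rate, threat_novelty):
    eff = state["system_effectiveness"]
    if eff < failure_threshold:
        # Weak defences attract pressure in proportion to the gap.
        state["threat_level"] += (failure_threshold - eff) * threat_adaptation_rate
    elif eff > failure_threshold + 0.2:
        # Strong defences shed pressure, but never below the initial level.
        state["threat_level"] = max(state["initial_threat_level"],
                                    state["threat_level"]
                                    - threat_adaptation_rate * 0.5)
    state["threat_level"] = min(1.0, state["threat_level"])
    # Per-tick erosion to apply to org and tech controls.
    return state["threat_level"] * base_decay_rate * threat_novelty
```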

### Phases 7-9: External shock, staff turnover, audit cycle

These are event-driven perturbations rather than continuous dynamics:

**Shock** (Phase 7): Stochastic event. Each agent's effectiveness is multiplied by `(1 - shock-severity) + random × shock-severity`. Triggers recovery tracking.

**Turnover** (Phase 8): Per-tick probability of staff replacement. New hires start at `org-maturity × 0.6` effectiveness with zero workarounds. Org controls suffer indirect knowledge loss.

**Audit** (Phase 9): Fires at regular intervals. Detects drift proportional to `drift-accumulator`, corrects proportional to `management-attention × investment-effectiveness`. Reduces workarounds through visibility effect.
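The shock multiplier is worth making concrete; a sketch with an injectable random source for determinism (the turnover and audit rules are omitted here):

```python
# Sketch of the Phase 7 shock multiplier. With random in [0, 1), each agent
# retains between (1 - shock_severity) and ~1.0 of its effectiveness.
import random

def apply_shock(effectiveness, shock_severity, rng=random.random):
    return effectiveness * ((1 - shock_severity) + rng() * shock_severity)
```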

## Feedback loops in the engine

The engine contains five feedback loops that produce emergent behaviour:

1. **Decay → workaround → resistance to investment → more decay** (reinforcing). As controls degrade, workarounds develop. Workarounds resist investment (the `1 - workaround × 0.7` term). Less effective investment means faster decay. This loop can self-stabilise (when decay is slow and investment is frequent) or spiral (when decay overwhelms investment capacity).

2. **Failure → cascade → more failure** (reinforcing). Failure propagation through coupling links. A failed control pushes neighbours toward failure, which push their neighbours. This is the catastrophic cascade loop. It self-limits when controls reach zero (floor effect) or when the cascade exhausts the failure-propagation subgraph.

3. **Weakness → threat adaptation → more weakness** (reinforcing). Low system effectiveness attracts threat pressure, which further degrades effectiveness. Self-limits at threat-level = 1.0 and at the initial-threat-level floor when recovery occurs.

4. **Drift → erosion → more drift** (reinforcing). Accumulated drift erodes effectiveness, which increases decay, which increases drift. This is the "slow then sudden" loop. Self-limits when audits detect and correct drift, or when the system hits the floor.

5. **Effectiveness → workaround decay → effectiveness** (balancing). When controls are effective, workarounds slowly recede. This is the recovery mechanism — a virtuous cycle that operates when the system is above the failure threshold.

The interplay of these five loops, parameterised by 18 sliders, produces the complex adaptive system dynamics. Scenario discovery (PRIM/CART on BehaviorSpace output) identifies which parameter combinations push the reinforcing loops past their tipping points.

---

## Runtime control addition — architecture

### The problem

The cluster analysis identifies which controls form tightly-coupled subsystems for modelling. But a client running the simulation may want to add controls that were excluded — because they observe couplings the structural analysis missed, or because their specific environment creates interactions the generic model doesn't capture.

We can't offer all 93 controls as addable (the model would become unmanageably large and the UI cluttered). Instead we offer the **bridge controls** — controls that are structurally coupled to the cluster but weren't included because they fell below the clustering threshold. These are the controls that sit at the boundary between this cluster and adjacent ones.

### Implementation approach

The generator identifies all controls that have at least one coupling to or from any cluster member but are not themselves cluster members. These become **optional agents** — present in the model code but initially inactive. The client can toggle them on/off via switches in the interface.

When activated, an optional agent:
- Creates its turtle with properties from the CONTROLS database
- Creates links to/from existing cluster members based on the coupling data
- Participates in all 10 phases of the engine (no special-casing needed — the engine operates on all turtles/links uniformly)

When deactivated:
- The turtle is hidden and excluded from metrics
- Its links are deactivated
- It does not participate in coupling propagation

This is clean because the engine doesn't know or care which agents are "core" vs "optional" — it just operates on the active turtles and links. The distinction is purely an interface concern.
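Because the engine only iterates active turtles and links, the toggle reduces to bookkeeping. A sketch with an assumed dict-based data model (the real implementation uses NetLogo turtles, links, and interface switches):

```python
# Sketch of the optional-agent toggle. The engine iterates only active agents
# and links, so activation is purely a bookkeeping operation.
def set_optional_control(model, control_id, active):
    model["agents"][control_id]["active"] = active
    # A link participates in coupling propagation only when both endpoints are active.
    for link in model["links"]:
        if control_id in (link["source"], link["target"]):
            other = link["target"] if link["source"] == control_id else link["source"]
            link["active"] = active and model["agents"][other]["active"]
    return model

def active_agents(model):
    """Agents visible to the engine and to metrics."""
    return [a for a in model["agents"].values() if a["active"]]
```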
