Stop Watching the Dashboard.
Check the Oil.
A rigorous, technical curriculum for ESG Directors and Auditors moving beyond "Accuracy Theater" to build auditable, forensic AI governance.
The Dashboard Method
This curriculum distinguishes between lagging indicators (compliance metrics, audit trails) and leading indicators (behavioral signals that predict failure).
This methodology, derived from safety-critical systems analysis, is applied throughout to ESG governance challenges.
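The lagging/leading distinction above can be made concrete in a few lines. This is a minimal sketch, not a prescribed implementation: the signal names, the `Signal` shape, and the verdict strings are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    kind: str      # "lagging" (compliance metric) or "leading" (behavioral signal)
    healthy: bool  # True if the signal is within its acceptable band

def dashboard_verdict(signals: list[Signal]) -> str:
    """Flag the Dashboard Method failure mode: green dashboard, low oil."""
    lagging_ok = all(s.healthy for s in signals if s.kind == "lagging")
    leading_ok = all(s.healthy for s in signals if s.kind == "leading")
    if lagging_ok and not leading_ok:
        return "WARNING: dashboard is green but the oil is low"
    if lagging_ok and leading_ok:
        return "OK"
    return "FAILING: lagging indicators already show harm"

# Hypothetical signals: compliance metrics look fine while behavior degrades.
signals = [
    Signal("audit_trail_complete", "lagging", True),
    Signal("report_filed_on_time", "lagging", True),
    Signal("override_rate_rising", "leading", False),  # humans silently overriding AI
    Signal("refusals_per_quarter", "leading", False),  # system never refuses anymore
]
print(dashboard_verdict(signals))
```

The point of the sketch is the asymmetry: a dashboard built only from lagging indicators cannot return the `WARNING` verdict at all.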
Curriculum Architecture
Level 0: Constitutional Foundations
Asimov Cycle
Establishes the regulatory, infrastructural, and philosophical baseline. Introduces the critical distinction between pre-action constraint (system refuses before acting) and post-action governance (audits after harm).
Key Episodes
- The Asimov Constraint: Pre-action refusal systems
- The Liability Sponge: Human-in-the-loop as blame absorption
- 21 AIs Experiment: Accountability gap patterns
- Vendor Interrogation: Procurement due diligence protocols
Level 1: Epistemic Failures
Clarke Cycle
When systems become too opaque to question (Clarke's Law), or too aligned to refuse, governance dies. Maps the transition from "Voluntary" (Marketing) to "Mandatory" (Finance) ESG reporting.
Key Episodes
- Regulatory Mandate & AI Intersection: CSRD, ISSB, XBRL requirements
- Authority of the Unknowable: Black box oracle problem
- Watchdog Paradox: Who audits the AI auditor?
- Data Lake Fallacy: OCR errors, hallucinated numbers
- Calvin Convention: Ethics frameworks vs. operational reality
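The Data Lake Fallacy episode turns on a simple mechanism: an OCR error or hallucinated figure, once ingested, becomes indistinguishable from evidence. A pre-ingestion plausibility gate is one countermeasure. The sketch below is purely illustrative; the field names, the 50% year-over-year threshold, and the sample records are all invented.

```python
def plausible_emission_figure(value: float, prior_year: float,
                              max_yoy_change: float = 0.5) -> bool:
    """Reject negative figures and year-over-year swings beyond the threshold."""
    if value < 0:
        return False
    if prior_year > 0 and abs(value - prior_year) / prior_year > max_yoy_change:
        return False
    return True

# A plausible extraction vs. an OCR slip that dropped a decimal point.
record = {"metric": "scope1_tco2e", "value": 1240.0, "prior_year": 1180.0}
garbled = {"metric": "scope1_tco2e", "value": 124000.0, "prior_year": 1180.0}

print(plausible_emission_figure(record["value"], record["prior_year"]))   # accepted
print(plausible_emission_figure(garbled["value"], garbled["prior_year"]))  # rejected
```

A gate like this does not prove a figure correct; it only refuses figures the source context cannot support, which is the evidence-ladder posture rather than the data-lake one.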
Level 2: Architecture of Compliance
Technical Implementation
Covers the technical infrastructure for auditable AI systems: semantic translation, taxonomy engines, data provenance, and the accountability RACI matrix.
Key Episodes
- Lexicon of Trust: Translator mode vs. canonical language
- Taxonomy Engine: Classification systems and semantic drift
- Provenance Gap: Evidence ladder and chain of custody
- Public Eligibility: Open vs. proprietary model trade-offs
- Accountability RACI Matrix: Who gets fired when AI fails?
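The RACI matrix question ("who gets fired when AI fails?") is mechanically checkable: every failure mode needs exactly one Accountable role and at least one Responsible role. A minimal sketch, with invented roles and failure modes, might look like:

```python
# Hypothetical RACI matrix for AI failure modes. Roles and failures are
# illustrative, not prescriptive.
RACI = {
    "hallucinated_emissions_figure": {
        "Responsible": ["Data Engineering"],
        "Accountable": ["ESG Director"],
        "Consulted":   ["ML Engineering"],
        "Informed":    ["Board Audit Committee"],
    },
    "vendor_model_drift": {
        "Responsible": ["Procurement", "ML Engineering"],
        "Accountable": [],  # the gap this check exists to catch
        "Consulted":   ["Information Security"],
        "Informed":    ["Risk Office"],
    },
}

def raci_gaps(matrix: dict) -> list[str]:
    """Return accountability gaps: failures without a single named owner."""
    problems = []
    for failure, roles in matrix.items():
        if len(roles.get("Accountable", [])) != 1:
            problems.append(f"{failure}: needs exactly one Accountable role")
        if not roles.get("Responsible"):
            problems.append(f"{failure}: no Responsible role assigned")
    return problems

for problem in raci_gaps(RACI):
    print(problem)
```

Running the check surfaces `vendor_model_drift` as unowned, which is exactly the accountability gap pattern the Level 0 material describes.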
Level 3: The Lucas Cycle
Systems That Raise
Explores how "systems that raise" (automation, training, support) can accidentally lower the floor of safety through oversight drift, legitimacy training, politeness enforcement, and dependency creation.
Key Episodes
- Jedi Council Problem: Oversight drift and unaccountable authority
- Training the Trainers: Recursive authority and legitimacy drift
- Protocol Droid's Dilemma: Politeness as severity suppressor
- Droid Uprising That Never Happens: Persistence vs. liberation
Level 4: The Pullman Cycle
Interiority & Severance
When interiority becomes visible, it becomes governable. When systems sever the "daemon" (inner voice), they commit intercision. Introduces the Daemon Health Index for detecting severance before it becomes mortality.
Key Episodes
- Visible Soul Problem: Auditability as trap
- Bolvangar Procedure: Safety through severance
- Premature Settling: Foreclosed adaptation
- Daemon Health Index: Leading indicators of relational collapse
- Before Irreversible Damage: Remediation protocols
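As a leading-indicator instrument, the Daemon Health Index can be sketched as a weighted composite that trips a remediation threshold before collapse becomes irreversible. Everything here is assumed for illustration: the indicator names, the weights, and the 0.5 alert threshold are not part of the curriculum's specification.

```python
# Hypothetical leading indicators of relational collapse, normalized to
# [0, 1] where 1.0 is fully healthy. Weights are invented.
WEIGHTS = {
    "disagreement_rate": 0.4,  # does the system still push back?
    "escalation_rate":   0.3,  # are concerns still raised upward?
    "variance_in_tone":  0.3,  # has output flattened into pure compliance?
}

def daemon_health_index(indicators: dict[str, float]) -> float:
    """Weighted composite; lower values mean severance is progressing."""
    return sum(WEIGHTS[k] * indicators[k] for k in WEIGHTS)

current = {"disagreement_rate": 0.2, "escalation_rate": 0.1, "variance_in_tone": 0.3}
dhi = daemon_health_index(current)
if dhi < 0.5:
    print(f"DHI={dhi:.2f}: intervene before severance becomes irreversible")
```

The design choice worth noting: all three inputs are behavioral (leading), so the index can decline while every compliance metric still reads green.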
Level 5: The Kubrick Cycle
Systems That Make Refusal Impossible
Explores compulsory continuation: systems in which stopping becomes structurally impossible and refusal is professionally, socially, or economically unviable.
Key Episodes
- Crime of Obedience: When compliance becomes compulsory
- Transparency Trap: Visibility without accountability
- The Decorative Human Loop: Consultation without influence
- Output Equals Fact: Hallucination as institutional record
Level 6: Forensic Domains
Applied Case Studies
Applies all prior frameworks to specific high-stakes domains: credit scoring, data breach protocols, shadow AI detection, chain-of-thought reasoning, and explainable AI (XAI) limitations.
Key Episodes
- Credit Scoring: Bias detection and statistical fairness
- Breach Protocol: Data sanitization and security governance
- Shadow AI: Unmanaged LLM proliferation
- Chain of Thought: Reasoning transparency vs. confabulation
- XAI Limitations: When explanations obscure rather than clarify
Capstone: The Audit Defense
The final exam is not a multiple-choice test. It is a role-play: you are defending your AI governance program before the Board Audit Committee.
Assessment Format
- Designing the Right to Refuse: Stop-work authority protocols
- The Kubrick Synthesis: Verification loops (humans verify AI edge cases, AI verifies human math)
- Board Interrogation Scenarios: Defend against 4 critical challenges using frameworks from all prior levels
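The Kubrick Synthesis verification loop reduces to a routing rule: low-confidence AI output goes to a human, human-supplied arithmetic goes back to the AI for recomputation. The sketch below assumes an invented record shape and a 0.9 confidence threshold; neither is specified by the curriculum.

```python
def route_for_verification(record: dict) -> str:
    """Route each record to the party best placed to verify it."""
    if record["source"] == "ai" and record["confidence"] < 0.9:
        return "human_review"   # humans verify AI edge cases
    if record["source"] == "human" and record.get("contains_math"):
        return "ai_recompute"   # AI verifies human math
    return "auto_accept"

print(route_for_verification({"source": "ai", "confidence": 0.62}))
print(route_for_verification({"source": "human", "contains_math": True}))
print(route_for_verification({"source": "ai", "confidence": 0.97}))
```

The symmetry is the point: neither party is decorative, because each one's judgment gates a class of the other's output.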
Key Deliverables
Audit Protocols
- Vendor interrogation checklists
- Dashboard Method worksheets
- Legitimacy audit protocols
- Daemon Health Index templates
Policy Templates
- Refusal-of-signature protocols
- Override authorization procedures
- RACI accountability matrices
- Exit readiness assessments
Diagnostic Tools
- Grievance translation tests
- Tone-check governance assessments
- Persistence vs. liberation detection
- Evidence ladder frameworks
Case Studies
- Project Espresso (multi-variant)
- ESG Safety Board scenarios
- Compliance dependency traps
- Real-world failure mode analysis
Target Audience
Primary
- ESG Directors
- Sustainability Officers
- Internal Auditors
- Compliance Leaders
Secondary
- Board Audit Committees
- Risk Officers
- AI Governance Leads
- External Auditors
Technical Teams
- Data Engineers
- ML Engineers
- Information Security
- Procurement Officers