AI & ESG Capability Architect

Bridging the Skills Gap

Strategic Competency Track

Stop Watching the Dashboard.
Check the Oil.

The Master Architecture. A rigorous, technical curriculum for ESG Directors and Auditors. We move beyond "Accuracy Theater" to build auditable, forensic AI governance.


Level 0: The Constitutional Baseline

Before we can break the system, we must understand how it is supposed to work. This level establishes the regulatory, infrastructural, and philosophical baseline that the rest of the course will critique.

Episode 0.1

The Asimov Constraint

Pre-Action Ethics

The Premise: We didn't outgrow Asimov's Laws of Robotics—we lost our nerve. The critical distinction is between pre-action constraint (the system refuses before acting) and post-action governance (audits after harm). ESG systems today rely almost entirely on the latter.

The Constitutional Requirement

Pre-Action Refusal: The system must be able to say "I cannot proceed" before generating the report, not explain afterward why it proceeded.

  • Hard constraints encoded in architecture, not policy documents
  • Refusal as default, continuation as exception
  • Speed must not outpace governance

Newsletter Ref: Episode 1: We Didn't Outgrow Asimov (Pre-action vs. post-action constraint).

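The refusal-as-default pattern above can be sketched in a few lines. This is a minimal, hypothetical illustration (names such as `HardConstraint` and `generate_report` are invented for this sketch, not taken from any vendor API): every hard constraint is checked before the system acts, and refusal is the path of least resistance.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class HardConstraint:
    """A pre-action check encoded in the architecture, not a policy document."""
    name: str
    check: Callable[[dict], bool]  # returns True when the constraint is satisfied

# Hypothetical constraints for an ESG reporting pipeline.
CONSTRAINTS = [
    HardConstraint("verified_source_data",
                   lambda ctx: ctx.get("data_verified", False)),
    HardConstraint("stop_work_clear",
                   lambda ctx: not ctx.get("stop_work_active", True)),
]

def generate_report(ctx: dict) -> str:
    # Refusal as default: any unsatisfied constraint halts the action
    # *before* the report exists, not in a post-hoc explanation.
    for c in CONSTRAINTS:
        if not c.check(ctx):
            return f"REFUSED: constraint '{c.name}' not satisfied"
    return "REPORT GENERATED"
```

Note the default values: an empty context refuses on every constraint, so continuation is the exception that must be earned.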
Episode 0.2

The Liability Sponge

Human in the Loop

The Premise: "Human in the loop" is not a safety mechanism—it is a liability absorption device. When AI acts at silicon speed and humans review at biological speed, the human becomes a crumple zone, absorbing blame for machine errors they lacked the authority to prevent.

The Speed Mismatch
  • Industrial Safety: Circuit breakers trip in milliseconds to save wires that melt in seconds. Intervention outpaces harm.
  • AI Governance: Systems process on the order of 1,000 claims/hour, i.e., a new claim every 3.6 seconds; a reviewer handed 1,247 flags in a four-hour window gets roughly 11.5 seconds each (illustrative math). The arithmetic never closes.
  • The Sponge Effect: When the system fails, the audit trail shows a human "reviewed" it. Blame flows downward.
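The "impossible math" is easy to verify. A quick back-of-envelope sketch using the illustrative figures from this level (1,000 claims/hour, and the 1,247-flag, four-hour window from the Project Espresso case):

```python
# Illustrative speed-mismatch arithmetic; all figures are the course's examples.
SECONDS_PER_HOUR = 3600

# Machine side: 1,000 claims processed per hour.
claims_per_hour = 1_000
seconds_between_claims = SECONDS_PER_HOUR / claims_per_hour  # 3.6 s per claim

# Human side: 1,247 flags to clear in a four-hour validation window.
flags = 1_247
window_seconds = 4 * SECONDS_PER_HOUR
seconds_per_flag = window_seconds / flags  # ~11.5 s per flag

print(f"New claim every {seconds_between_claims:.1f} s")
print(f"Review budget: {seconds_per_flag:.1f} s per flag")
```

A circuit breaker trips in milliseconds against a failure that takes seconds; here the reviewer's 11.5-second budget races a system that never pauses. Intervention cannot outpace harm.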

Stop Work Authority

The Alternative: Any human in the loop must possess constitutional authority to halt the system without permission, justification, or career penalty.

Newsletter Ref: Episode 2: The Liability Sponge (Stop Work Authority vs. High Fidelity).
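One way to read "constitutional authority" as architecture rather than policy is a halt switch the system cannot argue with. A minimal, hypothetical sketch (class and method names are invented): halting requires no permission check and no justification field, and automation has no code path to clear the halt.

```python
class StopWorkAuthority:
    """Hypothetical halt switch: any human can trip it; the system cannot reset it."""

    def __init__(self):
        self._halted_by = None

    def halt(self, person: str) -> None:
        # No permission check, no justification, no approval chain.
        self._halted_by = person

    def human_clear(self, person: str) -> None:
        # Only an explicit human action clears the halt; there is deliberately
        # no method the automated pipeline can call to resume itself.
        self._halted_by = None

    @property
    def halted(self) -> bool:
        return self._halted_by is not None


def process_claim(claim: dict, authority: StopWorkAuthority) -> str:
    # The pipeline consults the authority before every action.
    if authority.halted:
        return "HALTED: stop work in effect"
    return "PROCESSED"
```

The design choice is the asymmetry: one call to halt, and a separate human-governed call to resume. A human who can only "review" is a sponge; a human who can call `halt()` without asking is an authority.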

Episode 0.3

The 21 AIs Experiment

Accountability Gap

The Experiment: Twenty-one different AI models, given the same prompt to design a realistic ESG accountability failure, all converged on the same architecture: bureaucratic middle management. They produced "liability diodes," "moral crumple zones," and verification velocity mismatches—not because they were programmed to, but because these patterns exist in their training data.

☕ Case Study: Project Espresso (Prologue)

Setup: Daniela Reyes, a community liaison, faces 1,247 safety flags to validate in a four-hour window.

Failure Mode: The AI system (CommunitySense) has downgraded a grandmother's water contamination complaint because "el agua está enferma" ("the water is sick") doesn't match the keyword training set.

Control: Implement a semantic embedding search rather than keyword matching for non-English inputs.

Evidence Artifact: Log entry showing the cosine distance between the complaint and the "contamination" vector class.
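The evidence artifact above reduces to a cosine computation. A toy sketch with invented three-dimensional vectors standing in for real sentence embeddings (a production system would use a multilingual embedding model; every number here is illustrative only):

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity: small values mean semantically close."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Invented toy embeddings for illustration.
complaint_vec = [0.9, 0.1, 0.3]        # "el agua está enferma"
contamination_class = [0.8, 0.2, 0.4]  # centroid of the "contamination" class

d = cosine_distance(complaint_vec, contamination_class)
print(f"cosine distance to 'contamination' class: {d:.3f}")
```

A keyword matcher scores the Spanish complaint at zero; the embedding distance is small, so a semantic control would have escalated the flag. Logging that distance is what makes the failure auditable.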

Newsletter Ref: Episode 3: The Accountability Gap (21 AIs converge on middle management).

Episode 0.4

Tooling Ecosystem & The Vendor Interrogation

SaaS Procurement

A vendor-neutral dissection of the major players (Workiva, Persefoni, Envoria, Position Green). We strip away the marketing to look at their API capabilities and "Black Box" transparency.

Activity: The Vendor Interrogation Script

You are the CISO. Ask the sales rep these three questions:

  1. "Do you train your foundational model on my data? Show me the clause in the ToS that says you don't."
  2. "If I leave your platform, do I get the raw calculation logic, or just the static PDF reports?" (The Lock-In Test).
  3. "Show me the 'Confidence Interval' feature. If the AI guesses a number, does it tell me it's a guess?"
Reference: "The AI Adoption Blueprint: How to Get the AI You Actually Need" (Workiva).