What You're Defending
You have designed an AI-ESG governance system. It is now being audited by a hostile stakeholder—a regulator, a plaintiff's attorney, or a rival board faction—who believes the system is a "Liability Sponge" in disguise. Your job is to prove it isn't.
This Capstone is a simulation of an audit defense meeting. You must present five integrated artifacts that prove your system has:
1. Transparency – The auditor can see exactly how decisions are made
2. Accountability – Every actor knows their role and risk
3. Resilience – The system catches its own failures
4. Authority – Humans can actually say "no"
Why This Capstone?
Most "audits" are theater. The auditor asks questions; the company gives pre-written answers. This Capstone inverts the power dynamic: you audit your own system first, building one that others cannot trick. When external auditors arrive (and they will), you will already know how to answer their toughest questions—because you've asked them yourself.
Pre-Assessment Checklist
- Completed L1-M0 (Liability Sponge)
- Completed L2-M3 (Evidence Ladder)
- Completed L3-M5 (Bias Forensics)
- Completed L3-M8 (Operational Controls)
- Have a real or realistic system in mind
Assessment Format
Duration: 3-4 hours (self-paced)
Deliverables: 5 written artifacts
Format: Presentation-ready
Grading: Pass/Fail on rubric
Certificate: Certificate of Completion upon passing
Episodes 7.1 & 7.2: Context
Episode 7.1: The Audit Defense Brief
You are called into a board room. An external auditor has flagged your AI-ESG system as a potential "Liability Sponge"—a machine-speed loop with a human rubber stamp. The auditor doesn't believe humans can actually say no. Your job is to prove they can.
- The auditor's skepticism is rational, not hostile
- "Trust" is not a defense; "Evidence" is
- Stop-the-Line authority must be exercisable, not just documented
- Bias harms vulnerable suppliers; you must prove you catch it
Episode 7.2: The Failure-Mode Deposition
Under questioning, you must pre-register all the ways your system could break: hallucination, bias drift, data tampering, model poisoning. For each failure, you must show: how you detect it, how you stop it, and what evidence proves you've contained it.
- "We haven't seen that failure" is not an acceptable answer
- Failure modes must be pre-registered to avoid bias
- Detection precedes remediation
- Evidence is the currency of credibility
The Five Deliverables
Each deliverable is a separate artifact. Together, they form the Sociable Assurance Blueprint.
Transparency Audit & Fairness Forensics
Apply forensic methods to detect bias and transparency gaps
Can you identify where your system will harm vulnerable suppliers? The auditor will show you a "Black Box" vendor report (e.g., a credit score model or ESG evaluation) and ask: where is the bias? What data is missing? What populations are hurt?
Your Task (suggested length: 2-3 pages)
Include: Executive Summary + Data Analysis + Bias Narrative + Remediation Plan
Visual: At least one chart showing disparate impact across regions or demographics
- Uses a named statistical method (e.g., disparate impact ratio, chi-square test)
- Identifies gaps specific to your system (not generic)
- Proposes remediation with explicit cost/benefit trade-off
- Defines the appeal process (e.g., "Supplier can request manual review if rejected due to missing field X")
Your ESG vendor's model has 98% accuracy overall. But when you segment by "Supplier has published sustainability report" vs. "No report," you find:
- With report: 99% approval rate
- Without report: 45% approval rate
This is missing-data bias. Small suppliers in developing regions are systematically excluded. Your remediation: require manual appeal + synthetic data imputation.
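The segmentation above can be screened with a disparate impact ratio. A minimal sketch, using illustrative counts that match the example rates (group names and volumes are assumptions, not real data):

```python
# Hypothetical approval counts matching the example rates above.
approvals = {
    "with_report":    {"approved": 990, "total": 1000},  # 99% approval
    "without_report": {"approved": 450, "total": 1000},  # 45% approval
}

def approval_rate(group):
    return group["approved"] / group["total"]

# Disparate impact ratio: disadvantaged group's rate / advantaged group's rate.
ratio = (approval_rate(approvals["without_report"])
         / approval_rate(approvals["with_report"]))

print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths rule" screening threshold
    print("FLAG: potential missing-data bias; route to manual appeal")
```

A ratio below 0.8 (the "four-fifths rule" from US employment-selection guidance) is a common screening threshold, not proof of bias on its own; pair it with the narrative and remediation plan the rubric requires.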
Accountable Workflow Design
Kill the Liability Sponge. Prove humans can say no.
The auditor will ask: "Walk me through a transaction. At what point can your staff reject the AI's recommendation?" If you can't point to a specific, checkable moment—you have a Liability Sponge.
Your Task (suggested length: 1 diagram + 2-3 pages of narrative)
Diagram: Swimlane flowchart (AI, Reviewer, Manager, Compliance) with decision gates
Include: Time budgets, escalation rules, fallback protocols
- Shows a human who can actually veto the AI (with named authority)
- Defines "Stop-the-Line" triggers explicitly (not vaguely)
- Proves humans have time (e.g., "1,000 transactions/day ÷ 8 reviewers = 125 decisions per reviewer; 480 minutes ÷ 125 ≈ 3.8 minutes per decision" — and states whether that budget is enough for a genuine veto)
- Shows what happens if a human overrides the AI (audit trail, escalation, etc.)
Scenario: AI recommends approval of a high-value supplier. The reviewer notices the supplier's ESG score is missing Section 5 (Labor Practices). Reviewer's authority: pause the transaction, request the missing data, or reject entirely if the supplier won't provide it.
Evidence: The reviewer initials the document. If there's a dispute, you have a timestamped record of the decision.
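The time-budget criterion above reduces to simple arithmetic. A sketch using the Capstone's illustrative volumes (swap in your own figures):

```python
# Illustrative volumes from the criteria above; replace with your own.
transactions_per_day = 1000
reviewers = 8
shift_minutes = 8 * 60  # an 8-hour review shift

decisions_per_reviewer = transactions_per_day / reviewers      # 125.0
minutes_per_decision = shift_minutes / decisions_per_reviewer  # 3.84

print(f"Time budget: {minutes_per_decision:.1f} minutes per decision")
# If this falls below the time a genuine review takes, the human
# checkpoint is a rubber stamp: a Liability Sponge.
```

Showing the auditor this calculation, and your minimum meaningful-review time next to it, is stronger evidence than asserting "reviewers have enough time."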
Sociable Assurance Blueprint (RACI)
Define decision rights. Eliminate the Liability Sponge role.
The auditor will ask: "Who is Responsible? Who is Accountable? Who is Consulted? Who is Informed?" This is the RACI matrix. The auditor is looking for a "Liability Sponge" role—someone who is Responsible but not Accountable (i.e., they get blamed but don't get to decide).
Your Task (suggested length: RACI table + 1-2 pages of narrative)
Rows: Roles (Procurer, Reviewer, Compliance Lead, CTO, CFO)
Columns: Decision types (Data Quality Check, Vendor Approval, Bias Detection, Exception Override)
- Every critical decision has exactly one "A" (not multiple, not zero)
- The "A" has real authority (can say no, can override the AI)
- Defines what happens when "R" and "A" disagree (escalation, voting, etc.)
- No role is "R" without also being "A" or "C" (Consulted)
Decision: "Override AI recommendation to approve supplier"
• Responsible (R): Procurement Manager (executes the override)
• Accountable (A): Compliance Lead (signs off; liable if wrong)
• Consulted (C): CTO (provides technical risk assessment)
• Informed (I): CFO (gets post-decision summary)
Dispute Resolution: If Procurer and Compliance disagree, CFO decides.
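The "exactly one A" and "at least one R" rules above are machine-checkable. A minimal sketch, with role and decision names mirroring the worked example (adapt to your own matrix):

```python
# RACI matrix as data, mirroring the worked example above.
raci = {
    "Override AI recommendation to approve supplier": {
        "Procurement Manager": "R",
        "Compliance Lead": "A",
        "CTO": "C",
        "CFO": "I",
    },
}

def validate(matrix):
    """Return a list of rubric violations: each decision needs
    exactly one 'A' and at least one 'R'."""
    problems = []
    for decision, roles in matrix.items():
        letters = list(roles.values())
        if letters.count("A") != 1:
            problems.append(f"{decision}: needs exactly one 'A'")
        if "R" not in letters:
            problems.append(f"{decision}: no 'R' assigned")
    return problems

assert validate(raci) == []  # the example matrix passes both checks
```

Running this over every decision type before the audit turns the RACI criterion from a promise into evidence.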
Failure-Mode Register
Pre-register failures. Prove you catch them.
The auditor will ask: "What can go wrong?" And then: "How do you know when it's happening?" If you can't answer both questions, you have a blind spot. A failure-mode register forces you to pre-identify risks before they hurt someone.
Your Task (suggested length: risk register table + 1 page of narrative)
Columns: Failure Mode | Likelihood | Impact | Detection Method | Containment Action | Evidence of Resolution
Format: Spreadsheet or table (clear, auditable)
- Includes at least 5 distinct failure modes (not duplicates)
- Each failure mode has a named detection method (not "We will monitor")
- Each failure mode has a containment action (not "We will investigate")
- Evidence is defined in advance (e.g., "A/B test showing model retraining fixed bias" or "Null counts on dashboard show zero hallucinations in last 7 days")
Failure Mode: Model Hallucination (AI generates ESG scores from imaginary sources)
Likelihood: Medium | Impact: High (false approval of non-compliant suppliers)
Detection: Automated quote verification: For every score > threshold, extract the source citation. If citation does not appear in the input document, flag as "Unverified" and route to manual review.
Containment: Pause vendor approval. Route to AI Engineering team. Retrain model on cleaner dataset or switch to deterministic scoring.
Evidence of Resolution: Dashboard shows "Hallucination Rate" = 0% for 30 days post-retrain. Spot-check 10 random approvals; verify all sources are correctly cited.
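The automated quote-verification step described above can be sketched as a verbatim-substring check. Function and field names here are hypothetical, not a real vendor API:

```python
def verify_citations(score_record, source_text):
    """Flag citations that do not appear verbatim in the input
    document (record fields are illustrative)."""
    return [c for c in score_record.get("citations", [])
            if c not in source_text]

record = {"supplier": "Acme Textiles", "esg_score": 87,
          "citations": ["Section 5: Labor Practices audit, 2024"]}
document = ("...Section 5: Labor Practices audit, 2024 found no "
            "violations at either facility...")

flags = verify_citations(record, document)
status = "Verified" if not flags else "Unverified: route to manual review"
print(status)
```

A real pipeline would add fuzzy matching for paraphrased citations, but even an exact-match check gives you a named, auditable detection method rather than "we will monitor."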
Reconciliation & Rehabilitation Strategy
The "Killer Question" Defense
Two questions drive this deliverable. The "CFO Killer Question": "How do we rehabilitate a non-compliant supplier without losing their data history?" And the "Financial Check": "Does the carbon ledger match the checkbook?"
Your Task
- Delta Report shows <0.05% variance between Invoice Totals and Carbon Activity Data
- Restoration Plan includes specific capacity-building steps (not just punishment)
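The <0.05% variance criterion is a one-line reconciliation check. A sketch with illustrative totals (real figures would come from your ERP and carbon ledger):

```python
# Illustrative totals; the threshold is the 0.05% criterion above.
invoice_total = 1_250_000.00          # invoiced spend for the period
carbon_activity_total = 1_249_600.00  # spend implied by carbon activity data

variance = abs(invoice_total - carbon_activity_total) / invoice_total
THRESHOLD = 0.0005  # 0.05%

print(f"Variance: {variance:.4%}")
passes = variance < THRESHOLD
print("PASS" if passes else "FAIL: reconcile before the audit")
```

Publishing the variance figure itself, not just "PASS," is what lets the auditor verify the Financial Check independently.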
Grading Rubric
All deliverables are graded on a Pass/Fail basis. You must pass all five to earn the certificate.
Deliverable 1: Fairness Forensics
Deliverable 2: Accountable Workflow
Deliverable 3: RACI Matrix
Deliverable 4: Failure-Mode Register
Deliverable 5: Reconciliation & Restoration
Overall Passing Criteria
You earn a Certificate of Completion if you achieve "Meets Criteria" or higher on all five deliverables.
Resubmission Policy
If you do not meet criteria on one deliverable, you may revise and resubmit once.
Timeline
Expected turnaround for feedback: 5-7 business days; feedback on resubmissions within 3 business days.
Suggested Work Timeline
(3-4 hours total, self-paced)
Preparation & System Selection
Choose your AI-ESG system (real or realistic case). Review previous module outputs.
Deliverable 1: Fairness Forensics
Write the bias analysis. Include statistical evidence and remediation plan.
Deliverable 2: Accountable Workflow
Draw the swimlane diagram. Define Stop-the-Line triggers and review time budgets.
Deliverable 3: RACI Matrix
Build the RACI. Identify and eliminate Liability Sponges. Define dispute resolution.
Deliverable 4: Failure-Mode Register
Document 5+ failure modes with detection, containment, and evidence. Rank by risk.
Deliverable 5: Reconciliation & Restoration
Write the Delta Report and the supplier restoration plan. Verify the variance criterion.
Review & Submit
Ensure all deliverables meet rubric criteria. Compile into presentation-ready format.
Submission & Certification
How to Submit
1. Compile all 5 deliverables into a single PDF or shared document
2. Include your name, date, and system description (1 paragraph)
3. Submit via course portal or email to [contact]
4. Receive grading feedback within 5-7 business days
Certificate
Upon passing all five deliverables, you will receive a Certificate of Completion for the AI-ESG Integrated Strategist (AEIS) curriculum.
This certificate is not an accredited qualification and does not confer any professional license or statutory authority.