AI Governance Artifact Templates

Operational guides, rubrics, and templates for responsible AI deployment

What Are These Templates?

These five artifacts operationalize the five foundational AI skills taught in the AI-ESG curriculum. They are working documents designed for teams building and deploying AI systems—not compliance theater.

Use them to justify deployment decisions, design prompts, architect workflows, evaluate outputs, and document ethical guardrails.

1. Strategic Brief | AI Strategy
2. Prompting Toolkit | Prompting
3. Workflow Blueprint | Workflow Integration
4. Evaluation Rubric | Critical Evaluation
5. Ethics Memo | Ethics & Trust

Strategic AI Brief

AI Strategy Skill | Deployment Decision Framework

Justifies an AI deployment to business and risk stakeholders. Defines opportunity, risk tolerance, control architecture, financial model, and governance checkpoints.

Use When: Proposing a new AI capability (e.g., "Deploy AI ticket classifier"), seeking budget approval, or needing sign-off from risk/compliance teams.
Contents:
  • Executive summary (GO / conditional GO / NO GO recommendation)
  • Business problem, AI capability, deployment context
  • Risk threshold definition (critical/major/minor/negligible consequences)
  • Control architecture (pre-action controls, circuit breakers, audit trails)
  • Financial model (cost, savings, ROI, risk-adjusted scenarios)
  • Governance checkpoints (pilot phases, quarterly reviews, annual renewal)
  • Sign-off matrix (business sponsor, risk/compliance, tech lead)
Foundational Skills: AI Strategy, Critical Evaluation, Ethics & Trust
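The brief's risk-adjusted financial scenarios can be sketched as a probability-weighted expected-value calculation. A minimal sketch follows; the scenario names, probabilities, and dollar figures are illustrative assumptions, not values from the template.

```python
# Illustrative risk-adjusted ROI sketch for the Strategic AI Brief's
# financial model. All probabilities and figures are hypothetical.

def roi(savings: float, cost: float) -> float:
    """Return on investment as a fraction of cost."""
    return (savings - cost) / cost

# Hypothetical scenarios: (probability, annual savings, annual cost)
scenarios = {
    "best_case":  (0.2, 500_000, 150_000),
    "base_case":  (0.6, 300_000, 150_000),
    "worst_case": (0.2, 100_000, 150_000),  # deployment underperforms
}

# Probability-weighted (risk-adjusted) ROI across all scenarios
risk_adjusted_roi = sum(p * roi(s, c) for p, s, c in scenarios.values())
print(f"Risk-adjusted ROI: {risk_adjusted_roi:.0%}")  # prints "Risk-adjusted ROI: 100%"
```

Weighting each scenario by its probability keeps a single optimistic forecast from dominating the GO / NO GO recommendation.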

Prompting Toolkit

Prompting Skill | Intent-First Design & Iteration

Guides prompt design from intent definition through testing, iteration, and production monitoring. The model-agnostic methodology works across Claude, GPT-4, Gemini, and future models.

Use When: Designing a new prompt, comparing models, optimizing for accuracy, migrating to a new model, or revising an underperforming prompt.
Contents:
  • Intent-first framework (task, success criteria, failure mode, context)
  • Core instruction template (model-agnostic core)
  • Model-specific adjustments (Claude, GPT-4, Gemini tweaks)
  • Test dataset design (5–10 labeled examples covering edge cases)
  • Evaluation matrix (compare outputs across models)
  • Iteration rules (when to refine, how to improve)
  • Production prompt library (version control, changelog)
  • Monitoring dashboard (weekly drift detection, quarterly refresh cycle)
Foundational Skills: Prompting, Critical Evaluation, AI Strategy
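The toolkit's evaluation matrix step can be sketched as scoring each candidate model's outputs against the small labeled test dataset. In the sketch below, the model names, example inputs, and outputs are hypothetical placeholders standing in for real API calls.

```python
# Minimal sketch of the Prompting Toolkit's evaluation matrix: score each
# candidate model against a labeled test set. All data is hypothetical.

test_set = [
    {"input": "Reset my password",        "label": "account"},
    {"input": "Charged twice this month", "label": "billing"},
    {"input": "App crashes on launch",    "label": "technical"},
]

# Outputs as produced by each candidate model (stand-ins for API calls)
model_outputs = {
    "model_a": ["account", "billing", "billing"],
    "model_b": ["account", "billing", "technical"],
}

def accuracy(outputs, cases):
    correct = sum(out == case["label"] for out, case in zip(outputs, cases))
    return correct / len(cases)

matrix = {name: accuracy(outs, test_set) for name, outs in model_outputs.items()}
for name, acc in sorted(matrix.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {acc:.0%}")
```

The same labeled set is reused on every iteration and model migration, so accuracy numbers stay comparable across versions.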

Workflow Blueprint

Workflow Integration Skill | AI-Native Process Design

Designs an AI-native workflow where AI is embedded in the core decision path (not "off to the side"), with human checkpoints, stop cards, and audit trails baked in from the start.

Use When: Architecting a new AI-assisted process, defining handoff rules between AI and humans, designing pilot & rollout phases, or documenting SLAs and escalation paths.
Contents:
  • Workflow diagram (visual flow of AI decisions, human checkpoints, escalations)
  • Inputs & outputs (what data flows in/out, source, format, frequency)
  • AI task definition (what the AI does, latency SLA, cost, what it does NOT do)
  • Decision logic & stop cards (confidence-based routing, circuit breaker conditions)
  • Handoff points (AI → human review, human → AI learning loop, escalation path)
  • Audit trail specification (JSON schema for every decision, retention policy)
  • KPI monitoring (daily dashboard, weekly spot-check, monthly review)
  • Pilot & validation phases (week 1–4 rollout strategy with go/no-go criteria)
Foundational Skills: Workflow Integration, Critical Evaluation, AI Strategy
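The blueprint's decision logic, stop cards, and audit trail bullets can be sketched together as confidence-based routing that emits a record for every decision. The thresholds and JSON field names below are illustrative assumptions, not the template's required schema.

```python
import json
from datetime import datetime, timezone

# Sketch of confidence-based routing with a stop card (circuit breaker)
# and a per-decision audit record. Thresholds and field names are
# illustrative assumptions.

AUTO_APPROVE = 0.90   # at or above this, AI acts without review
HUMAN_REVIEW = 0.60   # between thresholds, route to a human checkpoint
# below HUMAN_REVIEW: stop card fires, decision is escalated

def route(prediction: str, confidence: float) -> dict:
    if confidence >= AUTO_APPROVE:
        decision = "auto"
    elif confidence >= HUMAN_REVIEW:
        decision = "human_review"
    else:
        decision = "stop_card"  # halt and escalate
    # Audit trail record written for every decision (retention policy applies)
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prediction": prediction,
        "confidence": confidence,
        "route": decision,
    }

record = route("billing", 0.72)
print(json.dumps(record, indent=2))  # routes to "human_review"
```

Keeping the thresholds as named constants makes them easy to surface in the workflow diagram and to tighten during the pilot phases.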

Output Evaluation Rubric

Critical Evaluation Skill | Quality Assessment & Tracking

Provides explicit, repeatable criteria for assessing AI output quality. Scales from 10 to thousands of evaluations while maintaining consistency.

Use When: Establishing baseline accuracy, spot-checking outputs, conducting blind evaluations, feeding results into model retraining, or proving system safety to regulators.
Contents:
  • Evaluation dimensions (correctness, confidence calibration, reasoning quality, boundary handling)
  • Rubric template with example (classification tasks, text generation, data analysis)
  • Evaluation tracking sheet (test case log, weekly summary, root cause analysis)
  • Blind evaluation protocol (3-rater inter-rater agreement >80%)
  • Automation options (human + sampling, AI-assisted evaluation, automated test suite)
  • Continuous improvement loop (monthly aggregation, prompt refinement, deployment)
Foundational Skills: Critical Evaluation, Prompting, Workflow Integration
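The blind evaluation protocol's >80% inter-rater agreement can be checked with a simple pairwise percent-agreement calculation (a stricter analysis would use Cohen's or Fleiss' kappa). The three raters' verdicts below are hypothetical.

```python
from itertools import combinations

# Sketch: mean pairwise percent agreement for three blind raters, checked
# against the rubric's 80% threshold. Ratings are hypothetical.

ratings = {  # rater -> verdict per test case
    "rater_1": ["pass", "pass", "fail", "pass", "pass"],
    "rater_2": ["pass", "pass", "fail", "fail", "pass"],
    "rater_3": ["pass", "pass", "fail", "pass", "pass"],
}

def pairwise_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

pairs = list(combinations(ratings.values(), 2))
mean_agreement = sum(pairwise_agreement(a, b) for a, b in pairs) / len(pairs)
print(f"Mean pairwise agreement: {mean_agreement:.0%}")   # prints "87%"
print("Meets 80% threshold:", mean_agreement > 0.80)      # prints True
```

If agreement falls below the threshold, the rubric's criteria (not the raters) are the first thing to revisit, since ambiguous criteria are the usual cause of disagreement.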

Ethics Impact Memo

Ethics & Trust Skill | Guardrails & Governance

Documents ethical risks, hard guardrails, and trust metrics. Ethics is treated as system design, not compliance theater. Guardrails are engineered to survive model upgrades and organizational pressure.

Use When: Designing a new system, preparing for regulatory audit, documenting risk mitigation, or proving to customers that your system is trustworthy.
Contents:
  • System overview (name, owner, model(s), deployment date, review schedule)
  • Ethical risks (bias, hallucination, over-reliance, false negatives, data leakage)
  • Risk mitigation for each (testing, monitoring, escalation)
  • 5 hard guardrails (category boundary, safety escalation, confidence threshold, audit trail, prompt versioning)
  • 5 trust metrics (accuracy by segment, safety detection rate, false escalation rate, human override rate, harm tracking)
  • Decision records (why we made this choice, alternatives considered, conditions for change)
  • Governance checklist (monthly/quarterly/annual reviews)
Foundational Skills: Ethics & Trust, Critical Evaluation, AI Strategy
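Two of the memo's trust metrics, accuracy by segment and human override rate, can be sketched as aggregations over a decision log. The log entries and segment names below are invented for illustration and are not part of the template.

```python
from collections import defaultdict

# Sketch of two Ethics Impact Memo trust metrics computed from a decision
# log: accuracy by segment and human override rate. Entries are hypothetical.

decision_log = [
    {"segment": "enterprise", "correct": True,  "overridden": False},
    {"segment": "enterprise", "correct": True,  "overridden": False},
    {"segment": "consumer",   "correct": False, "overridden": True},
    {"segment": "consumer",   "correct": True,  "overridden": False},
    {"segment": "consumer",   "correct": True,  "overridden": False},
]

by_segment = defaultdict(list)
for entry in decision_log:
    by_segment[entry["segment"]].append(entry["correct"])

accuracy_by_segment = {seg: sum(oks) / len(oks) for seg, oks in by_segment.items()}
override_rate = sum(e["overridden"] for e in decision_log) / len(decision_log)

for seg, acc in accuracy_by_segment.items():
    print(f"Accuracy [{seg}]: {acc:.0%}")
print(f"Human override rate: {override_rate:.0%}")
```

Reporting accuracy per segment rather than in aggregate is what surfaces bias: a system can look accurate overall while underperforming badly for one group.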

How These Templates Work Together

Phase | Question | Use This Artifact | Output
Planning | Should we deploy this AI? | Strategic AI Brief | GO/NO GO decision, sign-off from stakeholders
Design | How do we prompt the model? | Prompting Toolkit | Production prompt library v1.0, test results
Architecture | How does AI fit into our process? | Workflow Blueprint | Workflow diagram, decision logic, SLAs, audit trail spec
Validation | Is the output actually good? | Output Evaluation Rubric | Baseline accuracy, weekly monitoring, improvement recommendations
Governance | Is it trustworthy & safe? | Ethics Impact Memo | Risk register, guardrails, trust metrics, quarterly reviews

Recommended Reading Order

1️⃣ READ: Strategic AI Brief
   └─ Understand your deployment goal, risk tolerance, stakeholder needs
2️⃣ READ: Ethics Impact Memo
   └─ Name the risks and design guardrails BEFORE you build anything
3️⃣ BUILD: Prompting Toolkit
   └─ Design your prompt, test it, establish baseline accuracy
4️⃣ BUILD: Workflow Blueprint
   └─ Define decision logic, human checkpoints, audit trail, monitoring
5️⃣ VALIDATE: Output Evaluation Rubric
   └─ Measure quality continuously, feed results into prompt refinement
🔄 REPEAT: Monthly prompt refresh + quarterly governance review

Why this order? Strategy first (clarify what you're building). Ethics first (design for safety, not as an afterthought). Then design → build → validate → iterate.

Linked to AI-ESG Curriculum Modules

Module | Sci-Fi Metaphor | Recommended Template
Module 1: The 201 Gap | The Teleporter Problem / Jagged Frontier | Strategic AI Brief (define frontier & risk threshold)
Module 2: Framing the Relationship | HAL 9000 / JARVIS | Prompting Toolkit (design transparent AI instructions)
Module 3: Unmasking the Liability Sponge | The Red Shirt / Tricorder | Workflow Blueprint (define human accountability, evidence collection)
Module 4: You Are The Liability Sponge | The Asimov Constraint | Output Evaluation Rubric (measure circuit-breaker effectiveness)
Module 5: Escaping the Liability Sponge | The Lucas Cycle / Seil | Prompting Toolkit (hard-code wisdom, version control)
Module 6: The Refusal Stack | The Refusal Stack | Ethics Impact Memo (defense-in-depth guardrails)
Module 7: The Upside | The Mentat | Workflow Blueprint (human-AI partnership design)
