Level 0.5 Module

Framing the Relationship

Tools, Partners, or Something Else?

3-4 Hours

How We Frame AI Shapes Everything

Before we can build sociable systems, we need to examine an assumption so fundamental that most people never question it: What kind of thing is an AI system, and what kind of relationship should we have with it?

"The way we think about AI shapes how we design workflows, what we expect from systems, and crucially—who absorbs the blame when things go wrong."

This module sits between the Constitutional Foundations and the rest of the curriculum because the framing choice affects everything that follows.

The Story Behind the Term: HAL vs. JARVIS

Relationship A: "The Controller" (HAL 9000). The ship is run by the AI. The humans are passengers. If they get in the way of the mission, they are removed.

Relationship B: "The Partner" (JARVIS). Tony Stark flies the suit. JARVIS handles the math, the power distribution, the radar.

The Terminology Shift: Tech has long defaulted to "master/slave" terminology. This curriculum replaces it with "Controller/Instrument" to match professional governance language.

The Governance Lesson: If you wait until the crisis to ask for authority, you have already lost. You don't need bravery. You need preparation.

CONTROL CARD: The Premortem Charter
TRIGGER: Before the project starts, you negotiate the "Stop Triggers" with leadership.
ARTIFACT: A signed charter that explicitly authorizes you to pause the system.

Pre-Work Reflection

Before diving in, answer these questions honestly. There are no wrong answers—we're establishing your current mental model.

Framing Check
  1. When you last worked with an AI system, did it feel more like...

    ☐ Operating a sophisticated calculator or spreadsheet

    ☐ Having a conversation with a knowledgeable colleague

    ☐ Supervising an eager but unreliable intern

    ☐ Something else entirely

  2. If your AI system makes a mistake that causes harm, who is responsible?

    ☐ The person who operated/approved the output

    ☐ The organization that deployed the system

    ☐ The vendor who built the system

    ☐ Responsibility should be distributed based on contribution

    ☐ It's complicated—depends on the specifics

  3. Which words best describe your organization's AI policies?

    ☐ "Control" / "Oversight" / "Governance"

    ☐ "Collaboration" / "Partnership" / "Working together"

    ☐ "Automation" / "Efficiency" / "Deployment"

    ☐ We don't have formal AI policies yet

Episode 0.5.1

The Framing Problem

Foundational Concepts

45 min

The Premise: The metaphor you use to understand AI isn't neutral. It shapes workflow design, expectation setting, accountability structures, and ultimately—who absorbs the consequences when things go wrong.

What Framing Determines
  • Workflow Design: How tasks are divided, handed off, and verified
  • Expectations: What we think the system can and should do
  • Self-Perception: How humans see their role alongside AI
  • Accountability: Who is responsible when things fail
  • Investment: Whether we invest in understanding vs. just operating
The Hidden Stakes

Most organizations adopt a framing implicitly—through inherited language, vendor marketing, or cultural defaults. They never consciously choose.

The result? Policies that say one thing ("human oversight"), workflows that assume another (rubber-stamping), and audit trails that tell a third story ("approved by [Your Name]").

☕ Case Study: The Language Audit

Setup: A compliance team reviewed their AI governance documentation for framing language.

Findings:

  • Policy documents: "Human oversight," "control mechanisms," "operator responsibility"
  • Training materials: "Use the tool," "deploy the system," "configure the algorithm"
  • Vendor contracts: "The Customer maintains full responsibility for outputs"
  • Job descriptions: "Review and approve AI-generated reports"

The Gap: Every document framed AI as a tool under human control, but no document addressed what happens when the human can't actually verify what they're "controlling."

Result: Perfect conditions for the Liability Sponge.

Workshop: Language Archaeology

Exercise: Pull up any document from your organization that mentions AI (policy, contract, job description, training slide). Highlight every verb used to describe the human-AI relationship.

Tool Framing Verbs:

Use, deploy, operate, configure, control, manage, oversee, monitor, approve, authorize

Partner Framing Verbs:

Collaborate, work with, consult, discuss, verify together, review jointly, co-create, dialogue
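
This tally is easy to automate across a folder of policy documents. Below is a minimal sketch in Python; it counts only the single-word verbs from the lists above (multi-word phrases like "work with" or "verify together" would need phrase matching), and every name in it is illustrative rather than part of any standard tooling.

```python
import re
from collections import Counter

# Verb lists copied from the workshop above; extend them for your own documents.
TOOL_VERBS = {"use", "deploy", "operate", "configure", "control",
              "manage", "oversee", "monitor", "approve", "authorize"}
PARTNER_VERBS = {"collaborate", "consult", "discuss", "co-create", "dialogue"}

def framing_tally(text: str) -> dict:
    """Count tool-framing vs. partner-framing verbs in one document."""
    words = Counter(re.findall(r"[a-z-]+", text.lower()))
    return {"tool": sum(words[v] for v in TOOL_VERBS),
            "partner": sum(words[v] for v in PARTNER_VERBS)}

# Example run on a single policy sentence:
print(framing_tally("Operators must monitor, control, and approve all outputs."))
# -> {'tool': 3, 'partner': 0}
```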

Key Insight: The framing you inherit from vendors, regulators, and cultural defaults may not serve your actual accountability needs.
Episode 0.5.2

The Two Dominant Framings

SWOT Analysis

60 min

Two framings dominate the discourse: AI as Tool (something we control) and AI as Partner (something we collaborate with). Each has profound implications.

The Tool Framing

AI as instrument, human as operator

Core Assumptions
  • AI is an instrument, like a hammer or spreadsheet
  • Humans are operators who wield and control it
  • The relationship is one-directional (human → tool)
  • Goal: mastery, control, predictability
Typical Language
  • "Use," "deploy," "configure," "operate"
  • "Human in the loop," "human oversight"
  • "The tool did what you told it to"
  • "Operator error," "user responsibility"
SWOT Analysis

STRENGTHS

  • Clear legal accountability (operator is responsible)
  • Familiar mental model (we know tools)
  • Preserves human primacy narrative
  • Easier to regulate and audit

WEAKNESSES

  • Creates the Liability Sponge
  • Encourages black-box thinking
  • Misses emergent capabilities
  • Fosters adversarial framing

OPPORTUNITIES

  • Legal clarity in current frameworks
  • Organizational comfort (fits hierarchies)
  • Insurance models already exist

THREATS

  • Increasingly inadequate as AI advances
  • Creates perverse incentives
  • May become legally untenable
The Partner Framing

AI as collaborator, human as co-worker

Core Assumptions
  • AI has capabilities, limitations, and behaviors worth understanding
  • Humans are collaborators working alongside it
  • The relationship is bidirectional (human ↔ AI)
  • Goal: mutual understanding, complementary strengths
Typical Language
  • "Work with," "collaborate," "consult"
  • "Mutual transparency," "shared understanding"
  • "The system and user miscommunicated"
  • "Honest acknowledgment of limitations on both sides"
SWOT Analysis

STRENGTHS

  • Encourages genuine system understanding
  • Creates space for acknowledging limitations on both sides
  • Better matches how effective human-AI work actually happens
  • Distributes accountability more honestly

WEAKNESSES

  • Legally murky (can you partner with a non-entity?)
  • Risk of inappropriate anthropomorphization
  • May diffuse accountability too much
  • Harder to explain to auditors

OPPORTUNITIES

  • More resilient systems (mutual compensation)
  • Better outcomes through genuine collaboration
  • Ahead of regulatory evolution
  • Opens space for AI to flag uncertainty

THREATS

  • Legal frameworks haven't caught up
  • Can be misused to diffuse blame
  • Requires sophisticated governance
  • Cultural resistance
The Story Behind the Term: HAL vs. JARVIS

Science fiction gives us two distinct visions of the AI relationship:

HAL 9000 (2001: A Space Odyssey)

The "Mission Control" model. The AI runs the ship; the humans are passengers/monitors. When goals conflict, HAL locks the humans out to protect the mission. It is a separate entity that thinks for you.

JARVIS (Iron Man)

The "Suit Interface" model. Tony Stark is the pilot; JARVIS is the co-pilot. JARVIS provides data, scans threats, and manages power, but Tony flies the suit. JARVIS thinks with you.

Most corporate governance treats AI like HAL—a separate "tool" we order around (or that orders us around). But effective professionals experience AI like JARVIS—a layer of cognitive armor that enhances their capacity without removing their agency.

The Insight: We are building "Human-in-the-Loop" (monitoring HAL) when we should be building "Human-in-Command" (wearing JARVIS).
The Phenomenological Test

Here's a question worth sitting with: When you're working well with an AI system—really well, in flow—does it feel like operating a tool?

Many people report that effective AI work feels more like thinking together than operating an instrument. The "tool" framing describes a relationship many users don't recognize from the inside.

Key Insight: Neither framing is "correct"—but the tool framing may be more dangerous than it appears, precisely because it claims to preserve human primacy while actually setting humans up to absorb blame.
Episode 0.5.3

The Third Option: The Trainee Framing

Alternative Model

45 min

The Premise: There may be a third framing that captures something neither "tool" nor "partner" quite gets: AI as a brilliant but unreliable junior colleague.

The Trainee Framing

AI as capable but unreliable junior colleague

Core Assumptions
  • • AI is capable but needs oversight and correction
  • • You wouldn't blindly sign off on a trainee's work
  • • You also wouldn't try to "control" a trainee—you review, guide, catch errors
  • • The trainee might know things you don't (they've read more than you ever will)
What This Creates
  • Natural checkpoints: Review points feel appropriate, not paranoid
  • Appropriate vigilance: You expect errors without assuming malice
  • Capacity building: You might help the system improve over time
  • Realistic expectations: Neither blind trust nor adversarial suspicion
Why This Might Work

Most professionals already know how to supervise trainees. They know you don't rubber-stamp work, but you also don't micromanage every keystroke. They know trainees can be brilliant and wrong in the same breath. This framing leverages existing professional intuitions.

ADVANTAGES

  • Intuitive for most professionals
  • Creates appropriate vigilance without hostility
  • Acknowledges AI can exceed human capability in some areas
  • Fits existing supervision frameworks
  • Makes "review" meaningful rather than theatrical

LIMITATIONS

  • Still somewhat hierarchical
  • May not capture cases where AI genuinely exceeds human capability
  • "Trainee" implies eventual graduation—does AI graduate?
  • May underestimate AI capabilities in some domains
☕ Case Study: The Reviewer's Stance

Setup: Two ESG analysts receive the same AI-generated supplier risk report.

Analyst A (Tool Framing): "The system generated this report. I'll approve it unless I see something obviously wrong." Reviews in 30 seconds. Misses a subtle data inconsistency.

Analyst B (Trainee Framing): "A smart junior wrote this, but they sometimes miss context or get sources wrong. Let me check the reasoning." Spends 4 minutes. Catches the inconsistency.

The Difference: Same report, same system, same time pressure. Different framing created different behavior.

Exercise: Framing Shift

Scenario: You receive an AI-generated ESG compliance report that will go to the board. Complete the sentence for each framing:

Tool Framing: "I need to check that..."

Partner Framing: "I want to discuss with the system..."

Trainee Framing: "My smart but fallible junior might have..."

Notice how each framing directs attention to different concerns and different actions.

Key Insight: The trainee framing may be especially useful for transitional periods where AI capability is uneven—brilliant at some tasks, unreliable at others.
Episode 0.5.4

Design Implications

Practical Application

45 min

The Premise: Your framing choice isn't abstract philosophy—it changes how you design workflows, train people, build audit trails, and allocate responsibility.

How Framing Changes System Design
  • Error Handling: Tool framing says "the user made an error"; partner framing says "the system and user miscommunicated"
  • Training Focus: Tool framing teaches "how to operate the system"; partner framing teaches "how to work well together"
  • Audit Trail: Tool framing logs "User X approved output Y"; partner framing logs "AI contributed X, human contributed Y, decision was Z"
  • Uncertainty: Tool framing hides or ignores it; partner framing surfaces and discusses it
  • Blame Allocation: Tool framing lets blame fall to the human operator; partner framing traces it through the collaboration
  • System Design: Tool framing optimizes for human control; partner framing optimizes for mutual transparency
  • Staffing: Tool framing staffs enough people to click buttons; partner framing staffs enough people to think and dialogue
  • Success Metric: Tool framing measures throughput and efficiency; partner framing measures quality of outcomes and defensibility
Tool-Framed Workflow
  1. AI generates report
  2. Human receives notification
  3. Human clicks "Approve" or "Reject"
  4. Audit log: "Approved by [User]"
  5. If failure: "User failed to catch error"

Note: The human's contribution is recorded as a binary approval, regardless of what they actually reviewed.

Partner-Framed Workflow
  1. AI generates draft + confidence indicators
  2. Human reviews flagged uncertainties
  3. Human adds context AI couldn't access
  4. Audit log: "AI draft (82% confidence), human verified sections 2,4,7, added context on local supplier"
  5. If failure: Trace where the collaboration broke down

Note: Both contributions are documented. Accountability is traceable.
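
To make the contrast concrete, here is a minimal sketch of what each style of log entry might capture. It assumes a JSON-style record store, and every field name is an illustrative assumption rather than a prescribed schema:

```python
# Tool-framed entry: the human's contribution collapses to a single bit.
tool_framed_entry = {
    "report_id": "ESG-2024-117",   # hypothetical identifier
    "action": "approved",
    "user": "j.doe",
}

# Partner-framed entry: both contributions are documented and traceable.
partner_framed_entry = {
    "report_id": "ESG-2024-117",
    "ai_contribution": {
        "draft_version": 3,
        "confidence": 0.82,                  # model-reported confidence indicator
        "flagged_uncertain": ["section 3"],  # what the system itself flagged
    },
    "human_contribution": {
        "reviewer": "j.doe",
        "sections_verified": [2, 4, 7],
        "context_added": "local supplier context the model could not access",
        "overrides": [],                     # where the human overrode AI judgment
        "review_minutes": 22,                # enables the velocity check below
    },
    "decision": "approved with amendments",
}
```

If a failure later surfaces, the partner-framed record lets you ask where the collaboration broke down; the tool-framed record only lets you blame j.doe.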

The Honest Audit Trail

A partner-framed audit trail documents the collaboration, not just the approval. It should answer:

  • What did the AI contribute? (draft, analysis, flagged risks)
  • What did the AI flag as uncertain?
  • What did the human actually review? (not just "approved")
  • What context did the human add?
  • Where did the human override or accept AI judgment?
  • How much time did the human have? (velocity check; see the sketch below)
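
That last question, the velocity check, is the most mechanical of the six and the easiest to automate. A minimal sketch, assuming a floor of three minutes for a meaningful review; the threshold is a placeholder to calibrate per document type, not an established standard:

```python
def velocity_check(items_reviewed: int, shift_minutes: float,
                   min_minutes_per_item: float = 3.0) -> bool:
    """Return True if the claimed review rate is humanly plausible."""
    if items_reviewed == 0:
        return True  # nothing reviewed, nothing to dispute
    return (shift_minutes / items_reviewed) >= min_minutes_per_item

# Example: 200 reports "reviewed" in one 8-hour shift is 2.4 minutes each.
print(velocity_check(items_reviewed=200, shift_minutes=480))  # -> False
```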
Design Exercise: Reframe Your Workflow

Task: Take one AI-assisted workflow in your organization. Redesign the audit trail to reflect partnership rather than tool-operation.

Current Workflow Name:

Current Audit Log Entry:

Redesigned Audit Log Entry:

Key Insight: The audit trail is where the framing becomes legally real. A tool-framed log says "the human approved." A partner-framed log says "here's what each party contributed."
Episode 0.5.5

The Liability Sponge Revisited

Critical Connection

30 min

The Premise: The Liability Sponge isn't just a governance failure—it's specifically a failure mode of the tool framing.

How the Tool Framing Creates the Sponge
  1. We say the human "controls" the AI. The framing establishes that the human is the operator and the AI is the instrument.

  2. Therefore the human is responsible for outputs. Legal and organizational accountability flows to the "operator."

  3. But the human can't actually understand or verify what the AI does. Black boxes, speed mismatches, volume impossibilities.

  4. So the human becomes responsible for things they cannot control. The "control" framing creates the very trap it claims to prevent.

  5. The human absorbs all liability. They become the sponge—soaking up blame for a system they never truly controlled.

The Irony of the Tool Framing

The tool framing is often defended as protecting human dignity and primacy. "We're in charge, not the machines."

But in practice, it does the opposite: it sets humans up to absorb blame for systems they never truly understood or controlled. The framing that claims to preserve human agency actually erodes human dignity by turning operators into scapegoats.

The partner framing, paradoxically, may better protect human dignity—because it honestly acknowledges what humans can and cannot do in the collaboration.

Tool Framing Accountability

"You are in control. Therefore you are responsible for everything."

  • Creates impossible expectations
  • Inevitable failure when AI exceeds human verification capacity
  • Blame falls on the "operator"
  • Organization and vendor are shielded
Partner Framing Accountability

"You are part of a system. Your contribution is real but bounded."

  • More honest about human capabilities
  • Documents both contributions
  • Accountability traced through collaboration
  • Failure analysis identifies where breakdown occurred
The Framing-Sponge Connection

To prevent the Liability Sponge, you must address the framing, not just the workflow (a diagnostic sketch follows this list):

  • If your policies use "control" language but your systems are black boxes, you've created a sponge.
  • If your audit trail says "approved by [human]" but doesn't document what they actually reviewed, you've created a sponge.
  • If your staffing assumes humans can review at AI speed, you've created a sponge.
  • If your framing honestly acknowledges both parties' contributions and limitations, the sponge cannot form.
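
Those four conditions translate directly into a self-diagnostic. The sketch below encodes them as boolean checks; the inputs are assumptions your own audit would have to supply, and the function is a teaching aid, not an audit standard:

```python
def sponge_conditions(uses_control_language: bool,
                      system_is_black_box: bool,
                      audit_logs_review_detail: bool,
                      review_rate_is_plausible: bool) -> list[str]:
    """Return the Liability Sponge conditions present in a workflow."""
    findings = []
    if uses_control_language and system_is_black_box:
        findings.append("'control' language applied to a black-box system")
    if not audit_logs_review_detail:
        findings.append("audit trail records approval but not what was reviewed")
    if not review_rate_is_plausible:
        findings.append("staffing assumes humans can review at AI speed")
    return findings  # an empty list means the sponge cannot form

# Example: control language over a black box, thin logs, implausible review rates.
print(sponge_conditions(True, True, False, False))
```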
Key Insight: The Liability Sponge is not inevitable. It's a design choice—specifically, the choice to use tool framing in contexts where genuine human control is impossible. Change the framing, prevent the sponge.

Module Summary

Key Takeaways

Conceptual Framework
  • The framing you choose shapes every design decision
  • Tool framing creates the Liability Sponge
  • Partner framing distributes accountability honestly
  • Trainee framing may offer a practical middle ground
  • Most organizations adopt framings implicitly—make it explicit
Practical Tools Acquired
  • Language audit methodology
  • SWOT analysis for each framing
  • Honest audit trail design
  • Workflow reframing techniques
  • Framing-to-sponge connection diagnosis

Post-Module Assessment

Revisit your pre-work reflection. Has your understanding shifted?

Reflection Questions
  1. What framing does your organization currently use? How do you know?
  2. Does this framing create or prevent Liability Sponge conditions?
  3. What would need to change to adopt a more honest framing?
  4. Which framing feels most accurate to your actual experience working with AI?

Continuing the Journey

Level 1: Epistemic Failures

Now that you understand how framing shapes accountability, you're ready to examine what happens when systems become too opaque to question (Clarke's Law) or too aligned to refuse.

The framing insights from this module will help you recognize when "human oversight" is real versus theatrical.