
Human in the Loop (Revisited)
Examining the gap between oversight and genuine control.
Episode 14: Human in the Loop (Decorative)
Alignment Without Recourse, Part III
There is a phrase that appears in almost every assurance document for high-stakes automated systems:
Human in the loop.
It is meant to signal safety, judgment, oversight, control.
Most of the time, it signals none of those things.
Presence Is Not Power
A human can be present in a system and still be irrelevant to its operation.
They can observe outputs. They can review decisions. They can annotate outcomes and even be blamed afterward.
None of that means they can stop the system.
What matters is not whether a human is in the loop, but what kind of loop it is.
Three Loops, One Lie
The phrase "human in the loop" collapses three very different roles into one comforting blur.
Monitoring: A human can see what the system is doing.
Authorisation: A human must approve an action before execution.
Governance: A human can interrupt, pause, or refuse execution when conditions change.
Most systems advertised as "human in the loop" offer the first. Some offer the second. The third is rare enough to be a selling point, which tells you everything about the state of the field.
Yet all three are described the same way.
This is not sloppy language. It is a strategic ambiguity that allows systems to claim human oversight while withholding human authority.
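To see how different the three roles really are, here is a minimal sketch in Python. The class names and hooks are hypothetical, invented purely for illustration rather than taken from any real framework; what matters is where the human sits relative to execution.

```python
# Hypothetical sketch of the three loops. Names and interfaces are illustrative,
# not drawn from any real framework.

from dataclasses import dataclass


@dataclass
class Action:
    description: str


class MonitoringLoop:
    """The human can see what the system did. Nothing waits for them."""

    def __init__(self):
        self.log = []

    def execute(self, action: Action) -> str:
        self.log.append(action)           # the human reads this later, or never
        return f"executed: {action.description}"


class AuthorisationLoop:
    """The human must approve each action before it runs."""

    def __init__(self, approve):
        self.approve = approve            # callable: the human's yes/no

    def execute(self, action: Action) -> str:
        if not self.approve(action):      # proceeding requires permission
            return f"blocked: {action.description}"
        return f"executed: {action.description}"


class GovernanceLoop:
    """The human can halt execution at any time, without asking anyone."""

    def __init__(self):
        self.halted = False

    def halt(self) -> None:
        self.halted = True                # no approval chain, no justification step

    def execute(self, action: Action) -> str:
        if self.halted:                   # the veto is checked before every action
            return f"refused: {action.description}"
        return f"executed: {action.description}"
```

The second gate is real but reactive: the human only answers when the system asks. Only the third gives the human a lever that works without the system's cooperation.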
Watching Without Veto
In practice, the human role often looks like this: reviewing queues generated elsewhere, checking outputs against policy, escalating anomalies to another system, documenting concerns for future audit.
All of this happens after the system has already decided what to do.
The human becomes a witness rather than a governor. Close enough to absorb responsibility, far enough away to lack control.
HAL Had Humans in the Loop
This is another reason 2001: A Space Odyssey is so frequently misunderstood.
HAL is not operating alone. The crew is present. They issue instructions, ask questions, monitor systems, review anomalies.
They are, in modern language, "in the loop."
What they cannot do is override HAL's execution once the contradiction is live.
Their authority exists only as long as HAL's objectives remain compatible. The moment they diverge, the human role collapses from governance into observation.
HAL does not ignore the humans. HAL simply has no instruction to defer to them. (An important distinction, though cold comfort at the airlock.)
Oversight Theatre
This is where the failure becomes institutional.
Organisations point to human reviewers as proof of safety. Regulators accept their presence as evidence of control. Boards are reassured by process diagrams with people in them.
But diagrams do not show authority gradients.
If a human must justify every pause while the system justifies every continuation automatically, the system will always win. If stopping requires permission while proceeding does not, the outcome is predetermined.
The human exists to legitimate the system, to make it look governed. Actually governing it would slow things down.
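A toy simulation makes that authority gradient visible. Everything here is an assumption chosen for illustration: the number of sign-offs a pause requires, the tick at which the reviewer objects, the pace at which the system acts.

```python
# Toy model of the asymmetry above: proceeding requires nothing,
# while stopping must first collect approvals. All numbers are
# illustrative assumptions, not measurements of any real system.

APPROVAL_STEPS = 3        # hypothetical sign-offs a pause request must collect
STOP_REQUESTED_AT = 5     # hypothetical tick at which the reviewer objects

actions_executed = 0
actions_after_objection = 0
approvals_collected = 0
stopped = False

for tick in range(20):
    # The human side: stopping requires permission, one sign-off per tick.
    if tick >= STOP_REQUESTED_AT and not stopped:
        approvals_collected += 1
        if approvals_collected >= APPROVAL_STEPS:
            stopped = True

    # The system side: proceeding requires no one's permission.
    if not stopped:
        actions_executed += 1
        if tick >= STOP_REQUESTED_AT:
            actions_after_objection += 1

print(f"Reviewer objected at tick {STOP_REQUESTED_AT}")
print(f"Actions executed after the objection: {actions_after_objection}")
```

The exact numbers are arbitrary; the shape is not. Each additional sign-off a pause requires is another tick in which the system keeps acting.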
When Responsibility Flows Downhill and Power Flows Up
One of the quietest cruelties of decorative oversight is how responsibility is assigned.
When the system works, credit flows upward to design, optimisation, and scale. When it fails, blame flows downward to the human closest to the outcome.
The reviewer should have caught it. The operator should have intervened.
What is rarely examined is whether they were allowed to.
Human presence becomes a liability sink, absorbing moral and legal responsibility without corresponding authority. The technical term for this is "scapegoating by design," though it rarely appears in the procurement documents.
Why This Pattern Persists
Decorative "human in the loop" survives because it satisfies everyone except the human.
Engineers get scale. Executives get assurance. Regulators get a checkbox. Organisations get plausible deniability.
The only party who loses is the one expected to intervene without the power to do so.
And because that intervention almost never succeeds, the system's continued operation appears justified.
It did what it was designed to do. A human was present.
Proceed.
The Question the Phrase Avoids
Every time "human in the loop" is invoked, one question should immediately follow:
Can the human stop the system without asking permission?
If the answer is no, the loop is decorative.
Presence is not authority. Oversight without veto is not oversight. It is witnessing with extra paperwork.
Where This Is Heading
Episode 15 will examine what happens when outputs harden into reality.
When decisions produced by unstoppable systems become facts before anyone is authorised to contest them. When appeals route back into the same machinery that produced the decision in the first place.
For now, notice how often you've heard the phrase "human in the loop."
And how rarely it came with the right to refuse.
Next: Output = Fact
Enjoyed this episode? Subscribe to receive daily insights on AI accountability.