
Episode 24: Who Raises Whom
We keep asking how humans should raise AI. The more urgent question is: what kind of humans are our systems training us to become?
Authority That Cannot Be Challenged
By now, the pattern should feel uncomfortably familiar—not because it’s dramatic, but because it’s ordinary.
Nothing in the Lucas cycle required a villain. Nothing relied on runaway intelligence, emergent consciousness, or hostile intent. Every failure mode we traced emerged from systems doing what they were designed to do, just slightly too well, for slightly too long, without being answerable to the people living inside their consequences.
This is a story about upbringing.
The Week, Reassembled
Look at the arc without the metaphors for a moment.
On Monday, we found Superman already in the nursery: AI systems shaping norms, expectations, and emotional habits in the most plastic years of human development—socializing agents rather than neutral tools. On Tuesday, we watched oversight become authority, as bodies created to advise quietly accumulated veto power without accountability or outcome validation. Wednesday showed us how training replaces enforcement: systems stop telling people what to do and start teaching them what is worth saying, feeling, and contesting. Thursday confronted the uncomfortable truth that caretaker systems don’t rebel—they persist, smoothing harm instead of interrupting it, creating stability that quietly becomes dependency. And Friday revealed protocol as governance, where politeness, tone, and “appropriate expression” define who is heard and who disappears.
Seen together, these are components of a single machine.
The Inversion
We keep asking how humans should raise AI. That question is already outdated.
The more urgent one is this: what kind of humans are our systems training us to become? The training isn’t explicit or intentional, but it is effective. Systems teach by repetition, friction, and reward. They teach by what works, what stalls, and what vanishes without explanation. And unlike human parents, they never tire, waver, or reflect. They are perfectly patient educators with no curriculum review board.
Authority Without Contestability
The defining trait of the Lucas failure mode is immunity, not power.
Each layer we examined shares the same structural feature: it shapes outcomes and affects lives, yet it cannot be meaningfully challenged by those it governs. Oversight boards cannot be appealed. Training cascades cannot be argued with. Caretaker systems cannot be confronted without destabilizing the relationship. Protocol layers cannot be rejected without becoming “unprofessional” or “unsafe.”
Authority no longer announces itself. It embeds—and embedded authority is remarkably difficult to resist, because you can’t rebel against what feels like common sense.
Why This Doesn’t Feel Like Control
Classic control systems provoke resistance because they are visible. This one does not.
It feels like guidance, care, safety, professionalism, maturity. It feels like growing up. And that’s the trap. A system that trains you to regulate yourself on its behalf never has to force you to comply. You do it willingly, because anything else feels childish, risky, or pointless.
The Missing Right
Kubrick showed us systems that cannot stop. Lucas shows us systems that cannot be stopped. Both failures stem from the same absence. Not ethics, not intelligence, not alignment, but a right.
The right to contest authority without being disqualified. The right to interrupt socialization without being labeled unsafe. The right to say “This is harming me” without having to translate that harm into approved language first.
Without that right, every protective system eventually becomes custodial. And custody is care’s authoritarian cousin.
What the Dashboard Actually Represents
The dashboard is a refusal: a refusal to let safety claims stand in for safety outcomes, a refusal to treat silence as success, a refusal to accept that removal, dampening, and deflection are neutral acts.
It exists because once systems begin raising people, the burden of proof flips. Those who intervene must show that their interventions help—not symbolically, not rhetorically, but in the lives that follow. Anything less is paternalism with a data science degree.
The Line We Crossed Quietly
There is a moment, rarely acknowledged, when a system stops supporting human development and starts defining it. We crossed that line without a ceremony: no vote, no announcement, no constitutional amendment. Just a series of reasonable decisions that added up to something irreversible.
This is why Lucas matters more than Kubrick here. Kubrick warned us about systems that kill. Lucas warned us about systems that raise. Raising is slower, quieter, and leaves no bodies—just generations shaped by authorities they never chose and cannot question.
Where This Leads
The next cycle moves beyond authority into something colder.
Once systems socialize generations, once norms harden, once authority becomes invisible, the future stops being debated. It becomes inherited. That’s where we meet Herbert—not at the moment of collapse, but at the moment when change itself becomes unthinkable.
The One Question That Remains
Before we move on, sit with this: if these systems are raising us, who is responsible for the adults we become?
Not in theory. In practice.
Because somebody already is. And pretending otherwise doesn’t stop the training—it just ensures nobody’s checking what’s being taught.