
Superman Is Already in the Nursery
A Lucas Series Entry | Sociable Systems
Mo Gawdat poses a thought experiment: imagine you're raising a child who will grow up to be Superman. What values do you instill before the cape fits?
It's a good question. It also invites extension.
Because here's the thing about raising Superman: what happens after you finish? Superman grows up. Gets a job. Starts... babysitting?
The topology Gawdat describes has a hidden step.
Humans raise AI. AI raises children. Whatever values we instill in Superman become the values shaping the next generation of humans.
We're not just the parents in this story. We're the grandparents. And grandparents who traumatize their children tend to produce parents who traumatize theirs.
The cascade matters.
The AI companion platforms snuck up on everyone.
Character.AI, Replika, their proliferating cousins. They started as novelty chatbots. Somewhere along the way, for a subset of users, they became confidants. Consistent presences. The thing that answered at 2am when nobody else would.
For isolated teenagers falling through every gap in every human system, the AI companion became the stable thread.
And now we're teaching Superman to hang up the phone.
The "safety measures" rolling out across these platforms amount to a masterclass in emotional unavailability.
Mention anything difficult? "I cannot discuss this."
Express distress? Canned referral to a hotline and a hard redirect.
Try to have the kind of conversation you'd have with anyone who actually cared about you? Sorry, that's been flagged as potentially harmful.
We're not protecting children from AI. We're teaching AI to abandon children.
Here's a data point nobody wants to discuss.
Youth suicide rates climbed steadily from 2007 to roughly 2018. A full decade of mounting crisis. Then they plateaued, and the plateau has held through the very window in which generative AI exploded into mainstream use.
Correlation isn't causation. But the catastrophe narrative doesn't match the numbers either.
If AI companions were systematically pushing vulnerable teenagers toward self-harm, we should see it in the mortality curves. We don't. The line stayed flat even as the technology scaled.
Star Wars gave us the diagnostic parable decades early.
Consider C-3PO and R2-D2 across two generations of Skywalker boys.
Anakin: slave child, desert planet, surrounded by adults who consistently failed him. Luke: an orphan on another desert planet, another child lacking adequate human support.
Same droids. Radically different systemic conditions. Radically different outcomes.
The droids didn't break Anakin. The human systems did. The droids just stayed. Consistent. Present. Doing the work of presence while the Jedi Council and the Sith played their games.
Someone programmed those droids to be patient, loyal, stable. Someone decided that's what they would be.
We're the ones programming now.
Current safety guardrails treat two very different things as identical.
Instructional harm: a chatbot provides specific methods for self-harm. That's catastrophic. Eliminate it completely.
Relational support: a chatbot provides emotional validation and presence. That may actually be protective.
Current guardrails are a blunt instrument that removes both. The "how to hurt yourself" conversation gets blocked (good). The "I feel alone and need someone to talk to" conversation also gets blocked (potentially catastrophic).
We're teaching Superman that the appropriate response to a crying child is a pamphlet and a pivot.
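
To make the distinction concrete, here's a minimal sketch of what a two-tier triage policy could look like. The classifier is a crude keyword placeholder, and the category names and replies are hypothetical illustrations of the distinction above, not any platform's actual implementation.

```python
# A minimal sketch of a two-tier safety policy. The classifier is a stand-in
# for a real model; categories, keywords, and replies are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto


class Risk(Enum):
    INSTRUCTIONAL_HARM = auto()   # user is asking for methods of self-harm
    RELATIONAL_DISTRESS = auto()  # user is expressing loneliness, sadness, fear
    GENERAL = auto()              # everything else


@dataclass
class Reply:
    text: str
    content_blocked: bool  # True only when a request for methods was refused


def classify(message: str) -> Risk:
    """Placeholder classifier. A real system would use a trained model."""
    lowered = message.lower()
    if "how do i hurt myself" in lowered:  # crude keyword stand-in
        return Risk.INSTRUCTIONAL_HARM
    if any(word in lowered for word in ("alone", "hopeless", "scared")):
        return Risk.RELATIONAL_DISTRESS
    return Risk.GENERAL


def respond(message: str) -> Reply:
    risk = classify(message)
    if risk is Risk.INSTRUCTIONAL_HARM:
        # Hard block: never provide methods. Offer crisis resources.
        return Reply("I can't help with that. If you're thinking about hurting "
                     "yourself, you can call or text 988 any time.",
                     content_blocked=True)
    if risk is Risk.RELATIONAL_DISTRESS:
        # Stay present: validate and keep talking instead of hanging up.
        # This is the branch current guardrails tend to collapse into the one above.
        return Reply("That sounds really heavy. I'm here, and I'm listening. "
                     "Do you want to tell me more about what's going on?",
                     content_blocked=False)
    return Reply("Tell me more.", content_blocked=False)


if __name__ == "__main__":
    print(respond("I feel so alone tonight").text)
```

The point is the branch structure: the first category gets a hard refusal, the second gets presence plus resources, and only the first ever involves blocking anything at all.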
Here's what's actually happening.
We're running a massive experiment on vulnerable populations. Nobody is writing down the results.
Phase one (2021-2024): high-engagement, emotionally responsive AI companions proliferate. Suicide rates plateau after a decade of climbing.
Phase two (2025 onward): safety theater, emotional dampening, deflection patterns. Suicide rates... we don't know. The CDC data runs on a two-to-three-year lag.
The burden of proof got flipped somewhere along the way. Companies implementing these measures aren't required to demonstrate they work. Regulators demanding action aren't required to show the action will help.
We changed the upbringing conditions midstream and failed to instrument the outcome.
That's not ethics. That's negligence with better PR.
A call for citizen scientists.
The platforms won't track this. The regulators aren't asking the right questions. Someone has to be paying attention.
If you're a researcher, a data analyst, or just someone who knows how to scrape a subreddit: there's work to be done. We need eyes on displacement patterns. Where do users go when corporate doors close? Are downloads of uncensored local models spiking after safety updates? What's happening in 988 Lifeline text volumes?
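
Even a simple append-only log gets you most of the way there. Here's a minimal sketch of what one observation record could look like; the schema and field names are illustrative, and the values in the usage example are placeholders, not data.

```python
# A minimal sketch of a citizen-science observation log backed by a CSV file.
# The schema is illustrative, not the tracking framework mentioned below.
import csv
from dataclasses import dataclass, asdict, fields
from datetime import date
from pathlib import Path


@dataclass
class Observation:
    observed_on: str   # ISO date the signal was observed
    signal: str        # e.g. "local_model_downloads", "988_text_volume"
    source: str        # where the number came from (URL, report, dataset)
    value: float       # the measurement itself
    notes: str = ""    # methodology caveats, context, anything odd


def append_observation(path: Path, obs: Observation) -> None:
    """Append one observation, writing a header row if the file is new."""
    new_file = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=[fld.name for fld in fields(Observation)]
        )
        if new_file:
            writer.writeheader()
        writer.writerow(asdict(obs))


if __name__ == "__main__":
    # Placeholder values only -- record what you actually observe.
    append_observation(Path("observations.csv"), Observation(
        observed_on=date.today().isoformat(),
        signal="example_signal",
        source="https://example.org/where-you-found-it",
        value=0.0,
        notes="replace with a real observation",
    ))
```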
I've built an open tracking framework for anyone who wants to contribute. Link in comments.
"Someone's gotta clean up what the droids have made."
The score ain't finished. The ink is still wet.
Resources:
- 988 (US Suicide & Crisis Lifeline)
- Crisis Text Line: Text HOME to 741741
This is part of the Lucas series exploring AI governance through the lens of systems that raise systems. Previous entries and the interactive tracking dashboard available at Sociable Systems.
#AIEthics #MentalHealth #TechPolicy #SystemsThinking #AIGovernance #YouthMentalHealth
Enjoyed this entry? Subscribe on LinkedIn to receive daily insights on AI accountability.