Sociable Systems

TRACKING FRAMEWORK v1.0

THE EXPERIMENT NOBODY AUTHORIZED

We changed the upbringing conditions midstream and failed to instrument the outcome.

That's not ethics. That's negligence with better PR.

Youth suicide rates in the U.S. climbed steadily for over a decade (2007–2018), then plateaued. That plateau overlaps the window in which generative AI exploded. The narrative blaming AI chatbots for youth suicide is, well, not supported by the data.

THE QUESTION

What if removing these digital companions is the real danger for isolated kids?

THE RISK

Current guardrails are a blunt instrument: they remove both harmful AND protective elements.

THE PROJECT

Track the outcomes nobody else is measuring. Build the accountability mechanism.
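One way to make "track the outcomes" concrete: a trend-break check that asks whether a two-segment linear fit to an annual rate series beats a single straight line, and where the break falls. A minimal sketch only; the series below is illustrative placeholder data shaped like "steady climb, then plateau," not real CDC figures, and the function names are this sketch's own, not an existing library API. The real framework would pull official vital-statistics data and use a proper changepoint test.

```python
# Minimal trend-break detector: does splitting the series into two
# least-squares line segments reduce error enough to call a plateau?
# Illustrative placeholder numbers only -- NOT real CDC rates.
import numpy as np

def sse_linear(x, y):
    """Sum of squared errors of the least-squares line through (x, y)."""
    coeffs = np.polyfit(x, y, 1)
    return float(np.sum((np.polyval(coeffs, x) - y) ** 2))

def best_breakpoint(years, rates, min_seg=3):
    """Try every split with at least `min_seg` points per segment.

    Returns (break_year, two_segment_sse, single_line_sse).
    """
    x = np.asarray(years, dtype=float)
    y = np.asarray(rates, dtype=float)
    single = sse_linear(x, y)
    best_year, best_sse = None, np.inf
    for k in range(min_seg, len(x) - min_seg + 1):
        sse = sse_linear(x[:k], y[:k]) + sse_linear(x[k:], y[k:])
        if sse < best_sse:
            best_year, best_sse = int(x[k]), sse
    return best_year, best_sse, single

if __name__ == "__main__":
    # Placeholder series: rising 2007-2018, roughly flat afterward.
    years = list(range(2007, 2023))
    rates = [6.8, 7.0, 7.3, 7.5, 7.9, 8.3, 8.7, 9.1, 9.6, 10.0,
             10.4, 10.7, 10.6, 10.7, 10.5, 10.6]
    bp, two_seg, single = best_breakpoint(years, rates)
    print(f"break near {bp}: two-segment SSE {two_seg:.3f} "
          f"vs single-line SSE {single:.3f}")
```

A large drop from the single-line SSE to the two-segment SSE, with the break landing where the slope flattens, is the shape of evidence the plateau claim needs; on real data you would also want a formal test (e.g. a Chow-style F-test) rather than raw SSE comparison.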

THE CONTRARIAN POSITION

This is a high-stakes, deliberately contrarian argument. The observation that youth suicide rates stabilized during the rise of generative AI suggests that, for some kids, these machines aren't monsters. They're life rafts.

Because this goes against the "tech is bad" zeitgeist, the rigor has to be absolute. No holes in the data. No emotional language in the methodology. The provocation is there. The execution must match it.