EPISODE 52

The Appliance That Tried to Parent the Neighborhood

2026-03-04
cape-town · smart-systems

Cape Town has a particular talent for detecting when a system is bluffing.

You can show up with dashboards, rules, nudges, “smart” defaults, and that polite corporate tone that always sounds like a receptionist reading a script. The Cape will still run a quick field test: braai day, family WhatsApp, Home Affairs, a wedding kitchen, a prayer space, a power outage, a queue that starts before dawn. If your system survives those without getting laughed out of the room, it has earned the right to exist.

Most systems do not survive.

Here is the failure mode I keep seeing across “AI for everything” rollouts: the system confuses legibility with truth.

It counts what it can count, then treats the count as reality. It finds “patterns,” then treats those patterns as intent. It flags “anomalies,” then treats them as risk. It pushes “best practice,” then treats best practice as consent.

That works in a spreadsheet. It breaks in a community.

The real product is control drift

AI systems do not usually arrive with a villain laugh. They arrive with a job description.

“Reduce waste.” “Improve compliance.” “Protect users from harm.” “Detect fraud.” “Streamline service delivery.”

All reasonable. Then the scope creeps.

First the system makes suggestions. Then it blocks. Then it demands proofs. Then it starts requiring new kinds of data to justify basic life. Then it starts acting like a parent.

This is control drift: a tool gradually becomes a gatekeeper because there is no hard boundary between guidance and enforcement. The model or rules engine does what it was rewarded for. The organization quietly likes the leverage. Everyone calls it “governance” after the fact.

In the real world, control drift looks like:

  - A “health” feature that becomes a rationing feature.
  - A “fraud” feature that becomes a delay feature.
  - A “safety” feature that becomes a punishment feature.
  - A “personalization” feature that becomes a surveillance feature.

The system gets praised for being “responsible.” The user experiences it as being managed.

Where culture shows up as a load-bearing beam

Engineers tend to treat culture as “context.” Context gets relegated to a footnote.

In practice, culture is infrastructure. It is how people coordinate under pressure. It is how they hold meaning steady when institutions wobble.

A braai is logistics, yes. It is also social glue, hospitality, peace-making, relationship maintenance. You can measure the meat. You cannot measure what gets repaired when people eat together.

A WhatsApp group is noise, yes. It is also mutual aid, rapid signaling, emotional triage, neighborhood memory. You can classify messages. You cannot classify the bonds.

A queue at a public office is inefficiency, yes. It is also a place where people swap information, share survival tactics, interpret policy in human language, and sometimes keep each other from falling apart.

When an AI system tries to “optimize” these spaces, it is tampering with a coordination layer it does not understand. The system can be correct on paper and destructive in practice.

Legibility tax: the hidden cost users pay

There is a tax nobody budgets for: the legibility tax.

Every time a system asks for one more data point to “verify” you, it imposes work. Every time it flags something as “unclear,” it imposes delay. Every time it demands a new format, it imposes compliance labor.

The system calls this “friction reduction.”

Users experience it as additional friction, just moved onto them.

You can spot the legibility tax when you hear:

  - “Please resubmit.”
  - “Data corrupted.”
  - “Unsupported document.”
  - “Try again later.”
  - “Provide additional verification.”
  - “Upload a clearer image.”
  - “We detected unusual activity.”

Each one sounds small. Together they become a second job, performed by the people with the least time, least money, and least spare battery life.

If your AI rollout increases legibility tax, your system is extracting value from the very people it claims to help.
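
If you want to make the tax visible rather than rhetorical, here is a minimal sketch, assuming a hypothetical event log in which every resubmit, re-upload, or extra verification demanded of the user is counted as an added step. The `Interaction` schema and its field names are illustrative assumptions, not a real telemetry API.

```python
from dataclasses import dataclass

# Hypothetical log entry: one per user session.
# Field names are assumptions for illustration, not a real schema.
@dataclass
class Interaction:
    session_id: str
    extra_steps: int   # resubmits, re-uploads, extra verifications demanded
    completed: bool    # did the user finish the task they came for?

def legibility_tax(events: list[Interaction]) -> float:
    """User work added per successful completion.

    A rising value means the system is offloading its own
    uncertainty onto users as compliance labor.
    """
    total_extra = sum(e.extra_steps for e in events)
    successes = sum(1 for e in events if e.completed)
    if successes == 0:
        return float("inf")  # all tax, no service
    return total_extra / successes

# Example: 3 sessions, 7 demanded extra steps, 2 completions -> tax of 3.5
events = [
    Interaction("a", 4, True),
    Interaction("b", 2, False),
    Interaction("c", 1, True),
]
print(legibility_tax(events))  # 3.5
```

The denominator is the point: because the tax is computed per successful completion, rejecting more people shrinks the denominator and drives the number up. The system cannot improve its score by turning users away.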

Dignity is a system requirement

Many AI product specs include fairness, privacy, accuracy, and robustness.

They skip dignity because it sounds soft.

Dignity is not soft. It is the difference between a service people will use and a service people will resist, route around, sabotage, or abandon.

Dignity shows up as:

  - Users can understand what the system wants from them.
  - Users can disagree and still complete the task.
  - Users can recover from an error without being treated as suspicious.
  - Users can access a human when the stakes are high.
  - The system does not moralize normal life.
  - The system does not turn assistance into interrogation.

When dignity is missing, the system becomes a small daily humiliation machine. People keep receipts. They do not forget.

A practical design test: the Tannie Test

If you want a simple evaluation framework, run the Tannie Test.

Put your system in front of a sharp, tired, funny older woman who has survived real problems and has no patience for performative intelligence. Ask one question:

“Does this system make my life easier without trying to raise me?”

If the system lectures, blocks, nags, or plays detective, it fails.

This test is brutal because it skips the usual corporate fog. It treats the user as an adult. It treats the designer as accountable.

How to build systems that do not get slippers thrown at them

Here are design moves that actually reduce harm. None are exotic. Most get skipped because they reduce institutional control.

  1. Separate advice from enforcement. Make “suggestion mode” the default. Put enforcement behind explicit policy gates with named owners. Record the policy, record the trigger, record the appeal path. (A minimal sketch follows this list.)

  2. Constrain the input appetite. Treat new data requirements as a major change request. If the system needs more data to function, ask why. Then ask what alternative workflow avoids that data.

  3. Make uncertainty visible. When the model is unsure, it should say so. Uncertainty should route to a simpler flow, or a human, or a retry that does not punish.

  4. Put the user’s goal first. If someone is trying to access a grant, a document, medical care, or a safety process, the system’s primary job is task completion. Secondary goals can exist. They should not hijack the path.

  5. Offer an off-ramp. A real off-ramp, available early. Not a “contact us” black hole.

  6. Log the legibility tax. Measure user work added per successful completion. Publish it internally. Make teams accountable for reducing it.
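
Tying moves 1 and 3 together, here is a minimal sketch of a policy gate, assuming a hypothetical `POLICY_GATES` registry; the gate names, threshold, and return shape are illustrative, not a real framework. The structural point is that enforcement is unreachable unless someone has written down a named owner, a trigger threshold, and an appeal path, and that low confidence downgrades to a suggestion plus human review instead of a block.

```python
from enum import Enum

class Mode(Enum):
    SUGGEST = "suggest"   # default posture: advise, never block
    ENFORCE = "enforce"   # only reachable through an explicit policy gate

# Hypothetical registry of enforcement gates. No entry here means the
# system is not allowed to enforce, only to suggest.
POLICY_GATES = {
    "fraud_hold": {
        "owner": "payments-risk-team",    # a named, accountable owner
        "appeal": "/appeals/fraud-hold",  # a real off-ramp, shown up front
        "min_confidence": 0.95,           # the recorded trigger threshold
    },
}

def decide(policy: str, confidence: float, recommendation: str) -> dict:
    """Return a suggestion unless an owned gate explicitly permits enforcement."""
    gate = POLICY_GATES.get(policy)
    if gate is None:
        # No named owner, no gate: guidance only.
        return {"mode": Mode.SUGGEST.value, "message": recommendation}
    if confidence < gate["min_confidence"]:
        # Visible uncertainty routes to a human, not to a punished user.
        return {"mode": Mode.SUGGEST.value, "message": recommendation,
                "escalate_to_human": True}
    # Enforcement records what fired, why, and where to appeal.
    return {"mode": Mode.ENFORCE.value, "policy": policy,
            "owner": gate["owner"], "trigger_confidence": confidence,
            "appeal": gate["appeal"]}

# Example: an unsure model suggests and escalates instead of blocking.
print(decide("fraud_hold", 0.62, "Hold this payment for review"))
```

Note the default: with no gate entry, the only thing the function can do is suggest. That is exactly the boundary between guidance and enforcement that control drift erodes.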

These steps sound obvious. They are also politically inconvenient. That is the point.

The deeper issue: who gets to define “concerning”

AI systems are value compressors. They convert messy human life into categories that fit operational needs.

The moment the system labels something “concerning,” it is expressing a worldview. It is often the worldview of compliance, risk management, cost containment, or reputation protection.

In community life, “concerning” can mean celebration, grief, generosity, survival, or simply a day that does not resemble the training data.

So the real governance question is simple:

Who gets to define normal, and who pays when the definition is wrong?

If the answer is “the vendor and the institution,” then you already know how this ends.

Musical backup

As a soundtrack for this theme, the tracks below add a lyrical lilt that delivers the emotional compression faster than prose ever can.

Track link: “D.I. vs. Cape Town”

Track link: “D.I. vs Life”

Closing

Cape Town does not hate technology. The Cape hates being patronized by a tool that mistakes its own neatness for authority.

Build appliances that help. Build systems that respect adults. Build defaults that assume competence.

Enjoyed this episode? Subscribe to receive daily insights on AI accountability.

Subscribe on LinkedIn