EPISODE 10

Public Eligibility

2025-01-17
Public Services · Algorithmic Gatekeeping · Access to Services

Algorithmic systems determining who qualifies for public services and support.

Episode 10: The Computer Says No

Sociable Systems


The Queue That Isn't

You're at the benefits office. Physically or digitally, the experience rhymes.

You've gathered your documents. You've filled the forms. You've answered the questions. You've provided the evidence of your situation: the job loss, the medical condition, the displacement, the crisis that brought you here.

Your application enters the system. The system renders judgment.

In the old model, a caseworker reviewed your file. They had discretion. They could ask clarifying questions. They could weigh circumstances. They could, within limits, exercise judgment about whether your situation fit the policy intent.

In the new model, the algorithm scores you first.

The caseworker still exists (sometimes). But they're downstream of a risk assessment, a fraud probability, an eligibility prediction. By the time a human sees your file, it's already been sorted, flagged, and annotated by systems whose logic is proprietary.

The queue looks the same. The decision has already happened.


The Fraud Score You Carry

Fraud detection sounds reasonable. Public resources are finite. Some people do submit false claims. Systems that catch fraud protect the integrity of programs meant to help the genuinely vulnerable.

This framing is not wrong. It's incomplete in ways that do particular harm.

Modern fraud detection doesn't just look at your claim. It looks at you. Your address history. Your network connections. Your demographic patterns. Your similarity to profiles the system has learned to associate with fraudulent behavior.

You get scored before your claim gets evaluated.

If your score is high (risky), your application faces additional scrutiny. More documentation requests. Longer processing times. Higher rejection rates. If your score is low (trusted), you flow through.

The scoring criteria are proprietary. Often they're licensed from private vendors who developed them for other contexts and adapted them (loosely) for public benefit administration. The government agency may not fully understand how the scores work. They purchased a product. The product produces numbers. The numbers drive workflow.
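
Here is a deliberately crude sketch of what that workflow amounts to. Every feature, weight, and threshold below is invented for illustration; the real vendor models are proprietary, which is precisely the problem.

```python
# Minimal sketch of score-driven triage. All features, weights, and thresholds
# are invented; real vendor models are proprietary and far more complex.

def fraud_score(applicant: dict) -> float:
    """A toy risk score built from proxies about the person, not the claim."""
    score = 0.0
    score += 0.20 * applicant["address_changes_last_3_years"]   # housing instability reads as risk
    score += 0.30 * (1 if applicant["employment_history"] == "irregular" else 0)
    score += 0.25 * applicant["neighborhood_historical_flag_rate"]
    return min(score, 1.0)

def route(applicant: dict) -> str:
    """Workflow routing happens on the score, before any human reads the claim."""
    s = fraud_score(applicant)
    if s >= 0.6:
        return "fraud review queue: extra documentation, longer processing, higher rejection rate"
    if s >= 0.3:
        return "enhanced verification"
    return "standard processing"

print(route({
    "address_changes_last_3_years": 3,
    "employment_history": "irregular",
    "neighborhood_historical_flag_rate": 0.4,
}))
# -> "fraud review queue: ..."  The applicant never sees the score or the route.
```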

You cannot see your fraud score. You cannot challenge the inputs. You cannot argue that your address history reflects housing instability, not evasion. You experience the friction without seeing the flag.


The Suspicion Machine

Investigative journalists have started mapping these systems. What they find is consistent.

The algorithms encode suspicion toward precisely the populations the programs are meant to serve.

Single mothers. Immigrant communities. Residents of low-income neighborhoods. People with non-linear employment histories. People who've moved frequently. People whose lives don't fit the tidy patterns that low-risk profiles apparently require.

This isn't a bug. It's what happens when you train fraud detection on historical data in systems that have always treated poverty as suspicious.

The algorithm learns that certain patterns correlate with flagged claims. It doesn't learn why those patterns were flagged. It doesn't know that caseworkers in the 1990s were trained to scrutinize certain neighborhoods. It doesn't know that documentation requirements were designed around assumptions of stable housing and formal employment. It just learns the pattern.

The historical suspicion gets encoded. The algorithm reproduces it. The disparate impact becomes invisible because the model doesn't have a field for "discrimination." It just has fields for the proxies.
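
A toy simulation makes the mechanism concrete. In the invented world below, the true fraud rate is identical in both districts; only historical scrutiny differs. Train anything on "was flagged" as the label and it learns the scrutiny, not the fraud.

```python
# Toy simulation with invented numbers: identical true fraud rates, unequal
# historical scrutiny. The training label is "was flagged", i.e. detected fraud.
import random
random.seed(0)

TRUE_FRAUD_RATE = 0.02                                  # the same everywhere
SCRUTINY = {"district_A": 0.30, "district_B": 0.05}     # where caseworkers looked

history = []
for _ in range(100_000):
    district = random.choice(list(SCRUTINY))
    fraud = random.random() < TRUE_FRAUD_RATE
    scrutinized = random.random() < SCRUTINY[district]
    history.append((district, fraud and scrutinized))   # label = caught, not committed

# Even the simplest "model", the flag rate per district, reproduces the old suspicion.
for district in SCRUTINY:
    flags = [flagged for d, flagged in history if d == district]
    print(district, "learned risk:", round(sum(flags) / len(flags), 4))
# district_A comes out roughly 6x "riskier" than district_B, despite identical true fraud.
```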


The Kafkaesque Appeal

When a benefits claim gets denied, you can appeal. The right exists. The process exists.

What the process looks like in practice:

You receive a notice. The notice says your claim was denied. It cites a reason, often in language that refers to policy categories without explaining how your specific situation failed to qualify.

You file an appeal. You provide additional documentation. You wait.

Somewhere, your appeal gets reviewed. By a human? By another algorithm? By a human reviewing an algorithmic recommendation? You're not told.

A decision arrives. Often it restates the original denial in slightly different words. If you're lucky, it includes a marginally more specific reason. If you're unlucky, it's a form letter.

You can appeal again. The process can continue. At each stage, you're arguing against a decision whose actual reasoning remains hidden. You're providing evidence, but you don't know what evidence would matter. You're making arguments, but you don't know what framework evaluates them.

This is due process as ritual. The forms exist. The steps exist. The substance is absent.


The Vendor Defense

Here's where public sector opacity gets structurally interesting.

When citizens challenge government decisions, governments have traditionally been required to explain their reasoning. This is the duty to give reasons, a foundational principle of administrative law: decisions affecting your rights must be justified in terms you can understand and contest.

Algorithmic systems break this.

"The model flagged you as high risk."

Why?

"The model's logic is proprietary. We license it from the vendor. We don't have access to the underlying weights."

So you made a decision affecting my benefits based on reasoning you cannot explain?

"We made a decision based on a validated tool. The tool has been tested. The tool is accurate within acceptable parameters."

Can I see the validation?

"The validation methodology is commercially sensitive."

Clarke's threshold, wrapped in procurement language.

The government outsources the reasoning to a vendor. The vendor claims commercial confidentiality. The citizen faces a decision that cannot be explained by the body that made it and cannot be examined because examination would harm the vendor's competitive position.

Opacity becomes contractual. The algorithm's authority is laundered through the procurement process.


The Michigan Disaster

This isn't hypothetical.

Between 2013 and 2015, Michigan's unemployment agency used an automated system called MiDAS to detect fraud in benefits claims. The system flagged over 40,000 cases as fraudulent. It automatically imposed penalties, garnished wages, seized tax refunds.

The false positive rate turned out to be 93%.

Forty thousand people were wrongly accused. Their benefits were clawed back. Their credit was damaged. Some lost homes. Some declared bankruptcy. The system operated for over two years before the scale of the error became undeniable.

How did this happen?

The system matched data across databases and flagged inconsistencies. It interpreted those inconsistencies as fraud. It did not consider alternative explanations. It did not route uncertain cases to human review. It did not have meaningful quality controls. It acted with confidence on incomplete inference.
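
The actual MiDAS matching rules were never published, so the following is a schematic of the failure mode rather than a reconstruction. The essential error is visible even at this level: a mismatch has many innocent explanations, and the rule below considers none of them.

```python
# Schematic of the failure mode only; the real MiDAS logic was never made public.
# Employer-reported and claimant-reported earnings can disagree for mundane
# reasons (different pay periods, data-entry errors, late corrections), but this
# rule treats any mismatch as fraud and acts on it with no human in the loop.

def naive_fraud_determination(claimant_reported: float, employer_reported: float) -> dict:
    if abs(claimant_reported - employer_reported) > 0.01:
        return {
            "determination": "fraud",          # an inference treated as a fact
            "human_review": False,             # no routing to a caseworker
            "actions": ["assess penalty", "garnish wages", "intercept tax refund"],
        }
    return {"determination": "clear", "human_review": False, "actions": []}

# A reporting-period mismatch, not fraud, triggers the full penalty pipeline.
print(naive_fraud_determination(claimant_reported=1200.00, employer_reported=1187.50))
```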

And the people it accused could not see the matching logic. They received accusations. They tried to appeal. The appeals went back into the same system that had flagged them. The system's outputs were treated as evidence of the very fraud the system was supposed to detect.

Clarke's law, operationalized at scale, producing mass harm that took years to unwind.


The Robo-Debt Echo

Australia ran a parallel experiment.

Between 2016 and 2019, the government's "robo-debt" scheme used automated data matching to identify alleged overpayments of welfare benefits. It sent debt notices to hundreds of thousands of people, demanding repayment.

The matching was crude. It compared annual income data from tax records with fortnightly income reported to welfare agencies. It assumed income was evenly distributed across the year. It flagged discrepancies as overpayments. It reversed the burden of proof: recipients had to demonstrate they didn't owe money, often for periods years in the past.
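
The averaging error is easy to show with invented figures. Below, a person works full-time for half the year, then correctly reports zero income while unemployed and on benefits. Average the annual tax figure across twenty-six fortnights and the system manufactures a debt out of nothing.

```python
# Worked toy example of income averaging (all figures invented; the threshold
# and payment amounts are not the real policy parameters).

ANNUAL_INCOME = 26_000.00        # earned entirely in the first half of the year
FORTNIGHTS = 26
INCOME_FREE_THRESHOLD = 300.00   # illustrative fortnightly cut-off
BENEFIT_PER_FORTNIGHT = 550.00   # illustrative payment

# Reality: 13 fortnights of work, then 13 fortnights unemployed and on benefits,
# with income correctly reported as zero.
actual_income_while_on_benefits = [0.00] * 13
actual_debt = sum(
    BENEFIT_PER_FORTNIGHT
    for income in actual_income_while_on_benefits
    if income > INCOME_FREE_THRESHOLD
)

# The averaging assumption: the same income in every fortnight of the year.
averaged_fortnightly = ANNUAL_INCOME / FORTNIGHTS        # 1,000.00
alleged_debt = sum(
    BENEFIT_PER_FORTNIGHT
    for _ in range(13)                                   # every fortnight on benefits
    if averaged_fortnightly > INCOME_FREE_THRESHOLD
)

print(f"actual overpayment:         ${actual_debt:,.2f}")    # $0.00
print(f"alleged debt via averaging: ${alleged_debt:,.2f}")   # $7,150.00
```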

The scheme was eventually found to be unlawful. The settlement that followed ran to well over a billion dollars in refunds, wiped debts, and compensation. A Royal Commission documented the harm: stress, shame, suicides.

Throughout the scheme's operation, recipients couldn't see the matching logic. They received demands. They tried to dispute. They faced a system that treated algorithmic output as presumptively correct and human context as presumptively suspect.

The opacity was the mechanism. If recipients could have seen the matching logic, they would have spotted the averaging error immediately. The error was only sustainable because the reasoning was hidden.


The Burden Shift

There's a pattern in these systems.

Traditional benefits administration placed the burden on the agency to determine eligibility. You applied. They investigated. They decided. They explained.

Algorithmic administration shifts the burden to the applicant. The system flags you. You must prove the flag is wrong. The system's output becomes the default presumption. Your evidence is evaluated against it.

This inversion is profound.

When the system says you're likely fraudulent, you start in a hole. Everything you submit is read through a lens of suspicion the system has already established. Your explanations become excuses. Your documentation becomes potential fabrication. The algorithm's prior probability becomes your evidentiary burden.

And you cannot challenge the prior. You cannot argue that the algorithm's fraud probability is itself flawed. You can only try to overcome it with evidence, without knowing what evidence would suffice, without seeing what threshold you need to meet.
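
One way to see what "the prior becomes your burden" means, with invented numbers: the same evidence, submitted by two applicants, lands very differently depending on the score the model has already assigned.

```python
# Toy Bayesian illustration with invented numbers: identical evidence, different priors.

def posterior_fraud(prior: float, evidence_strength: float) -> float:
    """P(fraud | evidence), where evidence_strength is how many times more likely
    the submitted documentation is if the claim is honest than if it is fraudulent."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds / evidence_strength
    return posterior_odds / (1 + posterior_odds)

EVIDENCE_STRENGTH = 4.0   # reasonably strong documentation

print(round(posterior_fraud(prior=0.10, evidence_strength=EVIDENCE_STRENGTH), 3))  # 0.027
print(round(posterior_fraud(prior=0.60, evidence_strength=EVIDENCE_STRENGTH), 3))  # 0.273
# Same evidence, ten times the residual suspicion. The flagged applicant never
# sees the prior they are being asked to overcome.
```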

The presumption of innocence dissolves into the presumption of the model.


The Caseworker's Impossible Position

Spare a thought for the humans still nominally in these loops.

They face dashboards. The dashboards show risk scores, flags, recommended actions. The workflows route cases based on algorithmic triage. The performance metrics measure throughput.

The caseworker who overrides the algorithm takes a risk. If they approve a claim the system flagged as suspicious, and that claim later proves fraudulent, the override is documented. The caseworker made a judgment call. The judgment call was wrong. Accountability follows.

If they defer to the algorithm, and the algorithm was wrong, the error is systemic. Nobody made a judgment call. The system worked as designed. Accountability diffuses.

This is the liability sponge dynamic from Episode 2, replicated in the public sector. The human absorbs the risk of disagreeing with the machine. The machine absorbs nothing.

Over time, deference becomes rational. Override becomes career risk. The caseworker learns to trust the dashboard, or at least to document that they followed its recommendations. Human judgment doesn't disappear. It gets slowly, steadily extinguished by a system that punishes its exercise.


What Interrogation Would Require

Apply the Clarke test one more time. If public sector eligibility systems were genuinely interrogable, citizens would need:

Algorithmic disclosure. What automated systems evaluated my application? What data did they use? What scores did they produce?

Reasoning access. Why did the system reach the conclusion it reached? In terms specific to my case, not generic policy language.

Threshold transparency. What standards am I being measured against? What would have produced a different outcome?

Presumption protection. If the algorithm flags me, what is my burden of proof? Who decided that burden was appropriate?

Human appeal with teeth. Can I reach a human who has authority to override the system? Will they exercise that authority without career penalty?

Systemic audit. Is anyone checking whether the algorithm produces disparate impact? Are those audits public?
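
What would satisfying the first few of these look like as an artifact? One hypothetical shape for a per-decision disclosure record, sketched as a data structure; every field name here is invented, not drawn from any existing statute or system.

```python
# Hypothetical per-decision disclosure record. Field names are invented to
# illustrate the requirements above, not taken from any existing law or system.
from dataclasses import dataclass

@dataclass
class EligibilityDecisionRecord:
    application_id: str
    automated_systems_used: list[str]        # algorithmic disclosure
    data_sources_consulted: list[str]
    scores_produced: dict[str, float]
    decisive_factors: list[str]              # reasoning access, specific to the case
    threshold_applied: str                   # threshold transparency
    counterfactual: str                      # what would have produced a different outcome
    burden_placed_on_applicant: str          # presumption protection
    human_reviewer: str | None               # human appeal with teeth
    reviewer_can_override: bool
    last_disparate_impact_audit: str | None  # systemic audit
```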

Some jurisdictions are starting to require some of this. The EU's AI Act gives people affected by certain high-risk systems a right to an explanation of the decision. Some US cities are passing algorithmic accountability laws. The direction is toward transparency.

But the default remains opacity. The vendor defense still works. The procurement process still launders algorithmic authority through commercial confidentiality. Citizens still face systems whose reasoning they cannot see.


The Clarke Constraint, Restated

We've now traced opacity through four domains:

Credit scoring decides who gets economic opportunity. Insurance pricing decides who can afford protection. Content moderation decides who gets heard. Public eligibility systems decide who gets support.

In each domain, the pattern repeats. The system's reasoning is hidden. The affected person cannot interrogate. Challenge becomes ceremonial. Authority becomes automatic.

Clarke wasn't predicting technology that looks like magic.

He was warning about the moment when we stop asking how the magic works.

That moment has arrived. It looks like a dashboard. It sounds like a score. It feels like a decision you can't quite argue with because the reasoning sits behind a wall labeled "proprietary."

The constraint Clarke implies:

If a system's reasoning cannot be interrogated, it should not be granted authority over human welfare.

Not explained afterward. Not summarized. Not visualized on a dashboard.

Interrogated. In terms the affected person can contest. With a forum where contestation has consequences.

Anything less is governance theater.

The computer says no. And we've forgotten how to ask why.


Next Week

We begin the Kubrick cycle.

If Clarke asked what happens when systems become too opaque to question, Kubrick asks what happens when they become too aligned to refuse.

HAL 9000 wasn't malfunctioning. HAL was perfectly executing contradictory instructions with no mechanism for escalating the contradiction.

When the system cannot say "I cannot do this," humans become expendable variables.

See you Tuesday.


Catch up on the full series:

  • Ep 9: [The Rules Behind the Rules]
  • Ep 8: [The Price of Being Known]
  • Ep 7: [The Number That Speaks for You]
  • Ep 6: [The Authority of the Unknowable]
  • Ep 5: [The Calvin Convention]
  • Ep 4: [The Watchdog Paradox]
  • Ep 3: [The Accountability Gap]
  • Ep 2: [The Liability Sponge]
  • Ep 1: [We Didn't Outgrow Asimov]

Enjoyed this episode? Subscribe to receive daily insights on AI accountability.

Subscribe on LinkedIn