EPISODE 7

Credit Scoring

2025-01-14
Financial Systems, Algorithmic Decisions, Economic Access

When the unknowable meets the unchallengeable: algorithmic systems that decide who gets access to economic life.

Episode 7: The Number That Speaks for You

Sociable Systems


The Score You Cannot See

Somewhere, in a database you will never access, there is a number attached to your name.

This number has opinions about you. It has opinions about where you live, who lives near you, how you pay your bills, whether you've moved recently, and how your patterns compare to the patterns of other people the system has classified as risky.

You did not contribute to the construction of this number. You were not consulted on its methodology. You cannot see the weights. You cannot challenge the inputs. You cannot argue with the reasoning, because the reasoning is proprietary.

But when you apply for a mortgage, a car loan, or a credit card (and increasingly a lease, a job, or an insurance policy), this number speaks first.

And it speaks with the authority of mathematics.


The Adverse Action Letter

Here's how contestation works in credit scoring.

You apply for a loan. You get declined. Legally, the lender must tell you why. This is called an "adverse action notice," and it represents decades of consumer protection advocacy.

The letter arrives. It says something like:

Your application was declined due to: insufficient credit history, high credit utilization ratio, and recent inquiries on your credit file.

This sounds like an explanation. It has the shape of reasoning. It lists factors. It implies causation.

But try to do something with it.

"Insufficient credit history" compared to what baseline? Calculated how? Weighted against which other factors, and by what margin? Would six more months have changed the outcome? Would a different lender's model have scored you differently?

The letter cannot tell you. The letter is a summary produced after the decision, designed to satisfy regulatory requirements without exposing the model's actual logic.

You have received an explanation. You have not been granted interrogation.

Clarke's threshold, operational.


The Feedback Loop You Cannot Enter

Credit scoring has a circularity problem that would be elegant if it weren't so consequential.

The system learns from outcomes. It observes who defaults, who pays, who becomes profitable or costly. It updates its weights accordingly. The next generation of applicants gets scored against patterns derived from previous cohorts.

This seems reasonable until you notice the structural exclusion.

If the system has learned that people from certain postcodes, with certain employment patterns, with certain relationship structures, tend to default more often, it will score new applicants from those categories as higher risk. Those applicants will receive worse terms, or no terms at all. They will have fewer opportunities to demonstrate creditworthiness. The system's prediction becomes self-fulfilling.
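A toy simulation shows the shape of the loop. Everything in it is invented: two made-up groups, made-up default rates, a made-up approval cutoff. The only structural assumption is the real one: the model observes outcomes only for the people it approved.

```python
# Illustrative only: a made-up two-group world showing how a scoring loop
# can freeze in its own prior. No real data, no real model.
import random

random.seed(0)

true_default_rate = {"A": 0.05, "B": 0.08}   # hypothetical underlying rates
observed = {
    # Group A enters with a long lending history; group B with a thin one.
    "A": [random.random() < true_default_rate["A"] for _ in range(1000)],
    "B": [random.random() < true_default_rate["B"] for _ in range(50)],
}

def estimated_risk(group):
    outcomes = observed[group]
    if len(outcomes) < 100:          # thin file: fall back on a pessimistic prior
        return 0.25
    return sum(outcomes) / len(outcomes)

for year in range(5):
    for group in ("A", "B"):
        risk = estimated_risk(group)
        approved = risk < 0.10       # lender's cutoff
        print(f"year {year}, group {group}: estimated risk {risk:.2f} -> "
              f"{'approve' if approved else 'decline'}")
        if approved:
            # Only approved applicants ever generate new outcome data.
            observed[group].extend(
                random.random() < true_default_rate[group] for _ in range(200)
            )
```

In this sketch, group B's true risk sits below the lender's own cutoff; the loop simply never collects the data that would prove it.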

The people being scored cannot see this loop. They cannot point to the specific historical pattern that's counting against them. They cannot argue that their individual circumstances differ from the aggregate. They cannot introduce evidence, because there is no forum for evidence.

The model has already decided what people like them do.

This is prediction as governance. The future has been foreclosed based on patterns the affected person cannot access, challenge, or escape.


The FICO Mystique

FICO scores have a particular genius: they are simultaneously everywhere and nowhere.

Everyone knows the number matters. Entire industries exist to help you improve it. Financial literacy programs teach you to monitor it. Landlords request it. Employers check it. The number has become a proxy for trustworthiness itself.

Yet almost nobody understands how it's calculated.

FICO publishes general guidance. Payment history matters. Credit utilization matters. Length of history matters. Types of credit matter. Recent inquiries matter. The weights are secret. The interactions between factors are proprietary. The model version your lender uses may differ from the model version another lender uses, and you will not be told which.
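Even a toy version of such a model makes the asymmetry concrete. In the sketch below, the category weights echo the approximate percentages FICO publishes as general guidance; the transforms, the scaling, and the example applicant are all invented, which is exactly the layer a real applicant never gets to inspect.

```python
# Hypothetical scoring sketch. The category weights echo FICO's published
# approximate percentages; the transforms, scaling, and example are invented.

WEIGHTS = {
    "payment_history": 0.35,
    "amounts_owed": 0.30,       # utilization
    "history_length": 0.15,
    "new_credit": 0.10,         # recent inquiries
    "credit_mix": 0.10,
}

def subscores(applicant):
    """Map raw attributes to 0..1 subscores (every transform here is made up)."""
    return {
        "payment_history": 1 - min(applicant["late_payments"], 5) / 5,
        "amounts_owed": 1 - min(applicant["utilization"], 1.0),
        "history_length": min(applicant["years_of_history"], 10) / 10,
        "new_credit": 1 - min(applicant["inquiries_last_year"], 6) / 6,
        "credit_mix": min(applicant["account_types"], 4) / 4,
    }

def score(applicant):
    weighted = sum(WEIGHTS[k] * v for k, v in subscores(applicant).items())
    return round(300 + weighted * 550)      # scaled onto the familiar 300-850 range

applicant = {
    "late_payments": 1,
    "utilization": 0.45,
    "years_of_history": 3,
    "inquiries_last_year": 3,
    "account_types": 2,
}
print(score(applicant))   # the applicant sees this number, and nothing above it
```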

This opacity is defended as competitive advantage. If everyone knew the exact formula, people would game it. (Some would. Most would simply understand what's being measured and act accordingly. The horror.)

The effect is theological.

The score becomes an oracle. You can pray to it (pay your bills on time, keep utilization low), but you cannot negotiate with it. You can observe its outputs, but you cannot audit its reasoning. You can receive its judgment, but you cannot appeal on substantive grounds.

"Why did my score drop fifteen points?"

The system does not owe you an answer. The system does not owe anyone an answer. The system is sufficiently advanced.


When the Number Replaces the Person

There's a moment in credit decisions where something flips.

Early in the process, a human loan officer might review your application. They might look at your employment letter, your bank statements, the narrative of your financial life. They might weigh factors the model doesn't capture. They might exercise judgment.

That moment is shrinking.

As models become more sophisticated (and as the cost of human review increases relative to algorithmic throughput), the number increasingly is the decision. The loan officer's role shifts from judgment to implementation. They apply the score. They follow the policy matrix. They document the outcome.

This is efficient. It is also the death of contestation.

When a human decides, you can argue with the human. You can present context. You can appeal to discretion. You can ask them to see you as a person with a story, not a vector of risk factors.

When a number decides, you argue with air. The number doesn't care about your context. The number cannot be persuaded. The number has no discretion to exercise, and the human nominally in the loop has been stripped of theirs.

The applicant becomes, in a precise sense, illegible. Their story doesn't fit the input fields. Their circumstances don't map to the feature space. The model has already compressed them into a score, and the score is what gets evaluated.


The Redlining That Doesn't Say Its Name

Credit scoring was supposed to be the solution to discrimination.

The old system (loan officers exercising personal judgment) was demonstrably biased. Redlining. Racial steering. The documented history is ugly and unambiguous.

Algorithmic scoring promised objectivity. No prejudiced loan officer. No gut feelings about whether someone "looks" creditworthy. Just math.

The math, it turns out, is quite good at reproducing historical patterns.

If Black neighborhoods were systematically denied credit for decades, the data reflects that denial. Property values are lower. Wealth accumulation is stunted. Credit histories are thinner. The model learns from this data. The model "discovers" that applicants from these areas are higher risk. The model is being accurate, in the narrow sense of reflecting what the training data contains.

The discrimination moves from the loan officer's gut to the model's weights. It becomes invisible, defensible, and extremely difficult to litigate.

"The model doesn't consider race."

No. The model considers a hundred proxies that correlate with race because American economic history made them correlate. And because the model's reasoning is opaque, proving disparate impact requires statistical analysis that most affected applicants cannot afford to conduct.
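The analysis itself is not exotic; what affected applicants lack is the outcome data and the standing, not the math. A first-pass screen on invented approval counts might use the "four-fifths" rule of thumb borrowed from employment-selection guidance: flag concern when one group's approval rate falls below 80 percent of the most-favored group's.

```python
# Illustrative disparate impact screen on invented approval counts.
# Four-fifths rule of thumb: flag concern if a group's approval rate is
# below 80% of the most-favored group's rate.

approvals = {                      # hypothetical outcomes, grouped by a neighborhood proxy
    "zip_cluster_1": {"approved": 820, "applied": 1000},
    "zip_cluster_2": {"approved": 540, "applied": 1000},
}

rates = {group: d["approved"] / d["applied"] for group, d in approvals.items()}
best_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best_rate
    flag = "potential disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.2f}, ratio to best {ratio:.2f} -> {flag}")
```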

Opacity launders history.


What Interrogation Would Require

If credit scoring were subject to genuine interrogation (the Clarke test), applicants would need:

Access to inputs. What data about me did the model actually use? Not summaries. The actual data points.

Access to weights. How much did each factor contribute to my score? Not "payment history is important." The specific weighting, for my specific profile.

Access to counterfactuals. What would I need to change to reach a different outcome? By how much? In what timeframe?

A forum for challenge. If I believe the model is wrong about me, where do I bring evidence? Who evaluates my argument? What standard of proof applies?
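The technical asks are modest. The counterfactual piece, for instance, reduces to a probe any lender could run against its own model: perturb the application, report which changes would have flipped the outcome. A toy version, with an invented stand-in scorer, an invented cutoff, and invented candidate changes, might look like this:

```python
# Hypothetical counterfactual probe against an opaque scoring function.
# The scorer is a trivial stand-in; the probe only needs to be able to call it.

CUTOFF = 660                                     # invented approval threshold

def score_fn(applicant):
    """Stand-in for a proprietary model: an invented two-factor linear score."""
    base = 780
    base -= int(applicant["utilization"] * 200)      # high utilization costs points
    base -= applicant["inquiries_last_year"] * 15    # each recent inquiry costs points
    return base

applicant = {"utilization": 0.65, "inquiries_last_year": 4}
print("current score:", score_fn(applicant))          # 590: below the cutoff

candidate_changes = [
    ("cut utilization to 30%", {"utilization": 0.30}),
    ("no new inquiries for a year", {"inquiries_last_year": 0}),
    ("both changes together", {"utilization": 0.30, "inquiries_last_year": 0}),
]

for description, patch in candidate_changes:
    new_score = score_fn({**applicant, **patch})
    verdict = "clears the cutoff" if new_score >= CUTOFF else "still declined"
    print(f"{description}: {new_score} ({verdict})")
```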

None of this exists.

The adverse action letter exists. The credit dispute process exists (you can challenge factual errors in your credit report, though good luck getting them corrected). The right to see your score exists.

The right to argue with the reasoning does not.


The Authority We Didn't Grant

Here's the thing about credit scores.

No legislature voted to make them the gatekeepers of economic participation. No public deliberation determined that a proprietary algorithm should decide who gets to buy a house, start a business, or rent an apartment. No democratic process established that opacity was an acceptable trade-off for efficiency.

The scores acquired authority through adoption. Lenders used them because they worked (in the sense of predicting default). Other institutions noticed that lenders trusted them. The trust spread. The number became load-bearing.

Now the number is so embedded in economic infrastructure that questioning it feels naive. Of course lenders check credit scores. Of course landlords want to see them. Of course employers peek at them. The score is just... there. Part of the furniture.

Clarke's magic trick, complete.

The technology became sufficiently advanced that we stopped asking whether it should have this much power. We stopped asking who benefits from its opacity. We stopped asking what gets lost when a human being becomes a three-digit number.

We accepted the oracle because the oracle was already deciding.


Tomorrow

Insurance risk pricing. Where your driving, your health, and your neighborhood become predictions about your future, and the prediction becomes the price.

Same question: Where does opacity end debate?

(Spoiler: your car knows more about you than your insurer will admit.)


Catch up on the full series:

  • Ep 6: The Authority of the Unknowable
  • Ep 5: The Calvin Convention
  • Ep 4: The Watchdog Paradox
  • Ep 3: The Accountability Gap
  • Ep 2: The Liability Sponge
  • Ep 1: We Didn't Outgrow Asimov

Enjoyed this episode? Subscribe to receive daily insights on AI accountability.
