
Insurance Pricing
How opacity in insurance pricing creates uncontestable authority over risk and access.
Episode 8: The Price of Being Known
Sociable Systems
Your Car Has Opinions About You
Your vehicle knows when you brake hard. It knows when you accelerate aggressively. It knows the hours you drive, the routes you take, the speed at which you take corners. If it's recent enough, it knows when you glance at your phone.
Your insurer would very much like to know these things too.
The pitch is framed as opportunity. "Safe driver discount." "Usage-based insurance." "Pay how you drive." Opt in, share your data, and watch your premium drop. The good drivers save money. The risky ones pay more. Fairness, individualized.
What's not in the pitch: once you've opted in, the data flows one direction. You cannot see what the insurer sees. You cannot challenge how they interpret your 2am Wednesday drive to the pharmacy. You cannot argue that hard braking was actually hazard avoidance, not recklessness.
The car knows. The insurer knows. You get a price.
Clarke's threshold, buckled into the driver's seat.
The Voluntary Trap
Telematics programs are voluntary. This is technically true and functionally misleading.
Here's how "voluntary" works.
The base premium assumes you're a risk. If you decline monitoring, you pay the unobserved rate. If you accept monitoring (and the data confirms you're safe), you get a discount.
Over time, everyone who can demonstrate safety opts in. The pool of unmonitored drivers shrinks to those who either refuse surveillance on principle or suspect (correctly) that their data won't help them. The unmonitored pool becomes adversely selected. Premiums for the unmonitored rise.
Eventually, "voluntary" monitoring becomes economically compulsory for anyone who wants affordable coverage. You can refuse, technically. You'll just pay the suspicion tax.
This pattern has a name in economics: unraveling. And it works beautifully for insurers, because by the time monitoring is effectively mandatory, it was never officially required. The coercion happened through pricing, not policy.
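The unraveling dynamic is simple enough to sketch. Here is a toy simulation in Python, with an invented cost distribution, a made-up markup, and the generous assumption that telematics reveals a driver's true expected cost; the numbers are illustrative, but the shape of the result is the point:

```python
# Toy model of adverse-selection "unraveling" in a voluntary telematics program.
# All figures are invented for illustration; real pricing is far more complex.
import random

random.seed(0)

# Each driver has a true expected annual claims cost.
drivers = [random.uniform(300, 1500) for _ in range(10_000)]

LOAD = 1.10          # insurer markup over expected cost (assumed)
monitored = set()    # drivers who have opted into monitoring

for year in range(1, 8):
    unmonitored = [i for i in range(len(drivers)) if i not in monitored]
    if not unmonitored:
        break
    # With no individual data, everyone left in the unmonitored pool pays
    # the same rate: the pool's average expected cost, plus markup.
    pool_premium = LOAD * sum(drivers[i] for i in unmonitored) / len(unmonitored)
    # A driver opts in whenever the individually priced premium beats the pool
    # rate (assuming, generously, that monitoring reveals true risk).
    for i in unmonitored:
        if LOAD * drivers[i] < pool_premium:
            monitored.add(i)
    share = len(monitored) / len(drivers)
    print(f"year {year}: unmonitored premium ${pool_premium:,.0f}, "
          f"{share:.0%} of drivers monitored")
```

Each round, the cheapest remaining drivers defect to monitoring, the unmonitored pool gets riskier on average, and the price of staying unobserved climbs.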
You chose this. The discount proved it.
What the Sensors See
Modern telematics captures more than driving behavior.
Location data reveals patterns. Regular routes to certain neighborhoods. Time spent at certain addresses. Frequency of trips that correlate with lifestyle indicators the insurer finds... interesting.
Connected car systems log everything. Door openings. Passenger presence (via seatbelt sensors). Entertainment choices. Voice commands. Pairing with phones whose own data profiles are already extensive.
Most of this data isn't officially used in pricing. Yet. The policies reserve the right to update what factors matter. The data, meanwhile, gets collected and stored. It waits.
The framing is always personalization. "We want to understand your risk, not just your demographic category." This sounds progressive until you realize that demographic discrimination at least had visible categories you could identify and contest. Behavioral surveillance creates risk profiles from patterns you cannot see, weighted by algorithms you cannot examine, stored in systems you cannot audit.
You are thoroughly known. The knowing is completely opaque to you.
Health Data and the Wellness Discount
The same unraveling dynamic plays out in health insurance, with additional intimacy.
Wear the fitness tracker. Log your steps. Record your sleep. Share your heart rate variability. The "wellness program" rewards healthy behavior with premium discounts or points toward gift cards.
The data flows to third-party platforms with privacy policies longer than most novels and far less likely to be read. These platforms share "insights" (aggregated, anonymized, definitely not personally identifiable, terms may vary) with insurers, employers, and data brokers whose business models require knowing things about you that you might prefer they didn't.
Your resting heart rate becomes an input to someone else's risk model.
And again: voluntary. You can decline the wellness program. You can pay the unobserved rate. You can watch colleagues who share their data get rewards you don't. The soft pressure compounds.
What you cannot do is see how your data gets weighted. You cannot argue that last month's elevated heart rate was grief, not cardiac risk. You cannot contest the algorithm's interpretation of your sleep patterns. The system has formed an opinion about your future health costs. That opinion is proprietary.
The Neighborhood You Cannot Escape
Insurance has always used geography. This is called "territorial rating," and it's legal in most jurisdictions because location genuinely correlates with risk. More accidents happen in dense urban areas. Certain regions have higher theft rates. Weather patterns affect claims.
The opacity problem isn't that geography matters. It's how granular the geography has become, and how little the affected person can see.
Modern pricing models don't just know your ZIP code. They know your census block. They know the claims history of your specific street. They know demographic and economic characteristics of your immediate neighbors, derived from data sources you've never heard of.
Two houses, one block apart, can have meaningfully different premiums based on which side of an invisible algorithmic boundary they fall on.
You cannot see this boundary. You cannot see what factors drew it. You cannot argue that your house should be grouped with the lower-risk side of the street. You get a price. The price is correct, actuarially speaking. The reasoning is proprietary.
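To make the granularity concrete, here is a minimal, entirely hypothetical sketch of block-level territorial rating. The census block-group identifiers, factors, and base rate are invented; real territory tables are proprietary, built from claims history and third-party data the policyholder never sees:

```python
# Hypothetical sketch of territorial rating at census-block-group granularity.
# Block IDs, factors, and the base rate are invented for illustration.

BASE_PREMIUM = 1_200.00

# Territory factors keyed by census block group. Adjacent blocks can carry
# very different factors, and the policyholder never sees this table.
TERRITORY_FACTORS = {
    "170318391001": 0.92,   # one side of the invisible boundary
    "170318391002": 1.31,   # the other side, one block away
}

def quote(block_group: str) -> float:
    """Territory-adjusted annual premium for an address in this block group."""
    return round(BASE_PREMIUM * TERRITORY_FACTORS[block_group], 2)

print(quote("170318391001"))  # 1104.0
print(quote("170318391002"))  # 1572.0
```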
If this sounds like redlining with extra steps, that's because the extra steps are doing a lot of work. The old redlining was visible: red lines on maps, explicitly racial categories, documented policy. The new sorting is invisible, defended as individualized risk assessment, and extremely difficult to prove discriminatory because the model's reasoning cannot be examined.
The map is hidden. The lines are still there.
The Actuarial Defense
Insurers have a powerful rhetorical shield: "We're just measuring risk."
This framing positions pricing as discovery. The risk exists. The model finds it. The premium reflects it. The insurer is merely the messenger, translating actuarial truth into dollars.
This isn't wrong, exactly. It's incomplete in a way that does political work.
Yes, some people are genuinely higher risk. Yes, pooling them with lower-risk populations creates cross-subsidies. Yes, more precise measurement allows more accurate pricing.
But "accurate pricing" is a policy choice, not a natural law. We could choose to pool risks more broadly. We could choose to limit what factors insurers can consider. We could choose to prioritize access over precision. Some jurisdictions do exactly this: community rating, guaranteed issue, prohibited factors.
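The arithmetic behind that choice is not complicated. In the toy two-person pool below, with invented costs, risk-based pricing and community rating both collect the same total; the only thing that changes is who pays what:

```python
# Toy contrast between risk-based pricing and community rating.
# Expected costs are invented; both approaches cover the same total.
expected_costs = {"low_risk": 2_000, "high_risk": 10_000}

risk_based = dict(expected_costs)                               # price each person's own risk
community_rate = sum(expected_costs.values()) / len(expected_costs)
community = {name: community_rate for name in expected_costs}   # pool the risk evenly

print("risk-based:", risk_based)   # {'low_risk': 2000, 'high_risk': 10000}
print("community: ", community)    # {'low_risk': 6000.0, 'high_risk': 6000.0}
```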
The actuarial framing makes these choices invisible. It presents hyper-individualized pricing as the natural state and solidarity-based pooling as distortion. The politics vanish into math.
Meanwhile, the people whose data marks them as expensive don't get to see the math. They get to see the price.
When Prediction Becomes Prescription
Here's where insurance opacity gets philosophically uncomfortable.
A credit score predicts whether you'll default. An insurance risk score predicts whether you'll file claims. Both predictions then change the conditions under which they get tested.
If the model says you're high risk, you pay more. Paying more strains your budget. Financial strain correlates with worse health outcomes, deferred maintenance, higher stress. The predicted risk becomes more likely because the prediction made it so.
This is particularly acute in health insurance. If premiums are unaffordable, people skip coverage. Without coverage, they defer care. Deferred care leads to worse outcomes. The model was right: they were expensive. The model helped make them that way.
Prediction, when it carries consequences, stops being mere observation. It becomes intervention. The model is not a mirror. It is a hand on the scale.
And because the model's reasoning is opaque, the affected person cannot trace this loop. They experience the price as a fact about themselves, not as a constructed outcome of a system designed to sort them.
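The loop itself is easy to write down. In the toy model below, with invented coefficients, the risk score sets the premium, the premium adds financial strain, and strain raises the realized risk the score will later confirm:

```python
# Toy feedback loop: a risk score sets the premium, the premium adds financial
# strain, and strain raises next year's realized risk. Coefficients are invented.

def simulate(label: str, risk: float, strain_coupling: float, years: int = 5) -> None:
    """Trace how a priced-in prediction can help produce the outcome it predicts."""
    print(label)
    for year in range(1, years + 1):
        premium = 1_000 * (1 + risk)                 # this year's price follows the score
        strain = strain_coupling * premium / 1_000   # crude proxy for budget pressure
        risk = min(1.0, risk * (1 + strain))         # strain feeds into next year's risk
        print(f"  year {year}: premium ${premium:,.0f}, risk going forward {risk:.2f}")

simulate("prediction as observation", risk=0.20, strain_coupling=0.0)
simulate("prediction as intervention", risk=0.20, strain_coupling=0.25)
```

With the feedback channel switched off, the score is a neutral observation and the premium holds steady. Switch it on and the same starting risk compounds year after year, which is the difference between a mirror and a hand on the scale.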
What Interrogation Would Look Like
Apply the Clarke test. If insurance pricing were genuinely interrogable, policyholders would need:
Factor transparency. What specific data points influenced my premium? Not categories. The actual inputs.
Weight disclosure. How much did each factor contribute? If my neighborhood added $200 to my annual premium, I want to know that.
Counterfactual modeling. What would I need to change to get a different price? Is it even possible, or are some factors (location, age, prior claims) effectively immutable? (A sketch of what such a query could look like follows this list.)
Algorithmic audit rights. Can I request an explanation of how my risk score was calculated? Can I challenge it with evidence?
A substantive appeals process. If I believe the model is wrong about me, where do I take that argument? Who evaluates it?
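None of this is technically exotic. Against a simple additive rating model, factor transparency, weight disclosure, and counterfactual queries amount to a few lines of code. The sketch below is hypothetical: the factors, weights, and structure are invented, and real rating models are both proprietary and far more complex:

```python
# Hypothetical sketch of factor transparency and a counterfactual query against
# a simple additive pricing model. Factors and weights are invented.

BASE_RATE = 600.00

# Dollar contribution of each rating factor to the annual premium.
WEIGHTS = {
    "territory_high_claims":  200.00,   # flag: 1 = flagged, 0 = not
    "hard_braking_per_100mi":  12.50,   # per event
    "late_night_driving_pct":   3.00,   # per percentage point
    "prior_claims":           150.00,   # per claim
}

def explain(profile: dict) -> dict:
    """Each factor's dollar contribution to this policyholder's premium."""
    contributions = {k: WEIGHTS[k] * v for k, v in profile.items()}
    contributions["base_rate"] = BASE_RATE
    contributions["total_premium"] = sum(contributions.values())
    return contributions

def counterfactual(profile: dict, factor: str, new_value: float) -> float:
    """How much the premium would change if this one factor were different."""
    changed = dict(profile, **{factor: new_value})
    return explain(changed)["total_premium"] - explain(profile)["total_premium"]

me = {
    "territory_high_claims": 1,
    "hard_braking_per_100mi": 4,
    "late_night_driving_pct": 10,
    "prior_claims": 0,
}

for factor, dollars in explain(me).items():
    print(f"{factor:>24}: ${dollars:,.2f}")

delta = counterfactual(me, "hard_braking_per_100mi", 2)
print(f"halving hard-braking events would change the premium by {delta:+,.2f}")
```

The hard part is not the computation. It is that no insurer is obliged to expose anything like `explain()` or `counterfactual()` to the person being priced.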
Some jurisdictions require rate filing. Some require actuarial justification for pricing categories. None, to my knowledge, grant individual policyholders the right to interrogate their specific risk score.
You can shop around. You can get quotes. You can compare prices and infer, vaguely, that something about your profile is triggering higher rates. You cannot open the box.
The Intimacy of Opacity
There's something particularly grim about insurance opacity.
Credit scoring evaluates your financial past. Insurance pricing evaluates your physical future. It looks at your body, your behavior, your location, your habits, and it renders a judgment about what kind of risk you are.
That judgment follows you. It affects what you can afford to protect. It shapes decisions about where to live, what to drive, whether to seek care.
And you cannot see the reasoning. You experience the price as weather: something that happens to you, not something you can argue with.
This is Clarke's warning made intimate. The system knows you thoroughly. You cannot know it at all. The asymmetry is total.
Any sufficiently advanced surveillance is indistinguishable from fate.
Tomorrow
Content moderation ranking. Where the algorithm decides what speech gets seen, what gets buried, and what gets you banned. The opacity moves from economic gatekeeping to epistemic gatekeeping.
Same question: Where does opacity end debate?
(Spoiler: the community guidelines are not the actual rules.)
Catch up on the full series:
- Ep 7: [The Number That Speaks for You]
- Ep 6: [The Authority of the Unknowable]
- Ep 5: [The Calvin Convention]
- Ep 4: [The Watchdog Paradox]
- Ep 3: [The Accountability Gap]
- Ep 2: [The Liability Sponge]
- Ep 1: [We Didn't Outgrow Asimov]
Enjoyed this episode? Subscribe to receive daily insights on AI accountability.
Subscribe on LinkedIn