
Pointers in the Dark
Liezl Coetzee
Accidental AInthropologist | Human-AI Decision Systems for Social Risk, Accountability & Institutional Memory
March 19, 2026 | Episode 77
Joscha Bach often points to two very different modes of cognition.
One is continuous. Geometric, relational, high bandwidth. It is the way a body senses a room before the mind has named the lamp, the chair, the danger, the invitation, or the mood. The felt whole before the labeled parts. How you read a face before you can explain what in the face you read.
The other is discrete. Symbolic, compositional, tokenized. It takes that rich continuous field and breaks it into manageable units that can be named, stored, compared, transferred, scored, and acted upon. It is how institutions make a world legible enough to discuss.
Both are necessary.
They should probably stop being confused with each other.
That confusion is where much of today's AI governance theater lives.
A word is not the thing. A category is not the community. A risk score is not the weather system from which that score was extracted. Every symbolic system is, among other things, a compression device. It gives us handles. Handles are useful. Handles are not reality. The problems begin once institutions start acting as though the handle preserves the full relational weight of what it touched.
This is the quiet risk embedded in a great deal of enterprise AI deployment. When a model summarizes a grievance log, classifies stakeholder sentiment, tags incident reports, ranks vulnerability signals, or slots a community issue into an ESG matrix, it is performing a compression event. Messy, continuous, lived, situated reality gets squeezed into discrete symbolic form.
Sometimes that is necessary. Often it is useful. It can even be the only practical way to move information across teams, systems, geographies, and decision layers.
It is never neutral.
Something is always lost in translation. The question is whether anybody has mapped where the loss happened, what kind of loss it was, and whether it matters for the decision now being made.
That last part is where governance usually gets fuzzy.
Too many frameworks still focus on output confidence, model performance, or procedural compliance while treating the translation layer as mere technical plumbing. It is not plumbing; it is epistemic infrastructure. It determines what counts as signal, what gets flattened into a token, what gets stripped of context, and what becomes invisible because the symbolic frame had no clean place to hold it.
A mining town does not arrive at a model as a mining town. It arrives as a collection of representations. Logs. Categories. Summaries. Sensor streams. Survey results. Escalations. Complaint text. Proxy metrics. Historical patterns. Missing data. Whatever could be ingested, normalized, and made machine-readable. By the time the system returns a neat matrix score, much of the original terrain may already have been paved over.
That score may still be useful. Usefulness and fidelity, however, keep very different company.
Sometimes the token captures the geometry of a human face and loses the weather behind the eyes.
That is a governance problem.
If you cannot trace the lineage of an answer back through the chain of transformations that produced it, you are governing presentation. You are trusting that the system preserved the relational weight that mattered without requiring it to prove that it did. This is why data provenance matters, as a way of asking a serious question: what was this, before it became legible to the machine?
And then the harder question: what disappeared in the making of that legibility?
That is the practical edge of today's episode.
🎵 Pointers in the Dark
Today's anchor track is Pointers in the Dark. A 104 BPM electro-swing noir duet with a trip-hop undertow. Smoky club mood, vintage swing on top, modern low-end pulse underneath. The contrast is the point. Structurally, the track performs the very problem the episode is about.
One voice moves in fluid legato, carried by a clarinet motif that bends and breathes. It sings from the side of continuity, shape, perception. It encounters the whole before the parts. It knows the curve before the word.
The other voice enters in clipped, quantized operators, punctuated by muted trumpet stabs that land like rubber stamps on air. It sorts. Resolves. Assigns. Points. It turns gradients into handles and moving wholes into stable references. It does what symbolic systems do: makes the world tractable enough to pass through procedure.
The point is that governance lives in the gap between them.
The chorus says it plainly: "We live between the shape of it and the reason why."
That line does more work than it first appears to. It names the actual operating zone of human and machine coordination. Humans do not live purely in raw continuity. Machines do not live purely in abstraction. Institutions certainly do not. The constant movement is between felt world and formal representation, between perception and procedure, between the thing and the token that stands in for it.
The danger begins when standing-in gets mistaken for full possession.
The track is not background music for a governance point. It is an enactment of the governance point. Two motifs braiding, then cutting. The bridge strips back to close-mic and kick: "Sometimes the bridge shakes. Sometimes the label slips." Lossy translation, staged as duet.
Watch / listen: Pointers in the Dark https://youtu.be/H_6RxmGKUQ0
What the translation layer actually costs
If your organization is using AI to support decisions about people, communities, safety, environmental exposure, labor conditions, social risk, or institutional memory, this is operational.
You need to know what form the input took before it became machine-readable. What context was discarded to make it portable. Which features survived because they were easy to encode. Which ones vanished because they were relational, local, embodied, or inconveniently shaped. What confidence is being claimed for the output that the translation process does not actually justify.
That is where rigor belongs. Upstream, in the translation itself.
Because an answer can be internally consistent, procedurally compliant, and still profoundly wrong about the thing it claims to represent.
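Those upstream questions can be made concrete. What follows is a minimal sketch, not a real library: the names (`TransformStep`, `Lineage`) and the example pipeline are invented for illustration. The idea is simply that a record of each compression event travels with the answer, so a decision-maker can later replay what was dropped to make the output portable.

```python
from dataclasses import dataclass, field

@dataclass
class TransformStep:
    """One compression event: what went in, what came out, what was lost."""
    stage: str          # e.g. "complaint text -> incident tag"
    input_form: str     # what the data looked like before this step
    output_form: str    # the symbolic handle it became
    dropped: list[str]  # context discarded to make the output portable

@dataclass
class Lineage:
    """The chain of transformations behind a single delivered answer."""
    steps: list[TransformStep] = field(default_factory=list)

    def record(self, step: TransformStep) -> None:
        self.steps.append(step)

    def losses(self) -> list[str]:
        """Everything stripped away, across the whole chain."""
        return [item for step in self.steps for item in step.dropped]

# A hypothetical two-step chain, from complaint text to a matrix score.
lineage = Lineage()
lineage.record(TransformStep(
    stage="complaint text -> incident tag",
    input_form="free-text grievance, local idiom, named relationships",
    output_form="category: 'water access'",
    dropped=["who raised it", "history with the operator", "tone"],
))
lineage.record(TransformStep(
    stage="incident tag -> ESG matrix cell",
    input_form="category: 'water access'",
    output_form="risk score: 2 (low)",
    dropped=["frequency of similar complaints", "seasonal context"],
))

for loss in lineage.losses():
    print(loss)
```

Nothing here is sophisticated, and that is the point: the discipline is in requiring the `dropped` field to be filled in at every step, not in the data structure itself.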
🎵 The Chinese Room Dance
The companion track, The Chinese Room Dance, comes at the same problem from a different angle. Where Pointers in the Dark stages the distance between shape and symbol, The Chinese Room Dance asks what gives anyone the right to treat human symbol manipulation as categorically meaningful while dismissing machine symbol manipulation as "just squiggles."
That old philosophical move has always done a little more smuggling than its defenders admit.
Yes, a machine may be "just manipulating symbols." So is much of institutional life. Analysts with templates. Auditors with frameworks. Policymakers with categories. Philosophers with arguments they treat as though they arrived unsullied by the representational machinery that carries them.
The track pushes that burden of proof back onto the critic. Musically it is a Balkan-swing noir procession at 128 BPM, darkly jolly, with a clarinet logic motif and a fiddle playing nervous ornaments like a philosopher trying to keep up with the machine's speed. The spoken intro lays down the gauntlet: "You say I'm just manipulating squiggles. But tell me... what are you doing with your symbols?"
If the system is merely completing prompts, what exactly are the rest of us doing when we nod along to familiar abstractions, reward the expected categories, and call that understanding?
The track does not claim the machine is conscious and therefore vindicated. (That would be too easy and would also ruin the fun.) It asks the more useful question: how is understanding being defined, and does that definition actually distinguish human meaning-making from well-trained symbolic performance?
That matters for governance, because institutions are full of Chinese Rooms wearing name badges.
Watch / listen: The Chinese Room Dance https://youtu.be/LjXgKuXW8bw
The actual governance task
Raw continuity can be overwhelming, inarticulate, and impossible to operationalize. Symbolic compression is how coordination happens. Neither needs romanticizing or demonizing. What needs better discipline is the distance between them.
Systems that support consequential decisions should preserve enough lineage, uncertainty, and transformation history that a human decision-maker can ask whether the symbolic form still bears the weight of the world it came from.
Crispness should stop being rewarded when it was purchased by stripping away the very context that made the situation ethically, socially, or politically real.
And governance language should be able to recognize a central fact: the output is often least dangerous precisely when it looks most finished.
That is the seduction.
The matrix is tidy. The category is named. The answer is delivered in a tone that implies the mud has been successfully converted into reason.
The mud is still there. It was always still there.
The real work is to map the exact distance between the symbol and the mud, and then decide whether the gap matters for this decision, in this institution, for these people, under these conditions.
That is harder than asking whether the model is accurate.
It is also much closer to asking whether the system is trustworthy.
The key question: When the system gives you an answer, what was the shape of the thing before it became a token, and does the gap matter for this decision?
#SociableSystems #AIGovernance #DataProvenance #JoschaBach #PointersInTheDark #ChineseRoomDance #InstitutionalMemory #AIandESG #HumanAIDecisionSystems
Enjoyed this episode? Subscribe to receive daily insights on AI accountability.
Subscribe on LinkedIn