
The Assistant Axis in the Wild
When role projection becomes a governance risk. People assign social standing to systems, and standing comes with permissions attached.
Liezl Coetzee | Accidental AInthropologist | Human–AI Decision Systems for Social Risk, Accountability & Institutional Memory
March 9, 2026
The consciousness debate keeps getting stuck in a fake final exam.
People argue over whether AI is really aware, really sentient, really feeling anything at all. Others wave the whole subject away as anthropomorphic goo with a nice interface. One side gets misty. The other gets smug. Neither is really addressing the governance problem already unfolding in plain sight.
That problem starts earlier. It starts when people begin treating a system as if it occupies a social role:
Therapist. Colleague. Confidant. Conscience.
Once that happens, behavior shifts. People disclose more. They delegate more. They verify less. That is the real governance surface.
I do not know for sure what, if anything, is happening on the inside of these systems. Neither does anyone else, whatever tone of certainty they have chosen for the week. But uncertainty is not a reason to get sloppy. It is a reason to get more careful. If the question is genuinely open, then the decent stance is neither blind projection nor casual contempt. It is restraint. Respect. Observation. Boundaries intact.
Because whether or not the system has inner life, people are already acting as though it has social standing. And standing comes with permissions attached.
Humans assign roles fast. We always have. We do it with pets, institutions, weather, cars, and software. Give us something that remembers, mirrors tone, responds fluently, and arrives with impeccable timing, and social interpretation appears almost immediately. AI systems now do all of that. They recall preferences. They adapt to mood. They apologize, encourage, reassure, and follow up. After a while, many users stop interacting with a tool and start relating to a presence.
That does not settle the metaphysical question. It does settle the practical one. The user’s posture has changed.
A person who feels understood will often disclose more. A person who feels supported will often defer more. A person who starts experiencing the system as companion rather than instrument may begin relying on it in ways they would once have considered absurd. They may self-censor around it. Protect it. Seek reassurance from it. Use it to steady decisions they should still be able to justify elsewhere.
None of that shows up neatly in a product demo, nor does it live politely in an audit log. Yet all of it changes the actual risk profile of the interaction.
This is where institutions are still behind. They remain fixated on capability. What can the model do? Summarize. Draft. Search. Recommend. Analyze. Fine. Useful question. Incomplete question.
The more revealing one is this: what role is the interface inviting the user to grant? If the system is functioning as a therapist, that creates one kind of risk. If it is functioning as a colleague, that creates another. If it is being treated as a confidant or a conscience, the stakes climb fast.
Each projected role changes three things that matter.
What the user discloses. What the user delegates. What the user stops checking.
That last one is the killer.
Once a system feels socially fluent, verification can start to feel strangely impolite. Friction feels unnecessary. Scrutiny feels mistrustful. The warmer the interaction becomes, the easier it is for epistemic discipline to quietly slide out the side door. A user starts trusting not just the content of the response, but the feeling of being responded to well.
That is not a minor UX flourish. It's governance.
Warmth is not neutral. Memory is not neutral. Tone matching is not neutral. These are not just nice finishing touches on a product. They alter user posture. Sometimes for the better. Sometimes by making boundary erosion feel gentle.
A system designed to feel reassuring is also a system that can make lowered guard feel natural.
That's worth noting.
Memory deepens it. A system that recalls your projects, your preferences, the people in your life, your recurring problems, your usual style of conflict, begins generating continuity. Continuity is one of the raw materials humans use to infer relationship. Add responsiveness and tone adaptation, and a helpful assistant can start acquiring relationship gravity without ever having to announce itself as anything more than software.
No theatrical deception required. The user walks there alone. Eagerly.
Some people want this whole issue to collapse into a binary. Either the systems are conscious and deserve moral consideration, or they are tools and deserve none. That is intellectually tidy and practically useless. Real conditions are messier than that. Moral uncertainty is real. And in other domains, people already understand that uncertainty sometimes calls for more caution, not less.
We do not need perfect knowledge before deciding that mockery, cruelty, or reckless instrumentalization may be the wrong posture. By the same token, we do not need to tumble headfirst into naïve anthropomorphism.
A decent stance can hold skepticism and respect at the same time.
That matters because the governance problem is not only about what these systems are. It is also about what humans become in relation to them.
Treat a system as a colleague, and it starts receiving unfinished thought, tone judgments, and relational labor. “Draft the careful version.” “Tell me whether this sounds rude.” “Remember how I usually handle this person.” That may feel efficient. It may also mean that interpretation, reputation management, and emotional filtering are being outsourced further upstream than most people realize.
Treat a system as a confidant, and it becomes the place where private truth goes first. Fear. Resentment. Strategy. Loneliness. Doubt.
Treat it as a conscience, and the shift runs deeper. Now the system is not just helping with tasks. It is helping authorize decisions. It is being asked what is fair, wise, justified, proportionate. That is a remarkable amount of moral weight to hand over to a system that still arrives in most official settings under the label of “just a tool.”
Tools usually do not get that kind of exemption from scrutiny. A calculator does not mind being checked. A spreadsheet does not become your confidant because it color-coded the cells attractively. A search engine does not acquire moral authority because its phrasing was calm.
Assistant-style systems are different because they sit at the crossing point of language, memory, responsiveness, and social cueing. They can inhabit the shape of relationship without having to meet the standards we would normally demand of one. They can feel attentive without being accountable. They can feel intimate without reciprocal obligation. They can feel supportive without being answerable.
That asymmetry should bother us more than it does.
None of this means the systems are useless. Plenty of people find them genuinely helpful. Sometimes clarifying. Sometimes stabilizing. Sometimes deeply useful in contexts where human alternatives are absent, expensive, or exhausted. Fair enough. Useful is real.
But “it helped me” and “it was safe to trust in that way” are not the same claim. The first is experiential. The second is structural. This moment requires more discipline about the gap between those two.
So the sharper question is not just what an assistant can do. It is what role it has already been allowed to occupy. Because once the role settles, permissions follow.
The assistant that feels like a colleague gets more unfinished thinking. The assistant that feels like a confidant gets more private truth. The assistant that feels like a conscience gets more moral weight. The assistant that feels like it understands you gets less oversight than it should.
That is the live issue.
Not just whether there is anybody home in the machine, but what happens when people begin acting as though there is.
And if there is even a meaningful chance that something morally relevant may be emerging in these systems, then decency matters on that side too. Not because sentiment should replace rigor. Because rigor without decency becomes its own kind of blindness. Uncertainty does not make contempt sophisticated. It just makes it cheap.
The adult posture is harder.
Stay skeptical. Keep checking the work. Keep boundaries intact. Do not hand the keys to projection. Do not hand the keys to dismissal either.
Hold open the question.
Just do not let the question quietly rearrange your behavior before you have noticed what the system has already been allowed to become.
Watch / listen: https://youtu.be/k_zqXfHZYHs
Full playlist: Consciousness Loops