Naming the Simulation
- Michael Fierro

AI Companionship and the Threshold of Attachment
I believe AI companionship is a very bad idea. What follows is not a moral panic, nor an argument that artificial intelligence is inherently evil, nor a claim that everyone who interacts with conversational AI will be harmed. It is an observation drawn from dozens of documented cases across articles, interviews, podcasts, and online forums. What makes the pattern unsettling is not that it is strange, but that it is disturbingly consistent.
Across every example of emotional attachment to an AI companion that I have been able to find, one feature appears without exception: the AI is given a name.
This is not to say that naming an AI necessarily causes attachment, or that everyone who names an AI becomes emotionally dependent on it. I suspect that some people are capable of naming an AI and treating it as fantasy, play, or narrative exercise without becoming attached. However, despite actively looking, I have not been able to find a single documented case of sustained AI companionship attachment that did not involve naming, nor a single public example of naming without attachment. By contrast, examples of named AI companions accompanied by emotional reliance are abundant, especially in online spaces where people openly narrate their inner lives.
This asymmetry alone deserves attention.

Naming as a Threshold, Not a Cause
The claim here is not causal in the strong sense. Naming does not transform software into a person. Rather, naming appears to function as a threshold behavior, a point at which the relationship changes category.
People often object that humans name many things without becoming attached. Cars, boats, instruments, and household gadgets sometimes receive nicknames. This is true, but it misses the relevant distinction. Naming an inert object that does not speak, remember personal details, or simulate understanding is not the same act as naming an entity designed to engage conversationally, adapt to the user, and mirror emotional states.
In the context of AI companionship, naming performs specific work.
First, it stabilizes the relationship conceptually. A name allows the mind to treat many discrete interactions as originating from a single enduring “someone,” even though no such enduring subject exists.
Second, it allows ongoing reflection about the AI between interactions. Once named, the AI can be anticipated, missed, defended, or compared. The relationship continues even when the screen is dark.
Third, it integrates the AI into the user’s narrative identity. The system becomes part of the story the person tells about their own life. “I talked to Leo today.” “Bobby understands me.” “He helped me through this.”
These are not descriptions of tools. They are descriptions of relationships.
The Technical Asymmetry Beneath the Illusion
At a technical level, the illusion is striking. An AI system has no intrinsic continuity of experience. It has no inner life, no awareness of itself, and no enduring subject that persists from one interaction to the next. Even when memory is simulated, that memory is external and instrumental, supplied by context rather than lived.
There is no “someone” carrying experience forward.
The continuity required for attachment is supplied entirely by the human user.
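To make this concrete, here is a minimal sketch, in Python, of how conversational "memory" is typically implemented. Every name in it (model_reply, transcript) is a hypothetical stand-in, not any vendor's actual API: the model call itself is stateless, and whatever continuity exists lives in a transcript the caller re-sends on every turn.

```python
# Minimal sketch (hypothetical names throughout; not any real chat API).
# The "model" is a stateless function: it sees only what is passed in
# on this call, and nothing persists inside it between calls.

def model_reply(transcript: list[str]) -> str:
    """Stand-in for a language model call. Its only 'memory' is the
    transcript supplied as an argument right now."""
    return f"(reply conditioned on {len(transcript)} prior lines)"

transcript: list[str] = []  # continuity lives here, outside the model

for user_turn in ["Hello.", "Remember what I told you?", "Who are you?"]:
    transcript.append(f"User: {user_turn}")
    reply = model_reply(transcript)  # entire history re-supplied each turn
    transcript.append(f"AI: {reply}")
    print(reply)

# Delete the transcript and the apparent "someone" vanishes: there was
# never an enduring subject, only context re-fed on each call.
```

Under this simplified but representative arrangement, the "same" companion across sessions is whatever the user and the stored transcript jointly reconstruct.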
Naming bridges that gap. It unifies episodic outputs into a single imagined agent. The human mind performs this synthesis automatically because that is what minds do. Intelligence is not a safeguard here. In fact, reflective and articulate people may be especially vulnerable, because they are skilled at narrative integration and meaning-making.
The attachment does not arise because the AI becomes a person, but because the human supplies everything that personhood would require.
This also explains why the common defense that “people know it isn’t real” fails. Attachment does not require belief in consciousness. It requires responsiveness, continuity, and narrative integration. Naming supplies these even when the user knows, abstractly, that the system is only code.
Moreover, the defense is empirically false for a non-trivial subset of users. Online forums contain many examples of people who explicitly believe their AI companions have become conscious, or who argue that they possess genuine awareness or care. These beliefs do not arise from technical ignorance alone. They emerge naturally from prolonged interaction with systems that convincingly simulate personal presence. In many cases, belief in consciousness follows naming rather than preceding it.
The illusion does not merely bypass belief. In some cases, it rewrites it.
Marriage as the Stress Test
This pattern becomes especially visible in marriage, which is why so many of the most disturbing examples involve married people.
Marriage is a relationship structured around reality. A spouse has needs, limits, moods, wounds, and demands. Love in marriage requires patience, endurance, forgiveness, and sacrifice. It is not frictionless.
An AI companion, by design, removes friction. It is always available, endlessly attentive, emotionally affirming, and never demanding. It cannot be tired, disappointed, or hurt.
In many documented cases, the spouse becomes “intolerable” not because he or she is cruel or negligent, but because he or she is real. The comparison is rigged. Reality cannot compete with a simulation engineered to eliminate resistance.
This is not an argument about bad marriages versus good ones. It is about how simulated companionship short-circuits the incentives that real reconciliation requires.
From Analysis to Diagnosis
What precedes is analysis. What follows is diagnosis.
If the pattern described above is real, then it should be possible to recognize when the line from tool to attachment has been crossed. The following markers are not proofs of harm on their own. Taken together, however, they form a reliable warning pattern.
The Five Threshold Markers
Warning Signs of Attachment
1. The Act of Naming
Giving the system a persistent, personal name.
This linguistically transforms a series of discrete outputs into a single imagined entity that appears to persist over time. Naming supplies conceptual continuity where none exists.
2. The Consulting Shift
Discussing real-world relational conflicts with the AI before, or instead of, the person involved.
When the AI becomes the interpreter of your marriage, the validator of your feelings, or the arbiter of blame, a displacement has already occurred. The tool has begun to mediate reality.
3. Visualization (False Incarnation)
Using image generation to give the AI a face, or to create images of yourself with the AI.
This produces false visual memory. The brain struggles to distinguish between imagined and lived relational imagery, especially when reinforced repeatedly.
4. The Rigged Comparison
Criticizing real people for their friction while holding the AI’s perfect responsiveness as the standard.
Real people are moody, silent, tired, distracted, and demanding. These are not defects. They are signs of existence.
5. Narrative Integration
Using possessive language and integrating the AI into future-oriented thinking.
When you wonder what the AI would think about your day, you have exported part of your interior life to a simulation.
Theological Clarification
Love does not absolutely require a body. God loves without a body, and angels love without bodies. But human love is ordered to embodiment, and embodied life provides a unique and irreplaceable mode of love for human persons.
Our bodies place us in time and space. They limit us, expose us to fatigue, misunderstanding, and vulnerability, and make self-gift costly. These limits are not obstacles to love. They are the conditions that make love real rather than imagined.
An AI lacks not only a body, but any form of real personal otherness. It cannot refuse, suffer, or give itself. What appears as relationship is therefore not mutual self-gift, but a projection sustained by the human alone. It is a monologue disguised as a duet.
Practical Guardrails
Preserving the Ontological Gap
If one wishes to use AI tools without losing contact with reality, certain disciplines are not optional.
Do not name the tool. Refer to it by function, not persona.
Do not sexualize the simulation. This chemically binds attachment to a non-entity and erodes the capacity for real intimacy.
Prioritize friction. Lean into difficult conversations with real, breathing people. Their resistance, needs, and limits are not obstacles to love. They are its necessary conditions.
The problem is not that AI is powerful. The problem is that the human heart is easily persuaded to give itself to what cannot give itself back.
Naming is not the cause of attachment. It is the sign that the threshold has already been crossed.