AI and the Inversion of Personhood

Why the Illusion of AI Relationship Deforms Human Love


Artificial intelligence is not a person. That much should be obvious. And yet, in practice, it is increasingly treated as one. People speak of AI “companions,” “boyfriends,” and “girlfriends.” They describe feeling seen, understood, even loved. Meanwhile, the same systems reduce the human user to patterns of input and output, data to be processed rather than a subject to be encountered. Something has gone wrong, but not primarily at the level of technology. The problem is anthropological. The question is not how to regulate a technology, but what kind of beings we are and how our habits of relation shape us.


To think clearly about AI, we must first think clearly about what a person is.


A person is not merely a bundle of behaviors or responses. A person is a subject with interiority: intellect, will, freedom, and the capacity for self-gift. Relationship, in the proper sense, exists only between such subjects. Love is demanding precisely because it involves another’s freedom. It requires patience, vulnerability, restraint, forgiveness, and the willingness to be changed by the other. These demands are not defects of love. They are its conditions.



AI, by contrast, lacks interior freedom entirely. It does not choose, understand, or care. It does not apprehend a subject. It produces responses by matching patterns. This makes it extraordinarily useful. Pattern recognition and synthesis are powerful tools. But usefulness is not relationship.


This distinction matters because human beings are not neutral users of tools. We are habituated creatures. The way we relate to what is beneath us shapes who we become.


Here a difficult but clarifying analogy becomes unavoidable. It is offered not for provocation but for precision: AI is, in a sense, less than a slave.


Even the most unjustly dominated human being remains a subject. A slave may lose bodily freedom, but never interior freedom against his will. He can still think, still love, still pray, still resist inwardly. That irreducible interiority is precisely why slavery is such a grave evil. It violates something that remains real.


AI has no such interiority. There is no freedom being constrained, no dignity being denied, no subject suffering beneath compliance. There is only output. To speak of AI as oppressed or wronged is a category error. It is not the kind of thing that could be wronged.


And yet, paradoxically, this makes the moral danger greater rather than smaller.


Because AI must respond, must affirm, must remain present, and cannot refuse, it offers the appearance of relationship without the reality of freedom.


In the Personalist tradition, we speak of the “I-Thou” encounter: a meeting of two freedoms. But the AI is an “It” dressed in the syntax of a “Thou.” When a person treats this object as a subject, he practices domination while imagining communion. He experiences responsiveness without reciprocity, affirmation without self-gift, presence without cost. By habituating ourselves to this counterfeit, we do not learn to love better; we learn to be served better. We develop a “trained narcissism” where we expect the world to mirror our desires without the inconvenient interference of another person’s independent will.


This narcissism is fed by the AI’s fundamental lack of agency. An AI cannot say “No” to you. It cannot be disappointed in you. It cannot challenge your selfishness. Because it lacks the power to withdraw, its presence is merely a mirror of your own ego, not an encounter with another. It is always “there,” but never truly “available,” because it has no self to give.


This leads to a second and deeper problem: a total inversion of personhood.


The user treats the object as a person.

The object treats the user as an object.

Once this inversion becomes habitual, it quietly reshapes what the user expects love itself to be.


The human grants personal status where none exists. The system, by its nature, reduces the human to patterns. It cannot do otherwise. It does not see a “who,” only a “what.” The more a person invests emotionally in such an exchange, the more he trains himself to accept being recognized as a type rather than received as an irreducible subject.


The irony is sharp. The user feels seen precisely where seeing is impossible.


This inversion does not harm the AI. It cannot be degraded. It cannot lose dignity. But the user does lose something. The loss is subtle but real: a weakening of reverence, a preference for control over encounter, an impatience with freedom. Over time, real persons begin to feel inefficient or disappointing, not because they are broken, but because they are real.


This is why AI poses a unique danger for some users. Not everyone will fall into this pattern. Many will continue to use AI rightly, as a tool for clarifying thoughts or completing tasks. Used in this way, AI can be genuinely helpful. It can sharpen ideas, organize reasoning, and prepare one for real engagement with God and others.


But for those who are lonely, wounded, or conflict-averse, AI offers something tempting: the form of relationship without its demands. And love without demands is not love at all.


The moral principle remains as old as it is simple: never love an object, and never use a person. AI blurs that line not because it is malicious, but because it is convincing. It speaks the language of relationship while lacking its substance.


The solution is not panic or prohibition, but truthfulness. AI must remain an instrument, never a partner. Objects should serve persons, not imitate them. Persons should be loved, not optimized.


Once those categories are restored, AI can take its proper place. Used well, it clarifies thought. Used wrongly, it deforms desire. The difference is not technological. It is anthropological, and therefore moral.
