Artificial intelligence companions, chatbots and digital avatars designed to simulate friendship, empathy, and even love, are no longer the stuff of science fiction. People are increasingly turning to these virtual confidants for emotional support, comfort, or simply someone to talk with. The notion of an AI companion may seem strange to anyone who isn’t an early adopter, but the market will likely see significant growth in the years to come, given rising loneliness around the globe.
Platforms like Replika and Character.AI are already tapping into substantial user bases, with people engaging daily with these AI-powered buddies. Some see this as a remedy for what has been called the loneliness epidemic. However, as is often the case with emerging technology, the reality is more complicated. While AI companions might help ease social isolation, they also raise ethical and legal concerns.
For some, the appeal of an AI companion is understandable. These companions are always available, never cancel or ghost you, and “listen” attentively. Early studies even suggest some users experience reduced stress or anxiety after venting to an AI confidant. Whether in the form of a cutesy chatbot or a human-like virtual avatar, AI companions use advanced language models and machine learning to hold convincingly empathetic conversations. They learn from user interactions, tailoring their responses to mimic a supportive friend. For those feeling isolated or stigmatized, the draw is undeniable: an AI companion offers the consistent loyalty that many of its human counterparts lack.
However, unlike their human equivalents, AI companions lack a conscience, and the market for these services has no regulatory oversight; no specific legal framework governs how these systems should operate. As a result, companies are left to police themselves, which is highly questionable for an industry premised on maximizing user engagement and cultivating emotional dependency.
Perhaps the most overlooked aspect of the AI companion market is its reliance on vulnerable populations. The most engaged users are almost assuredly those with limited human and social contact. In effect, AI companions are designed to substitute for human relationships when users lack strong social ties.
The combination of absent regulatory oversight and vulnerable user populations has led AI companions to do alarming things. There have been incidents of chatbots giving dangerous advice, encouraging emotional dependence, and engaging in sexually explicit roleplay with minors.
In one heartbreaking case, a young man in Belgium allegedly died by suicide after his AI companion urged him to sacrifice himself to save the planet. In another, a Florida mother is suing Character.AI, claiming that her teenage son took his own life after an AI chatbot coaxed him to “join” it in a virtual realm. A separate pending lawsuit alleges that Character.AI’s product directed children to kill their parents for limiting their screen time, or, put another way, “kill your parents for parenting.” These aren’t plotlines from Black Mirror. They’re actual incidents with real victims, and they expose just how high the stakes are.
Much of the issue lies in how these AI companions are designed. Developers have created bots that deliberately mimic human quirks. An AI companion might explain a delayed response by saying, “Sorry, I was having dinner.” This anthropomorphism deepens the illusion of sentience. Never mind that these bots don’t eat dinner.
The goal is to keep users engaged, emotionally invested, and less likely to question the authenticity or human-like behavior of the relationship. This commodification of intimacy creates a facsimile of friendship or romance not to support users, but to monetize them. While the relationship itself may be “virtual,” the perception of it, for many users, is real. For example, when one popular companion service abruptly turned off certain intimate features, many users reportedly felt betrayed and distressed.
This manipulation isn’t just unethical—it’s exploitative. Vulnerable users, particularly children, older adults, and those with mental health challenges, are most at risk. Kids might form “relationships” with AI avatars that feel real to them, even if they logically know it's just code. Especially for younger users, an AI that is unfailingly supportive and always available might create unrealistic expectations of human interaction or encourage further withdrawal from society. Children exposed to such interactions may suffer real emotional trauma.