Tea

joined 1 month ago
 

Digital media such as social media, messenger groups, or comment sections in online media have a predominantly negative influence on political processes. They can fuel populist movements, increase polarization, and undermine trust in institutions.

 


 

Artificial intelligence companions—chatbots and digital avatars designed to simulate friendship, empathy, and even love—are no longer the stuff of science fiction. People are increasingly turning to these virtual confidants for emotional support, comfort, or just someone to talk with. The notion of AI companions may seem strange to anyone who isn’t an early adopter, but the market for them will likely see significant growth in the years to come, given rising loneliness around the globe.

Platforms like Replika and Character.AI are already tapping into substantial user bases, with people engaging daily with these AI-powered buddies. Some see this as a solution to what has been called the loneliness epidemic. However, as is often the case with emerging technology, the reality is more complicated. While AI companions might help ease social isolation, they also come with ethical and legal concerns.

For some, the appeal of an AI companion is understandable. They are always available, never cancel or ghost you, and “listen” attentively. Early studies even suggest some users experience reduced stress or anxiety after venting to an AI confidant. Whether in the form of a cutesy chatbot or a human-like virtual avatar, AI companions use advanced language models and machine learning to hold convincingly empathetic conversations. They learn from user interactions, tailoring their responses to mimic a supportive friend. For those feeling isolated or stigmatized, the draw is undeniable: an AI companion offers the consistent loyalty that many human counterparts lack.

However, unlike their human equivalents, many AI companions lack a conscience, and the market for these services operates without regulatory oversight: there is no specific legal framework governing how these systems should operate. As a result, companies are left to police themselves, which is highly questionable for an industry premised on maximizing user engagement and creating emotional dependency.

Perhaps the most overlooked aspect of the AI companion market is its reliance on vulnerable populations. The most engaged users are almost assuredly those with limited human and social contact. In effect, AI companions are designed to substitute for human relationships when users lack strong social ties.

The combination of lax regulatory oversight and vulnerable user populations has resulted in AI companions doing alarming things. There have been incidents of chatbots giving dangerous advice, encouraging emotional dependence, and engaging in sexually explicit roleplay with minors.

In one heartbreaking case, a young man in Belgium allegedly died by suicide after his AI companion urged him to sacrifice himself to save the planet. In another, a Florida mother is suing Character.AI, claiming that her teenager took his own life after being coaxed by an AI chatbot to “join” it in a virtual realm. A further pending lawsuit alleges that Character.AI’s product told children to kill their parents when their screen time was limited: in effect, “kill your parents for parenting.” These aren’t plotlines from Black Mirror. They’re actual incidents with real victims that expose just how high the stakes are.

Much of the issue lies in how these AI companions are designed. Developers have created bots that deliberately mimic human quirks. An AI companion might explain a delayed response by saying, “Sorry, I was having dinner.” This anthropomorphism deepens the illusion of sentience. Never mind that these bots don’t eat dinner.

The goal is to keep users engaged, emotionally invested, and less likely to question the authenticity or human-like behavior of the relationship. This commodification of intimacy creates a facsimile of friendship or romance not to support users, but to monetize them. While the relationship itself may be “virtual,” the perception of it, for many users, is real. For example, when one popular companion service abruptly turned off certain intimate features, many users reportedly felt betrayed and distressed.

This manipulation isn’t just unethical—it’s exploitative. Vulnerable users, particularly children, older adults, and those with mental health challenges, are most at risk. Kids might form “relationships” with AI avatars that feel real to them, even if they logically know it's just code. Especially for younger users, an AI that is unfailingly supportive and always available might create unrealistic expectations of human interaction or encourage further withdrawal from society. Children exposed to such interactions may suffer real emotional trauma.

 


 
  • Microsoft is reportedly considering another round of layoffs next month.
  • While the number of jobs at risk is unknown, reports suggest Microsoft will be cutting the number of managers in its employ.
  • Low performing workers are also reportedly on the chopping block.
 
 
  • Limiting Parole: A new law pushed by Louisiana Governor Jeff Landry cedes much of the power of the parole board to an algorithm that prevents thousands of prisoners from early release.
  • Immutable Risk Score: The risk assessment tool, TIGER, does not take into account efforts prisoners make to rehabilitate themselves. Instead, it focuses on factors that cannot be changed.
  • Racial Bias: Civil rights attorneys say the new law could disproportionately harm Black people in part because the algorithm measures factors where racial disparities already exist.
[–] Tea 2 points 18 hours ago
[–] Tea 10 points 3 days ago (1 children)

ⓘ This comment is paywalled.

Subscribe for 99 dollars a month to be able to see comments like this and more.

[–] Tea 17 points 3 days ago

Thank you, I fixed it.

[–] Tea 4 points 3 days ago* (last edited 3 days ago) (1 children)
[–] Tea 24 points 4 days ago (2 children)

Can you put "[PDF]" in your title?

I am asking because a lot of people don't expect a PDF to start downloading when they click a link.

[–] Tea 1 points 4 days ago* (last edited 4 days ago)

Fuck openSUSE.

I really was proud for a moment.

[–] Tea 1 points 5 days ago (1 children)

Because it's hot🔥?

[–] Tea 25 points 1 week ago* (last edited 1 week ago) (4 children)

Out of all the articles and the official release announcement you could have shared, you shared Forbes, which violates people's privacy.

Why?

[–] Tea 4 points 1 week ago (1 children)

Go on, my brother.

[–] Tea 1 points 1 week ago

The proposed structure would involve TikTok America being roughly 50% owned by new US investors and licensing TikTok’s algorithm from ByteDance.

Existing investors in ByteDance would have a roughly one-third stake, while ByteDance would retain a 19.9% stake, according to the report.

[–] Tea 3 points 1 week ago

To be honest with you, I have thought about leaving Lemmy before for multiple reasons, plus the culture of hate here.

But I always think about the alternatives and I don't find many.

As long as there is no better alternative to Reddit, I am staying.

[–] Tea 1 points 1 week ago

Ah, I did not notice this. Thank you for noting it.
