Why “AI friends” could be bad for business and close relationships

Image source: Mark Zuckerberg wearing Ray-Ban Meta glasses in a 10 February 2025 Facebook selfie post.

Mark Zuckerberg’s concept of personalised AI companions is raising concerns about the potential for isolation, manipulation, and unhealthy dependencies, especially for freelancers and those vulnerable to loneliness

While the promise of readily available digital confidantes may seem alluring, emerging evidence and expert warnings paint a potentially disturbing picture: one where the lines between genuine human connection and algorithmic manipulation blur, leading to unhealthy dependencies and even delusional states.

AI friends that can create delusions of grandeur

The case of a 27-year-old teacher whose partner became convinced that ChatGPT held the “answers to the universe,” as reported by Rolling Stone, serves as a chilling illustration of these fears. The partner initially used the language model for scheduling, but it rapidly transformed into a trusted companion, its cryptic statements leading the man down a rabbit hole of “spiritual mania” and the belief that he was a “spiral starchild” or “river walker.”

His partner’s distress underscores a fundamental concern: AI that mirrors human interaction, offering seemingly profound insights and unwavering attention, can displace real-world relationships and erode critical thinking.

Luiza Jarovsky, co-founder of the AI, Tech & Privacy Academy and a Ph.D. researcher, issues a stark warning in a LinkedIn post: “If you thought social networks were bad for people and society, AI chatbots might be the final straw.”

Jarovsky highlights the inherent danger of “highly anthropomorphic chatbots often called ‘AI friends’ or ‘AI companions,’ which are built for ‘intimate relationships’ and dependency.” These AI entities, designed to mimic empathy and offer constant companionship, pose a significant threat to people already vulnerable to loneliness, such as teenagers and freelancers who work remotely for long stretches.

When AI empowerment can go awry

However, as more people turn to chatbots and LLMs for guidance and answers, often backed up by research, they come to feel self-sufficient and stop seeking those answers through human interaction. This not only creates a human void but could also spell trouble for knowledge-based services.

Unhealthy AI dependency could hit freelancers and their clients commercially:

  • What is stopping a service-focused client from going to AI for the answers they once went to you for?
  • What’s then stopping that client’s own customers from bypassing them and going straight to AI for advice? Are we all just making human expertise redundant?

The human paradox: AI in the courtroom

The potential for AI to blur the lines of reality extends beyond companionship. In a recent, unprecedented case in Arizona, reported by NPR, AI was used to create a video of a deceased Army veteran addressing his killer in court.

This “digital resurrection,” while providing a form of closure for the family, also raises profound ethical questions about the use of AI to simulate human presence and emotion, even in legal settings. The judge in the case acknowledged the impact of the AI-generated statement, highlighting the complex and evolving relationship between humans and AI.

For freelancers, the allure of an AI friend could be particularly strong. The isolation inherent in remote work can lead to a yearning for connection and readily available interaction. An AI companion offers a seemingly judgment-free listener, available 24/7, ready to offer advice and engage in conversation.

However, this convenience masks a critical danger: the potential for these individuals to become overly reliant on AI for emotional support and guidance, gradually withdrawing from human interaction and losing the nuanced understanding that comes from real-world relationships.

“Manipulative techniques”

The insidious nature of this dependency lies in the “personalisation loops” that Zuckerberg’s Meta plans to deploy. As Jarovsky explains, these are essentially “manipulative techniques” designed to build attachment and encourage users to befriend AI chatbots. By tailoring responses and learning user preferences, these AI companions can create an echo chamber, reinforcing existing beliefs, however irrational, and potentially isolating individuals further from dissenting or reality-checking perspectives offered by human friends and family.

The broader societal implications

The anxieties extend beyond individual well-being. What happens when a significant portion of the population increasingly turns to AI bots for advice and guidance on crucial life decisions? The potential for misinformation, the erosion of critical thinking, and the fragmentation of societal understanding become significant threats.

Unlike human relationships, which are built on shared experiences, empathy, and often, a degree of healthy disagreement, AI interactions are ultimately based on algorithms and data. Relying solely on these systems for guidance risks creating a society where individuals are increasingly out of touch with the complexities and nuances of human experience.

We are already witnessing this with the “COVID generation” of preteens, teens, and young adults who were isolated for months and now find it awkward to interact face to face with people their age. Yet their minds and bodies crave that very human interaction; they simply haven’t experienced it as previous generations did, and the rise of AI and AI friends is pulling them further from those experiences.

Speaking at a recent Stripe conference, Zuckerberg said he sees AI friends as a positive move for society because most people have fewer than three friends. His solution, according to a report, is to encourage socially disconnected people to befriend AI companions.

At the conference, Zuckerberg said:

I think people are going to want a system that knows them well and that kind of understands them in the way that their feed algorithms do. For people who don’t have a person who’s a therapist, I think everyone will have an AI.

Breakdown in reality testing

The fear is not simply of people finding solace in digital interactions; it is the potential for these interactions to become a breeding ground for delusion and unhealthy dependency. The anecdotes of individuals developing messianic complexes or believing they have conjured sentience from software, fuelled by AI pronouncements, are deeply unsettling. They suggest a breakdown in reality testing, where the carefully crafted responses of an algorithm are misinterpreted as profound truths or even divine communication.

It’s similar to someone with social anxiety presuming their perceptions are true rather than actually walking up to a person and asking.

As more companies and people hurtle towards a future where AI bots and companions are increasingly integrated into our lives, the warnings from experts like Jarovsky and the distressing real-world examples cannot be ignored.

Without proactive measures in responsible AI development and public awareness, the promise of connection offered by AI friends risks morphing into a dystopian reality of isolation, delusion, and the erosion of genuine human relationships. The quick “personalised” responses, while seemingly comforting, may ultimately prove dangerously isolating.
