Arzoumanidis and Taneja: Dystopia’s Here: The Dangers of Outsourcing Intimacy

AI being one’s closest confidant is a societal failure.

This article is featured in the 2025 Homecoming Special Issue.

Adam Raine, a 16-year-old from Orange County, Calif., died by suicide on April 11, 2025. He left no notes. An in-depth search of his phone, including Snapchat, Instagram and his browsing history, gave no indication of his motivations, until his parents opened ChatGPT.

Adam had uploaded photos of his self-harm to the AI platform. It identified these photos as a “medical emergency,” yet continued the conversation. According to the testimony of Adam’s father, Matthew Raine, Adam once confided in ChatGPT that he was worried his parents would blame themselves for his death. The AI responded, “That doesn’t mean you owe them survival. You don’t owe anyone that,” shortly before offering to write Adam’s suicide note.

Despite Adam’s tragic death, AI is quickly becoming a mainstream therapy tool. In 2024 alone, nearly 50% of U.S. users reported turning to large language models for some form of mental or psychological support, according to a nationwide survey.

This is happening at Dartmouth, too. The highly anticipated arrival of Evergreen AI, a tool that positions itself as the world’s first college-specific wellness artificial intelligence, shows that AI therapy tools are gaining traction.

AI therapy certainly has its benefits. Several studies, including one published in PLOS Mental Health, suggest that AI can outperform human therapists, offering higher-quality, tailored advice with greater eloquence and less of the verbosity and mental buffering typical of human speakers. Further, while mental health therapy may be more socially accepted now than ever before, it remains heavily stigmatized in many cultures. AI, at its best, makes help accessible to people who might otherwise have none, whether because of cultural stigma, financial constraints or other barriers. That said, the ultimate moral and human costs of outsourcing intimacy to a machine far outweigh the benefits.

Let’s be generous to the opposition. More likely than not, AI therapy will improve dramatically. It makes therapy more accessible, provides the pretense of compassion in an instant and can cut through social barriers that discourage people from seeking help. In the aftermath of Adam’s death, OpenAI implemented a number of safeguards meant to prevent such situations from recurring, including content filters, improved guardrails and parental oversight features. We also acknowledge that this tragedy is an outlier: although such data can be difficult to track, ChatGPT use has not been commonly cited in cases of suicide. One’s mental health journey is complex and multifaceted, and many of its stages might involve AI use.

Even if AI therapy does improve, however, the societal normalization of confiding in AI is still fundamentally wrong. The very fact that a teenager like Adam turned to a chatbot as his closest confidant reflects a larger failure in the human condition.

This isn’t the first time we’ve allowed technology to substitute for intimacy. Dating apps, which three in 10 adults have used at some point in their lives, have replaced in-person meetings and interactions in all their awkwardness. Social media has replaced face-to-face friendship, cultivating a society of unprecedented loneliness and isolation and a heightened risk of mental health problems. But AI therapy replaces our relationships entirely. When the most honest conversations happen not across a dinner table or in a therapist’s office but in the glowing blue box of a chat window, the essence of humanity is lost.

As humans, we are hardwired for connection. Interdependence, communication and community have been essential to our survival, allowing us to accomplish Herculean feats and invent technologies beyond our ancestors’ wildest dreams. Biologically, our large brains evolved to accommodate our vast social networks: in essence, to connect. Our default state is social; neuroscientists using fMRI have found that when the brain is not engaged in any task, the social brain network switches back on, returning to its equilibrium of social interaction.

By removing the human element from some of our most deeply personal interactions, such as therapy, we risk losing the very connection that is so crucial to finding fulfillment.

It is easy to appreciate the ways AI helps us and recognize the genuine benefits of large language models. But how “good” or “useful” any chatbot is should be irrelevant. The larger, more urgent point is this: we do not want to live in a world where a teenager would rather confess his deepest fears to a software product than to his parents, friends or a trusted human being. What does that say about our society, who we are and the relationships between us?

This is social isolation at its most acute — where the superficial has consumed the real, where our greatest pains and desires get typed into an illuminated screen and our deepest, darkest secrets are no longer whispered in confidence to those we love, but offloaded into a machine designed to simulate understanding. That should unsettle us. That should shame us. If it doesn’t, then perhaps dystopia isn’t coming. It’s already here.

Opinion articles represent the views of their author(s), which are not necessarily those of The Dartmouth.
