This past week, Dartmouth announced the development of an app called Evergreen, a chatbot meant to, in the words of the College, “help students flourish by providing personalized guidance and support in real time.” The bot will be designed by a team of 130 Dartmouth students, who will put in a cumulative 100,000 hours to refine it. By the end of its development, Evergreen will be able to “speak like a Dartmouth student,” understanding campus slang and providing one-on-one counseling in moments of need.
Dartmouth’s announcement claims that Evergreen is the first of its kind, but I don’t buy it. There are already countless AI models billed as therapists, companions and panaceas for mental health problems. For these preexisting tools, scandals and tragedies abound. Take the multiple reports of chatbots telling young people to kill themselves, or the story of Adam Raine, a 16-year-old who confided in ChatGPT about his suicidal ideation and received advice on tying a proper noose.
Although I’m not a data engineer, it seems remarkably risky to put the mental health of thousands of students in the hands of a technology that’s younger than the people it’s meant to protect. You can institute all sorts of protections and guardrails and test it a thousand times over, but the bottom line is that Dartmouth students are smart. If they want to use Evergreen in a concerning or harmful way, they will find backdoors and exploits, inevitably leading to unsafe situations. Is a team of students going to be as good as or better than a professional team of developers at finding these exploits and plugging them? I doubt it.
More broadly, we have no idea what artificial intelligence is developing toward. There are already legitimate philosophical debates about whether it’s conscious, and a group of leading AI experts published a paper declaring that artificial intelligence may very well end the world. I’m not a Luddite, nor am I an alarmist about artificial intelligence. In fact, I use ChatGPT all the time, just not for therapy. Trusting AI with something as important and complicated as the minds and mental health of our fellow students is simply daft.
There is also a clear concern beyond AI’s botched therapy sessions. There are countless reports of human beings developing deep, long-term emotional bonds with artificial intelligence. These cases are just more entries in the growing body of evidence that technology is isolating us from one another. It’s easy to imagine a depressed or lonely student turning to Evergreen instead of reaching out to a fellow student, a counselor or any human being at all, and that, to me, is sad and terrifying.
The concern that people might develop personal relationships with Evergreen is bolstered by the claim that it will talk and act like a Dartmouth student. It would make more sense to build a chatbot that supports students the way a regular therapist might. Why does my therapist need to know what “Foco” means? The value proposition of this new technology clearly goes beyond mental health support. It’s something you might talk to about your campus problems or ask for study advice. In other words, the conversations one might have with a friend. Outsourcing even more of our human interaction to robots in this way will only make us sadder, more disaffected and more alienated from our peers and the real world.
Finally, there is a major data privacy concern. The announcement of Evergreen explained that the bot will be “context-aware” and learn from each interaction it has with a user. So, if a student tells Evergreen a piece of deeply personal information, will Dartmouth now have access to it as well? The announcement claims that this data will be stored and secured on internal servers. I don’t know about you, but I’ve heard that about 100 times in my life, and the last thing I need is for personal conversations with a supposed “therapist” to be leaked in a data breach.
Dartmouth is the most personal place in the world. That’s the very reason so many people love it. Evergreen is just another way this administration continues to water down community values. A true Dartmouth president doesn’t treat their own protesting students as thugs, doesn’t tirelessly chase media attention and doesn’t tell their students to turn to a robot if they feel lonely or sad. Actions like these are completely antithetical to the caring that makes this place singular.
Opinion articles represent the views of their author(s), which are not necessarily those of The Dartmouth.
Eli Moyse ’27 is an opinion editor and columnist for The Dartmouth. He studies government and creative writing. He publishes various personal work under a pen name on Substack (https://substack.com/@wesmercer), and you can find his other work in various publications.