Back in October 2025, I wrote a column about Evergreen AI in which I expressed concerns about student safety and the ability of an artificial intelligence to provide emotional support to Dartmouth students. When I wrote the column, I hadn’t even considered the labor implications of training Evergreen, and it seems that those in charge of the program have not either.
Artificial intelligence operates on a vast, invisible network of human labor. You might think of the data engineers at Anthropic and OpenAI, but they are only the most visible layer. Millions of people across the world currently work in data labeling and annotation, looking at images and tagging what they contain to create valuable training data for AI. You’ve almost certainly done this unconsciously yourself: Google has used its reCAPTCHA anti-bot service to collect millions of hours of unwitting human labor, harnessing your identification of letters, sidewalks and motorcycles to hone the accuracy of Google Maps and Google Books.
These data labelers, who essentially do reCAPTCHA-style work as a full-time job, often face terrible working conditions. In Africa and Southeast Asia, laborers frequently work up to 20 hours a day in so-called “digital sweatshops.” They sift through graphic, offensive content, including videos depicting murders, extreme violence and sexual abuse, work that has led to reported cases of PTSD. This is all in the name of protecting consumers of AI products from that same harmful content. These jobs sound miserable and horrifying, and they are the latest addition to a long history of developed market economies outsourcing undesirable labor to the Global South.
Evergreen employs students to write fake dialogues of 100-plus messages between a hypothetical Dartmouth student and Evergreen AI. The students who do this are higher-paid and better-treated data laborers, but data laborers nonetheless: both gig workers in the Global South and Ivy League students are providing high-quality data to train the machine. In fact, there’s a kind of chain of command operating in the development of AI. Multinational tech conglomerates and investors sit at the top, then come data producers in America, who are treated better because of legal obligations and protections, and then data producers in the Global South at the bottom. The data producers in the middle use AI to help produce their training data, and the AI they’re using was itself trained by people at the bottom who had no other option but to take that work. Now they have PTSD.
I tried to put myself in the shoes of someone sitting down to write one of these dialogues, and the task seemed extremely hard. That’s why it makes sense that employees would use AI to complete it, in violation of Evergreen’s policies. However, data scientists warn that there’s a major problem with this approach. If an AI is trained on entirely synthetic data, it will eventually degenerate, producing increasingly meaningless outputs. This phenomenon is called “model collapse.” There is growing concern that as more and more of the world’s content is produced by AI and we run out of original material, model collapse will be harder and harder to avoid.
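For readers who want to see the mechanism, here is a toy sketch of that feedback loop. It is purely my own illustration, not anything drawn from Evergreen or its training pipeline: a simple statistical model is fit to some data, new “data” is sampled from the model, the model is refit on those samples and the cycle repeats. Over many generations, the model’s memory of the original data’s variety withers away.

```python
# Toy illustration of model collapse (a hypothetical sketch, not Evergreen's system):
# repeatedly refit a one-dimensional Gaussian to samples drawn from its own previous fit.
import numpy as np

rng = np.random.default_rng(0)

n_samples = 200       # size of each generation's training set
n_generations = 300   # how many times we retrain on purely synthetic data

# Generation 0: "real" data drawn from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=n_samples)

for gen in range(1, n_generations + 1):
    # "Train" the model: estimate a mean and spread from the current data.
    mu, sigma = data.mean(), data.std()
    # The next generation sees only samples from the model -- no real data at all.
    data = rng.normal(mu, sigma, n_samples)
    if gen % 50 == 0:
        print(f"generation {gen:3d}: mean = {mu:+.3f}, std = {sigma:.3f}")

# The spread tends to shrink generation after generation: the model slowly
# forgets the diversity of the original data. That narrowing is the essence
# of model collapse.
```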
As more and more of our college’s resources and focus are dedicated to AI, I worry that we may undergo a collapse of our own. I worry we are already losing sight of learning.
In a recent column, Rohan Taneja ’28 argued that if AI can do something, it should. He cited a young man with no background in math who solved Erdős Problem #1196 with nothing but ChatGPT Pro and a single prompt. He went on to say that a skill worth building now is “judgment to evaluate what AI produces, know when it is wrong and push further when it is not enough.” It’s hard to argue with the example he cited, but I’m still left feeling empty. Is this really our future? Using human judgment to find problems solvable by AI, then tailoring special instructions to a machine that does the rest of the work? Am I totally irrational to feel that something is deeply wrong with that implication, and to want to stick with human-centered approaches to problems, even if it means less efficiency?
I still remember the first time I felt like I really learned something about life and the world. It was at Dartmouth, while reading “Capital, Volume I” by Karl Marx. That book was nearly the death of me. So much of it was dreadfully dull and confusing. But I trudged through section after section, and with help from in-class discussions, I began to truly understand it. It made me think about how the world operates in a way I never had before, and that fundamentally enriched my life. This sensation is the entire point of college. Really learning like this makes you a better and fuller person. It cannot be gained from an elaborate prompt or from producing fake conversations to train a machine. It isn’t efficient, can’t be optimized and frequently isn’t fun.
The everlasting quest for good data will only accelerate. Creating or collecting it might even be your job one day, as it is for the dialogue writers at Evergreen. I just hope we don’t forget to learn: not just about prompts, and not just in ways that ‘prepare us for an AI world.’ If we go down that road, Dartmouth will experience its own kind of model collapse.
Opinion articles represent the views of their author(s), which are not necessarily those of The Dartmouth.
Eli Moyse ’27 is an opinion editor and columnist for The Dartmouth. He studies government and creative writing. He publishes various personal work under a pen name on Substack (https://substack.com/@wesmercer), and you can find his other work in various publications.

