The Dartmouth
December 5, 2025

Nelson: Dartmouth must do more to integrate AI into the classroom

We think of AI as mostly a cheating tool and an academic disturbance. We aren’t ready for what is coming.

Artificial intelligence is an issue that lingers quietly at the back of our minds, an unspoken discomfort that many of us carry. As college students, we have experienced the advent of artificial intelligence models and witnessed the breathtaking pace at which they have advanced. Some of us may have benefited from these large language models’ impressive talent for completing assignments. But beneath the convenience lies the growing anxiety that artificial intelligence will reshape societies and markets in ways that we do not yet understand. Dartmouth must more proactively integrate AI into the classroom. 

Some predict mass unemployment, expecting corporations to replace costly Dartmouth-educated workers with inexpensive AI alternatives. Just last week, the CEO of Anthropic, a leading AI firm, claimed that unemployment could reach 20% within the next few years as companies automate workflows. In a recent World Economic Forum report, 41% of employers surveyed said they planned to downsize their workforce because they could use AI instead. 

Others are more optimistic. Historical precedent suggests that job growth from technological advancement tends to outweigh the jobs displaced. Another WEF report forecasts a positive trend in employment, with 78 million net new jobs by 2030 as a result of AI. For Dartmouth students navigating a complex job market, the challenge is clear and the stakes are high. 

The growing uncertainty is not just theoretical — it already impacts how we learn. A report by the Digital Education Council found that 86 percent of college students use AI in their studies, with the majority using it frequently. One place where I saw this trend firsthand was in the course ECON 28: “Public Finance and Public Policy,” taught by Professor Erzo Luttmer. A portion of the course is centered around a series of short essays that require specific economic reasoning skills. On these assignments, about half of students reported using AI — which was permitted — to help with the content of their essays. Each week, to supplement grades given by the professor, these essays were anonymized and peer graded by several classmates. The teaching assistants also submitted several fully AI-generated essays into the grading pool to keep us on our toes.

On average, the AI-generated essays in my grading pool — seven essays out of the 35 that I graded — scored 3.5 points above the mean student grade. The AI essays performed best relative to students on the few essays that were written in class, when students did not have access to AI tools and had to write under a time constraint. Adjusting for these instances, the seven AI-generated essays still beat the student mean by one point, suggesting that today’s LLMs are roughly as good at complex applied economic reasoning as Dartmouth economics undergraduates. 

This is not to say that these models are brilliant. In most cases, their arguments were clear and accurate, but mechanical and easy to identify. Still, the LLMs that many rising freshmen, sophomores and juniors toyed with as high schoolers — remarkable at the time — now seem primitive, and the speed with which these tools have progressed is staggering. We can expect this pace to continue. As ChatGPT told me as I was writing this piece, “sure, I can’t write your econ paper yet … but give me two years.” 

Academic communities like ours largely think of AI as a cheating tool and anathema to academic integrity. We ask what AI can do for us — write this essay, code that program — but we do not ask whether we will be prepared for the world that we are going to graduate into. AI will soon become much more than an optional tool; it will become a fundamental feature of the modern world. Are we doing enough to adapt to the rapid changes around us? 

Dartmouth has adapted to technological advancements before, and it must again. We developed the blitz email system in 1987 and began requiring every student to own a computer in 1991. These weren't just conveniences; they helped prepare students for the reality of the society they would soon join. It is time that Dartmouth faces reality and takes AI seriously. Doing so requires expanding our liberal arts mission to include an education in AI. This means treating AI not merely as a professional tool, but as a critical piece of societal infrastructure. More courses should be designed like ECON 28, where AI is permitted — and perhaps even encouraged — to aid in our learning. Departments should develop courses that investigate the roles AI may play in their disciplines, teaching students to work alongside these tools to expand their impact. AI literacy should be promoted across all departments. 

At the very least, we need to start a conversation — one that will begin a movement towards engagement and advancement, preparing Dartmouth students not just to succeed in our uncertain future, but to lead in shaping it. 

Guest articles represent the views of their author(s), which are not necessarily those of The Dartmouth.
