Dartmouth has moved quickly to respond to increasingly widespread use of artificial intelligence. The College has issued guidance on the use of generative AI in coursework, created faculty teaching resources and made approved AI tools available to students, faculty and staff. That deserves credit. But when it comes to actual coursework, Dartmouth has largely left faculty to determine for their own classes the rules of what students can use, when and how. The College’s current guidance states that “the instructor’s GenAI policy defines the expectations” for a course.
That approach had a logic to it early on, when colleges were responding quickly to a new technology and trying to preserve faculty discretion. But it has since worn thin as AI tools have become a regular part of academic life and course expectations remain highly variable. Dartmouth’s own teaching resources now provide guidance for instructors on incorporating generative AI into courses.
The current policy gives instructors authority to set AI rules for their courses and requires students to disclose AI use in accordance with course-specific policies and the Academic Honor Policy. Both principles are sound. A computer science course and a writing seminar should not operate under identical rules. The problem is not the principle of faculty discretion. The problem is what happens when students face different standards from one course to the next.
Right now, the rules can change every term. Can a student use AI to outline a paper? To fix grammar? To talk through an argument before writing it? Ask around and students may hear different answers from different professors, or none at all. That ambiguity falls hardest on students who actually try to follow the rules, while doing little to stop those who do not.
It burdens faculty unnecessarily as well. Professors should not have to spend office hours adjudicating whether a student’s outline relied too heavily on ChatGPT or whether editing crossed into authorship. That is time and energy pulled away from teaching.
Dartmouth does not need to ban AI, nor should it outsource oversight to individual syllabi. What it needs is a common framework of baseline categories that tell students where they stand before they open a document: assignments where AI is off-limits, assignments where limited help is permitted and assignments where use is allowed with disclosure. Within that structure, faculty would keep control of their classrooms, and students would have a consistent starting point instead of having to decode fine print every ten weeks.
The Academic Honor Policy already treats unauthorized AI use as misconduct. Yet what is unauthorized often turns on course-specific rules that differ in substance and clarity. Students cannot be held to a standard they cannot reliably identify.
Dartmouth has shown it can think carefully about AI. It should extend that thinking to the classroom — not with a ban, not with a free-for-all, but with expectations clear enough that students and faculty are not left guessing.
Dartmouth should not leave academic integrity to a maze of syllabus fine print.
Opinion articles represent the views of their author(s), which are not necessarily those of The Dartmouth.