This article is featured in the 2025 Homecoming Special Issue
The College’s current artificial intelligence policies largely grant professors the discretion to set the rules in their classrooms.
The primary policy governing undergraduate academic AI use is the Office of the Provost’s Guidelines On Using Generative Artificial Intelligence For Coursework. According to special advisor to the provost James Dobson, the statement says that students cannot use generative AI for coursework unless “explicitly” permitted by their professor.
“It provided a statement with a sort of blanket,” Dobson said.
However, Dobson said that the policy is updated and fleshed out as the technology develops. For example, he said that he made a “revision” to this policy this fall to “clarify” that students can use AI “as a study aid.”
Dobson also revised the policy this fall to state that students should normally disclose, rather than cite, their use of AI in coursework.
“Generative AI is stochastic,” he said. “It’s random. It’s going to generate slightly different outputs every time. So let’s not equate generative AI with a citational reference.”
Dobson emphasized that this policy aims to respect instructor autonomy.
“We leave everything up to the instructor,” he said. “Even in different classes, you might allow people to use generative AI for different things in different ways.”
At the departmental level, many of Dartmouth’s undergraduate departments support professors’ discretion by adopting few, if any, policies of their own on AI use.
In an email to The Dartmouth, government department chair Lucas Swaine wrote that he asked faculty to include a “clear, written AI policy on each of their syllabi.”
“Faculty in our department have [the] prerogative to decide what sort of AI policy they would like to employ,” Swaine wrote. “I have not promoted or suggested any view on whether, or to what extent, they should allow AI use in their courses. What is important is for faculty to lay out what is and what is not permitted in the courses that they instruct.”
As director of the Writing Program, Dobson is also implementing an AI literacy component in first-year seminars to help students understand AI’s benefits and downsides. The initiative asks professors to develop their own plans for teaching about AI in their seminars.
“I’m asking every instructor of the first-year seminars this year, starting in the winter term, to add some kind of AI literacy components to their courses and give a big, long list of potential activities that might be [used],” Dobson said.
In interviews with The Dartmouth, professors said they take advantage of the flexibility in AI policies in a variety of ways. For example, engineering professor Rafe Steinhaur recently implemented a creative project on AI in his ENGS 12: “Design Thinking” class.
“Our final project is always some broad topic that is using human-centered design to improve undergraduate student experience at Dartmouth,” Steinhaur said. “This past winter, we did AI.”
“One of my favorite projects was a team that… imagined a student who got a concussion could email the professor, and they mocked up a toggle button for the professor to click on for the student, and AI would then send a summary of lectures and what happened in class,” he said. “And it would enable all these features for a student to stay up to date for those couple weeks, and then the professor could toggle it off.”
This past spring, computer science professor Nikhil Singh had students in his class COSC 89.34: “Human-Centered Generative AI: Foundations and Applications” work with AI to reflect on its uses.
“I asked them to interact with a generative AI system of their choosing, and then use either the conceptual frameworks or the technical knowledge that was embedded in prior weeks’ readings and discussions and lectures,” he said.
Engineering professor Petra Bonfert-Taylor pointed out that AI has the potential to undermine academic integrity.
“Some classes have very strict AI policies, and then some students adhere to those, and then some students don’t,” Bonfert-Taylor said. “That’s to the detriment of those who do adhere to the policies — because, all of a sudden, students can hand in these perfect solutions to assignments because they use AI.”
She said that the College should adopt teaching strategies that circumvent this problem.
“We’re working on creating, brainstorming ideas that really focus on the learning,” she said.
Steinhaur emphasized the need for a bottom-up approach to modifying AI policies at Dartmouth.
“There probably aren’t going to be policies that dramatically remove that discretion from professors in the next couple of years,” he said. “The best approach to thinking about AI policy is going to be, how do we engage our colleagues in these discussions?”