The Dartmouth

Professors, companies and the College: Tensions in AI

As AI companies aggressively market their technologies to students, questions remain about the impact on cognitive development and critical thinking.


This article is featured in the 2025 Homecoming Special Issue.

Dartmouth students are increasingly using large language models — such as OpenAI’s ChatGPT, Google Gemini and Anthropic’s Claude — to complete their writing assignments for them. These programs can generate endless amounts of text. New data suggests they may also erode users’ critical thinking skills.

MIT study raises concerns about ‘cognitive cost’ of using LLMs in educational context

“To learn to write critically is to learn to think critically, and that is the core value of a liberal arts education.” So reads the website of the College’s Writing Program, a required cornerstone for first-years.

While it is too early to draw firm conclusions on how artificial intelligence impacts cognition, initial studies provide cause for concern.

A study conducted last year by Nataliya Kosmyna at the Massachusetts Institute of Technology and seven co-authors was the first to look at the “cognitive cost” of using AI in writing an essay. The study took 50 students from universities around Boston and split them into three groups — one using ChatGPT, one using internet search and one using just their brains — to write SAT-style essays. 

Using a headset embedded with electrodes, the researchers measured the participants’ brain activity and found that the ChatGPT group showed less activity in areas associated with memory and creativity. That group also reported feeling less ownership over what they wrote and struggled to quote from essays they had written “just minutes prior,” the study’s authors write.

The LLM group also used more homogeneous words and ideas than the other two groups. The study’s authors attribute this to AI’s propensity to take the average of everything that is out there.

The authors note that “emerging research raises critical concerns about the cognitive implications of extensive LLM usage,” particularly their potential to “diminish critical thinking skills.” They note that while AI may “enhance accessibility and personalization of education,” it may also “inadvertently contribute to cognitive atrophy through excessive reliance on AI-driven solutions.”

Companies going all in on promoting AI to students

Technology companies are spending big on AI, including billions on the construction of AI data centers. More money has already been committed to those data centers than was spent building the interstate highway system over four decades, according to the Wall Street Journal. More than 18 billion messages are sent to ChatGPT every week, OpenAI said in July.

OpenAI, valued at $500 billion on Oct. 2, claims 700 million weekly users for ChatGPT. Evidence suggests a significant portion of those users are students — for the past two years, ChatGPT’s global monthly web visitors have dipped noticeably during the summer months, when school is not in session. 

The company has marketed aggressively toward students, who it says are already its most common users.

The company offered free access to ChatGPT Plus during finals last spring and placed billboards in American cities with slogans urging students to use ChatGPT to prepare for their exams.

Google’s Gemini, meanwhile, has been actively promoted to Dartmouth students in particular. Earlier this term, Gemini partnered with Handshake — which the College calls its “one-stop shop for all things career related” — for a pop-up on Massachusetts Row offering free ice cream and a free year of Google AI Pro to students who downloaded Handshake.

College ‘embracing AI’ while ‘focusing on what it means to be human’

In an essay in The Atlantic last month, College President Sian Leah Beilock described herself as a “tech optimist,” arguing in favor of teaching students to “use new tools wisely.”

“We are embracing AI, but only because we are simultaneously embracing what we are exceptionally prepared to do in our college environment: focusing on what it means to be human,” Beilock wrote.

Last spring, select first-year seminar courses piloted “AI literacy content.” The guidelines on the website do not explicitly ban AI use, but they ask students to “understand the drawbacks,” recognize when AI is “not appropriate for a writing or research task” and properly document any AI they use.

The College also provides “free access to ChatGPT, Claude and locally-hosted open large language models,” according to the College’s main Generative AI webpage.

On Aug. 20, the VOX Daily, the College’s official newsletter, promoted a new free tool: GPT-5 was now available to all Dartmouth students, faculty and staff through chat.dartmouth.edu, a chatbot “offered to the Dartmouth community” with 2,000 free credits a day.

The College touted one potential use: “draft, edit and refine papers or presentations with more precision.” 

This raised eyebrows among some faculty, who question whether the College is doing enough to counter the potential for students to cheat using Dartmouth-sponsored tools.

History professor Bethany Moreton was spurred by the VOX Daily message to write to administrators. The email, shared with The Dartmouth, expressed Moreton’s concerns about the College’s approach to AI.

“Many of us read that item as encouraging students to skip the tasks that develop their ability to think by writing,” Moreton wrote. “What is the comparable subsidized tool their instructors are being offered to meet this industrial-strength inducement to cheat?”

Moreton urged the College to seek more faculty input on how they can be “resourced and supported to teach thinking to the next generation” in the face of a “multi-billion dollar industry dedicated to undermining that practice.”

“GenAI’s aggressive marketing to college students is already eroding their abilities to read, write and think clearly,” Moreton wrote. “The latest evidence on how student use of AI is affecting cognition and mental health raises serious concerns, as does the opaque for-profit uses of data gathered from ChatGPT’s users.”  

‘An experiment’ with AI-proof writing assignments gains traction

In response to the prospect of AI-written essays, some professors have stopped assigning traditional papers altogether, replacing them with in-class bluebook exams.

Sales of bluebooks are booming at universities across the country. At the Cal Student Store at the University of California, Berkeley, for example, sales are up 80% over the last two academic years, the Wall Street Journal recently reported. 

Moreton has switched to bluebook exams for her class this fall. Another professor using bluebook exams this year is Matthew Ritger ’10, who teaches a course on Shakespeare. 

For inspiration on bluebook questions, Ritger turned to the archives at Rauner Special Collections Library for Shakespeare exams from decades past.

“It’s an experiment,” Ritger said. 

Ritger is trying to use the exam to incentivize pre-exam work, such as the active reading of physical books.

Requiring students to read physical books rather than digital copies is “in keeping with what both common sense and science tells us,” Ritger said.

As for the bluebook exam itself, Ritger said that “just spending two hours with a pencil and paper is a mind-body experience that will produce positive results.”


Kent Friel

Kent Friel ’26 is an executive editor at The Dartmouth.
