The Dartmouth
January 13, 2026 | Latest Issue

College announces AI partnership with Anthropic, company accused of plagiarizing Dartmouth professors’ publications

The partnership with technology companies Anthropic and Amazon Web Services, announced last month, has prompted backlash from professors involved in a now-settled class action lawsuit against Anthropic for copyright infringement.

Last December, the College announced a partnership with Anthropic and Amazon Web Services, making Dartmouth the first Ivy League university to deploy artificial intelligence at an institutional scale. The Dec. 3 announcement has drawn criticism from some faculty members, including claimants in a class action lawsuit against Anthropic for allegedly infringing their copyrights and unethically downloading their publications to train its large language model Claude.

More than 130 College of Arts and Sciences faculty members have publications named in the lawsuit, including College President Sian Leah Beilock. On Sept. 5, Anthropic agreed to a settlement of $3,000 per work, for a landmark total of $1.5 billion, the largest copyright settlement in U.S. history. The final approval hearing is scheduled for April. The College declined to comment on Beilock’s eligibility for the settlement class.

Latin American, Latino and Caribbean studies professor Matthew Garcia, a claimant in the suit, said administrators “have not been forthright” about the terms of the new AI partnership with affected faculty members. 

“They have not honored shared governance, that the College is managed by the administration and the faculty together,” Garcia said.

In response, associate vice president for communications Kathryn Kennedy wrote in an email statement to The Dartmouth that the College announced the partnership at a faculty meeting in the fall.

“The partnership with Anthropic and AWS was first announced by Provost Santiago Schnell during the Nov. 10 general faculty meeting, during which multiple faculty in attendance voiced their perspectives in response,” Kennedy wrote. 

The partnership will provide students with access to Anthropic’s Claude for Education, a large language model tailored to help students “draft” papers and “work through” homework problems, according to Anthropic’s website. 

Computer science professor Sarah Preum said she believes that it is important to ensure students and faculty are “AI-ready and AI-natured.” 

“Me and other professors in the computer science department try to make students aware of the opportunities of AI and also potential blind spots AI might have,” Preum said. “I asked [students] to solve the same problem, but by prompting an LLM, and then students are asked to compare the performance of this model [with their code].”

Preum added that AI can be used to optimize learning materials to “certain aspects of students’ learning and their preferences.”

Through the partnership, the College itself will gain access to Amazon Bedrock, an AWS service for building AI-powered software using machine-learning models. The College declined to comment on what types of applications it would create using Bedrock.

Dartmouth will also gain access to AWS’s Skills to Jobs Tech Alliance, which is used by collaborating employers — including Bank of America, Deloitte and T-Mobile — to source talent in cloud support, software development and data integration. In a press release about the partnership, AWS wrote that its Digital Innovation Team will work directly with Dartmouth to increase “efficiency” in campus operations with institution-specific AI resources.

The partnership offers Dartmouth and Anthropic the opportunity to position themselves as “leaders and innovators in the generative AI space,” Tuck School of Business marketing professor Praveen Kopalle wrote in an email statement to The Dartmouth. 

“The solutions an AI platform recommends are only as good as the data used to train the model,” Kopalle wrote. “... Anthropic needs to differentiate itself and one point of differentiation is to focus on the education segment. By partnering with Dartmouth, Anthropic will get access to real time, actual and non-simulated data in the education space so that it can offer much better recommendations and solutions relative to its counterparts.”

Kopalle added that AWS’s services offer Dartmouth and Anthropic resources to store the data that they will collect. 

“By partnering with two other world class organizations, AWS has only upside to look forward to,” Kopalle wrote.

Anthropic already has agreements to provide Claude for Education to students at Northeastern University, the London School of Economics and Champlain College.

History professor Bethany Moreton, a claimant in the class action suit, said she thought the College’s decision to partner with Anthropic, a company that “ripped off the published scholarly work of [the College’s] own faculty,” was “ironic.”

“We are faced with an agreement that has been struck beyond our ability to even know about it, let alone have any impact on it, and we’re scrambling to deal with the fallout at the level of classrooms,” Moreton said.

AI “interrupts the slow, effortful and inefficient process of learning how to think,” Moreton added.

The College convened a Faculty Leadership Group on AI in the fall, according to Kennedy. The group will “define a principled, evidence-based strategy for where AI can meaningfully accelerate Dartmouth’s mission, and where deliberate restraint is essential to safeguard our values and pedagogy,” she wrote. Members of the group did not respond to multiple requests for comment. 

Art history professor Mary Coffey, another claimant in the class action suit, said her understanding “from members of that committee” is that the College brought the announcement to the Faculty Leadership Group “very late in the process, after the decision had already been made.”

“The only input that committee had in the decision was they helped the College shape the message,” Coffey said. 

In an interview with The Dartmouth, Schnell said that he conducted a survey on faculty opinions on AI last May, before the partnership began “moving” in earnest behind the scenes in July.

About 60 percent of respondents to the survey said it was important for students to learn how to use generative AI effectively “in the next five years,” according to a report from the office of the Provost. Another 20 to 25 percent were “cautious in their approach but open to finding ways to use GenAI.”

A majority of respondents shared that they were “highly concerned” about the possibility of generative AI inhibiting critical thinking and deep understanding, promoting misinformation, replacing close reading and undermining academic integrity and writing, according to the report.

Schnell said the AWS software acquired through the partnership will allow the College to lead change on the “ethical use of AI” through “guidelines” on use of AI by students and faculty.

“The time has come for us,” Schnell said. “We were the place where AI was born and the place where AI can be reborn for the future of education and research.”

Over the past year, the College has entered the AI space with ventures including Evergreen — the world’s first campus wellness AI — and pilot programs of AI literacy content in first-year seminar courses. Some professors are also experimenting with AI-proof writing assignments, while others are embracing AI’s role in the classroom. 

Government professor Sean Westwood, who conducts research with and about AI, said he believes the partnership is a step in the right direction to prepare Dartmouth students for their future careers.

“We could certainly try to pretend that AI is not going to be a requirement for future jobs,” Westwood said. “But I think that would be irresponsible.” 

He added that AI can create a more “equal playing field” in education and that schools like Dartmouth must lead the charge to teach students how to use AI ethically, with “principle” and “safety” in mind. 

The syllabus for one of Westwood’s courses this term, Quantitative Political Analysis, includes learning how to use Claude as a software programming assistant.

English professor Aden Evens, who was also in the settlement class for the Anthropic lawsuit, wrote in an email statement to The Dartmouth that “something is deeply wrong when the company that steals without acknowledgement the writing of my colleagues and myself is the same company that the Dartmouth administration hires as a ‘partner.’”

“On the other hand, I do not simply oppose generative artificial intelligence and I expect that the very notion of intellectual property (and related issues like copyright) will shift dramatically as GenAI continues to grow,” Evens added. “The uncredited use of an author’s book to train an AI may have a very different resonance a few years from now.”

Art history professor Katie Hornstein, another member of the settlement class, wrote that the fact “that members of the faculty are part of the class-action lawsuit against Anthropic because their intellectual property was violated in an act-now, litigate later technological arms race … should have given our administration pause before signing a ‘non-exclusive’ deal with the company.”

“Doesn’t their violation of our intellectual property mirror the kind of dishonest practices that our academic honor code forbids?” Hornstein wrote. “I think it sends a mixed message to both faculty and students.” 

Government professor Stephen Brooks, another member of the settlement class, wrote that while he’s “glad Anthropic is being forced to at least pay a fine, [his] sense is that the amount was way too small.” 

“It’s a big problem that firms that misuse intellectual property face so little consequences for their actions,” Brooks wrote. “The same is true for firms that do not properly protect personal information, as with the recent data breach of Oracle that affected tens of thousands of people at Dartmouth.  Until firms face serious consequences for their harmful actions, they will continue to cut corners.”

History professor Darrin McMahon, another member of the settlement class, wrote that he has “worry” about AI’s “consequences for teaching, learning, research and intellectual copyright.”

“I well understand that AI is not going away, and that colleges and universities need to stay out in front of its development,” McMahon wrote. “But I wish there could have been more input from the faculty about the deal with AWS and Anthropic.”

Going forward, Garcia said he would like for the College to be more transparent with faculty members, students and alumni about the College’s partnerships. 

“It’s not that [the College] shouldn’t have a relationship with the corporate world — it just needs to be very transparent,” Garcia said.

According to the class action lawsuit’s public search engine for impacted works, Arts and Sciences professors whose publications are eligible for the settlement class include Susan Ackerman, Zahra Ayubi, Randall Balmer, Robert Baum, Sonu Bedi, Sian Beilock, Colleen Boggs, Stephen Brooks, Kimberly Juanita Brown, Danielle Callegari, Colin Calloway, Clifford Campbell, Nancy Canepa, Patrick Cavanagh, Michael Chaney, Alexander Chee, William Cheng, Jonathan Chipman, Paul Christesen, Henry Clark, Donna Coch, Mary Coffey, Ayo Coly, Sienna Craig, Benoit Cushman-Roisin, George Cybenko, Matthew Delmont, Jeremy DeSilva, Carolyn Dever, James Dobson, Mona Domosh, James Dorsey, Laura Edmondson, Tarek El-Ariss, Chad Elias, Steven Ericson, Aden Evens, Ezzedine Fishere, Mary Flanagan, Carol Folt, Nancy Frankenberry, Susanne Freidberg, Andrew Friedland, Jeffrey Friedman, Feng Fu, Veronika Fuechtner, Cecilia Gaposchkin, Matthew Garcia, Gerd Gemunden, Levi Gibbs, Reighan Gillam, Marcelo Gleiser, Lewis Glinert, Margaret Graver, Udi Greenberg, Lev Grinberg, Brooke Harrington, Susannah Heschel, Lynn Higgins, Katie Hornstein, Doug Irwin, Sergei Kan, Trica Keaton, Jodi Kim, Allen Koop, Lawrence Kritzman, Ronald Lasky, Clara Lewis, Peter Lewis, Eng-Beng Lim, Stefan Link, Jason Lyall, Christopher MacEvitt, Vicki May, Janice McCabe, Darrin McMahon, Edward Miller, Jennifer Miller, Bethany Moreton, Russell Muirhead, Monica White Ndounou, Mimi Thi Nguyen, Brendan Nyhan, Laura Ogden, Reiko Ohnuma, Thomas O’Malley, Annelise Orleck, Graziella Parati, Geoffrey Parker, Misagh Parsa, Nina Pavcnik, Donald Pease, David Plunkett, Brian Pogue, Daryl Press, Tracy Punshon, Israel Reyes, Roopika Risam, Lindsay Robertson, Daniel Rockmore, Julie Rose, Jeffrey Ruoff, Andrew Samwick, Analola Santana, O. Sami Saydjari, Jeff Sharlet, Jesse Weaver Shipley, Devin Singh, Jonathan Skinner, Sean Smith, Jonathan Smolin, Christopher Sneddon, Christopher Snyder, Robert Staiger, Steve Swayne, James Tatum, Melanie Taylor, Stephen Taylor, Amie Thomasson, Peter Tse, Roger Ulrich, Benjamin Valentino, Tor Wager, Ao Wang, Dennis Washburn, D.G. Webster, Jacqueline Wernimont, Lindsay Whaley, Thalia Wheatley, Charles Wheelan, Peter Winkler, William Wohlforth and John Zhang.

Iris WeaverBell ’28 and Kelsey Wang ’27 contributed to reporting. 


Jackson Hyde

Jackson Hyde '28 is a news reporter. He is from Los Angeles, Calif., and is majoring in Government modified with Philosophy.