The Dartmouth
January 27, 2026

Rempe-Hiam: Ditch Anthropic, Ditch Palantir

Dartmouth’s partnership with Anthropic is closer to Palantir than we should be comfortable with.

Dartmouth recently announced its partnership with Anthropic, an artificial intelligence company known for its large language model “Claude” and, more troublingly, for its budding relationship with Palantir.

The announcement comes as part of President Sian Leah Beilock and Provost Santiago Schnell’s campaign to make Dartmouth a leader in AI integration in higher education. While I take issue with the integration of AI into our academic experience as a whole, the College’s choice to collaborate with Anthropic specifically — a company that closely partners with Palantir to provide AI models to U.S. defense agencies — poses an existential threat to Dartmouth’s ethical values and the privacy of our students.

In 2024, Anthropic announced that Claude would be available for U.S. intelligence and defense agencies to use in Palantir’s AI operations platform, with the aim of streamlining the processing and analysis of large amounts of data. Suddenly, Anthropic, a company founded with “safety at the frontier” as its mantra, refashioned its terms of service to allow Claude to be used for “legally authorized foreign intelligence analysis,” “identifying covert influence or sabotage campaigns” and “providing warning in advance of potential military activities.” Since then, new contracts with Palantir and the U.S. Department of Defense have cemented Anthropic’s new direction: war and surveillance.

Dartmouth’s Anthropic partnership threatens to compromise student safety. Although the Beilock administration has promised “strict privacy standards” under the terms of the partnership, it has not provided the much-needed specifics of those privacy guarantees. The information we do have, however, suggests Anthropic will get access to private College data. As Tuck School of Business marketing professor Praveen Kopalle told The Dartmouth earlier this month, “By partnering with Dartmouth, Anthropic will get access to real time, actual and non-simulated data in the education space.”

This access raises the question: under Dartmouth’s new partnership with Anthropic, what specific guarantees do we have that our student activists are safe? What guarantees do we have that Anthropic won’t change its terms of service again in pursuit of more profit? What guarantees do we have that Anthropic won’t share our data with Palantir, which, through Operation Catch and Revoke, works with the Trump administration to identify students who express views critical of Israel on social media? Two years ago, 90 community members were arrested during a pro-Palestinian protest — will defense contractors have access to those students’ data? What about the data of international students, many of whom are already afraid to speak up about politics? Hanover recently rejected sanctuary city status; if Dartmouth gives our data away to Anthropic, our students may be in real danger.

Although the current agreements between Anthropic and Palantir don’t indicate that the two companies are sharing data, given the direction Anthropic is heading, a future collaboration isn’t off the table. Even if privacy agreements are in place, these large companies are notoriously bad at following the law — the ongoing lawsuits by Dartmouth professors against Anthropic for plagiarizing their research are a case in point.

Because Anthropic and Palantir work in defense, much of their collaboration is highly classified, leaving much to speculation. Some may consider my privacy concerns paranoid, but given the steadily encroaching authoritarianism of the Trump administration, we should be keeping our distance from companies like Anthropic and Palantir. How little we know about the company we’re partnering with is reason enough for concern — yet even what we do know is enough to tell us we do not want Anthropic on campus.

Anthropic’s connections to Palantir — a company complicit in the genocide of Palestinians — shed light on just how out of step Anthropic is with our institutional values. For example, Palantir’s Gotham technology — a “decision-making platform for AI-enabled operations” — has helped Israel wage its genocide in Gaza for years. Through “tactical command and control, visual intelligence, forensics, readiness and production,” Palantir’s technology has played a key role in enabling Prime Minister Benjamin Netanyahu, who is wanted by the International Criminal Court, to claim the lives of at least 64,000 children and 1,000 babies over the past two years. Any company that would partner with an organization like Palantir clearly does not share the values our university stands for. With the direction Anthropic is moving in, it isn’t a stretch of the imagination to picture it getting in on genocide, too.

Dartmouth’s Anthropic partnership is unethical, and the implications for our security and freedom of speech are daunting. If Anthropic continues on this trajectory of militarization, there’s no telling what this partnership might mean for our student activists and international students.

I’m disgusted that Dartmouth chose to work with a corporation aligned with the ongoing genocide in Gaza, the mass deportations of undocumented Americans and the daily surveillance of American citizens. President Beilock, ditch Anthropic. Ditch Palantir.

Opinion articles represent the views of their author(s), which are not necessarily those of The Dartmouth.