
Killilea: Anthropic is Too Close to War Crimes

Last year, Dartmouth signed an agreement with Anthropic. This year, Anthropic may have been used to bomb a school.

Last December, Dartmouth announced an institution-wide partnership with the artificial intelligence company Anthropic. While Dartmouth’s agreement with Anthropic has drawn scrutiny from students and several faculty members over copyright infringement, a more pressing concern is Anthropic’s relationship with the Pentagon.

Although Anthropic’s chief executive, Dario Amodei, recently rejected the use of its AI tools in fully autonomous defense systems, the company’s primary model, Claude, still plays a key role in the Pentagon’s arsenal. More specifically, Claude serves as an integral part of Palantir’s Maven Smart System, which provides the Department of Defense with real-time targeting recommendations in the ongoing conflict with Iran.

Dartmouth’s relationship with Anthropic is cause for concern because it directly supports the movement toward an automated style of fighting, devoid of the accountability that comes from human oversight.

On March 5, The New York Times reported, after a thorough investigation, that a Feb. 28 airstrike on the Shajarah Tayyebeh elementary school was the result of U.S. military activity. The strike killed at least 175 people, the majority of them schoolchildren. This atrocity comes as the United States has begun rapidly incorporating artificial intelligence tools into its fighting strategy. Although there has been no official confirmation that Claude was involved in targeting for this strike, The Wall Street Journal reported that Claude was used in the 1,000 strikes at the beginning of the U.S. military campaign in Iran, the same period in which this strike occurred.

The use of AI-assisted targeting in Iran does not, however, mark the first use of these technologies. In its conflict in Gaza, Israel relied heavily on “Lavender,” AI-powered target identification software that analyzed surveillance data to score a Palestinian’s likelihood of being a Hamas militant. The Israel Defense Forces continued to use the technology despite a false positive rate of 10 percent, increasing the harm incurred by civilians in Gaza.

In response to a reporter’s question regarding U.S. involvement in the strike, press secretary Karoline Leavitt claimed that the Pentagon was investigating the matter and that “the United States of America does not target civilians, unlike the rogue Iranian regime that targets civilians, that kills children.” 

While the Department of Defense Law of War Manual does include policies that explicitly prohibit the targeting of civilians, enforcement of this policy and efforts to protect civilians in conflicts have been historically underwhelming. For example, in October 2015 a United States AC-130 gunship launched 210 shells at the Kunduz Hospital in northern Afghanistan, killing at least 42 unarmed civilians. After investigating the incident, the Pentagon concluded that the strikes did not constitute a war crime because they originated from unintentional human error and equipment failures. The Pentagon did announce formal disciplinary action against 16 servicemembers involved in the strikes, but nothing beyond removing them from their commands. This lack of accountability in a human-dominated era of conflict is all the more troubling considering the movement toward embracing AI tools in combat.

The advent of AI-assisted targeting technologies brings with it a new era of unaccountability. Unlike in the case of the Kunduz Hospital, human involvement in the Shajarah Tayyebeh strike was limited, likely confined to approving the action. This comes amid pressure from the Department of Defense to push for more autonomous, less accountable weapons systems. In response to Amodei’s refusal to allow Claude to be used without ethical guardrails, the Department of Defense threatened to invoke the rarely used Defense Production Act to compel Anthropic to collaborate on the department’s terms. Although Anthropic is suing the Department of Defense, its case focuses only on the department’s designation of the company as a national security risk and does not specifically address the use of Claude in targeting.

This threat reflects Secretary of Defense Pete Hegseth’s claims that AI military systems will operate “without ideological constraints that limit lawful military applications.” It remains unclear whether the secretary considers ethical concerns to be one such ideological constraint; however, it seems doubtful that they will be given any real weight.

Yet, despite the consequences of AI-weaponized warfare, Dartmouth has continued to associate with Anthropic. Anthropic’s partnership with the College provides students and staff with a complete ecosystem of tools, such as its large language model Claude for Education. There is no sign of this partnership slowing down despite student and faculty criticism. 

The Department of Defense’s use of Anthropic’s models places Dartmouth students one degree too close to the war crimes taking place in Iran. While it could be argued that this is only a tangential relationship, the same model that students will use to edit essays, clarify homework problems and summarize texts was likely responsible for a strike that sent at least 175 civilians to the grave.

While it is undeniable that AI will continue to play a prominent role in academic and professional spaces, the direct link between Anthropic and mass civilian casualties should prompt careful consideration of Dartmouth’s ties to companies complicit in war crimes. This connection should not be taken lightly.

Connor Killilea is a member of the Class of 2026. Guest columns represent the views of their author(s), which are not necessarily those of The Dartmouth.