“Don’t bother using AI — I’ll catch it” is a sentence I’m sure you’ve heard from your professors at some point in high school and college. It’s bullshit.
The toupée fallacy holds that we assume all toupées look fake because we only notice the obvious ones. Likewise, professors assume all AI essays are bad because the bad ones are the only ones they catch. When we turn to the numbers, however, it becomes clear: AI is wearing some pretty phenomenal toupées.
In 2023, researchers at the University of Reading in the United Kingdom conducted a study in which they created fake student profiles and submitted AI-generated work without the professors’ knowledge. They found that 97% of the AI-generated submissions went undetected by professors. A simple solution seems to emerge: if professors are unable to detect AI, the College should mandate the use of AI detection tools. While that sounds like a solid approach, research suggests not only that AI detection tools don’t work, but that they actually do more harm than good.
OpenAI shut down its own AI detection tool in 2023 after a catastrophic launch. By the company’s own admission, the classifier correctly identified only 26% of AI-written text, leaving the rest undetected; OpenAI called its own tool “not fully reliable.” The U.S. Constitution, after researchers fed it through the AI detector “ZeroGPT” — which is not affiliated with OpenAI — showed up as 96.21% AI. Unless James Madison was working with a little more than quill and ink, it’s safe to say that AI detectors are simply unreliable; they produce both false negatives and false positives. The false positives — non-AI work flagged as AI — are especially brutal, carrying serious consequences for completely innocent students.
These detectors are not simply misfiring; they’re discriminating against non-native English speakers and Black students. Researchers at Stanford University conducted a study in 2023 in which they fed 91 Test of English as a Foreign Language (TOEFL) essays — essays written by non-native English speakers — to AI detectors and found that 97% were flagged by at least one detector. Furthermore, in a 2024 report by Common Sense Media, 20% of Black teens were falsely accused of using AI, double the rate of their white and Latino counterparts. AI detectors, and in this case, the teachers’ selective and prejudicial use of them, are exacerbating existing discipline disparities. These students did not use AI, and yet if a professor abided by the detector’s findings, the students would have faced disciplinary action, or worse, expulsion.
That’s not to mention the frontier of AI humanizers, which tweak the language in AI writing to dodge detectors — an integral step that any AI-using college student would be careful not to miss. A study conducted by Times Higher Education showed that ChatGPT-generated essays with an initial 100% AI detection rate showed up as 0% AI after ChatGPT was asked to “write like a teenager.” AI detectors are flashlights without batteries looking for a needle-sized toupée in a haystack. We need a new approach.
The stakes here are high — in a B+ median class, my grade can boil down to whether or not the kid sitting next to me is using Claude, software that doubles as a target identifier for the U.S. military and a “To Kill a Mockingbird” scholar. It’s unfair. What’s worse is that my classmate’s AI-generated essay will perform just as well as my human-written essay. A 2024 Oxford University study analyzed the grading of AI-generated and human-written essays and found that AI essays averaged a score of 60.46%, while human essays averaged 63.57%, a difference that was not statistically significant. The AI-using students also saved time and effort, which they could allocate to other courses and extracurriculars. The toupée looks good.
Statistics aside, any Dartmouth student can attest to AI’s toupée effect. I’ve watched Claude generate essays so strong that I questioned whether to hop off the opinion writing train while I still could. Out of curiosity, after finishing this article, I asked Claude to write this piece itself — I fear it was better.
Academics are allowed a certain hubris when grading and teaching courses — they’ve earned it, and we expect it. What is unacceptable, however, is turning a blind eye to a real problem that is hurting honest, hardworking students. Dartmouth recently announced its partnership with Anthropic, which will give all students access to Claude and force professors to operate under the assumption that every student has access to state-of-the-art AI tools. Thus, professors must get creative and bring ingenuity to their toupée detection strategy.
First, professors need to see for themselves what these tools are capable of. Load up Claude.ai and start feeding it prompts. Analyze its output — what voice does AI adopt? AI notably uses words like “moreover” as well as the distinguished AI em dash — a trend that has given me a whole lot of trouble. Second, professors need to adjust their curricula. If an online homework assignment can be done in class, pencil-to-paper, change the assignment. We now live in a world where the status quo is AI. If prevention is our goal, we must build a framework that makes using AI literally impossible. Most importantly, professors need to accept that AI is outpacing them. The numbers are clear: you aren’t catching the toupée, and neither are the detectors.
Some professors, especially in STEM, are indifferent to AI or even endorse it. In those cases, I urge professors and students to weigh the environmental and human sacrifices. Some AI data centers can use up to 5 million gallons of water per day, equivalent to the water supply of a town of between 10,000 and 50,000 people. AI data centers also disproportionately hurt low-income Black communities, contributing to higher asthma rates and more pollution-related deaths. If Claude still remains too enticing an option for your next coding project, toupée away. But for the majority of professors, I’m willing to bet that you see AI as an academic, existential and moral threat. I urge you to eliminate the possibility for toupées altogether.
Opinion articles represent the views of their author(s), which are not necessarily those of The Dartmouth.