Eli Moyse ’27 is right about one thing: Learning is inconvenient. What is even more inconvenient, though, is changing how we learn, and reckoning with what is still worth learning in the first place.
Moyse’s concern about a Dartmouth professor unintentionally uploading student work to Claude is legitimate. That was a mistake, and it should not have happened. However, the lessons he draws from it — that artificial intelligence is integrating into academia too fast, that we need to slow down and protect the hard inconvenience of reading boring books and grinding through problem sets — mistake a single incident of poor judgment for a systemic problem. The professor used a tool badly, but that is not an indictment of the tool.
My position is blunter: If AI can do something, it should. Not reflexively, and not as a substitute for understanding, but as the default approach of a rational person trying to work well. The skill worth building is not the ability to perform tasks AI can do — it is the judgment to evaluate what AI produces, to know when it is wrong and to push further when it is not enough. That is a harder skill than completing problem sets, and we are not teaching it.
This matters more now than it did even a month ago. Last April, a 23-year-old named Liam Price, with no advanced mathematics training and just a ChatGPT Pro subscription, entered Erdős Problem #1196 as a single prompt to GPT-5.4 Pro on an idle Monday afternoon and received a valid proof of a conjecture that had resisted mathematicians since 1968. The solution was not a retrieval of some obscure paper but an application of the von Mangoldt weight approach, a method that simply had not been tried. UCLA mathematics professor Terence Tao, who reviewed the proof, called it a new way to think about the structure of large numbers. Price’s own account of what he understood showed the absurdity of the moment: “I don’t even know what this problem is. I just occasionally work on Erdős’ problems and throw them to the AI to see what results I can get.”
The obvious rejoinder is that Price learned nothing, but that misses the point. He identified a problem worth solving, got a result and had enough sense to get it verified. That is a workflow, and it is increasingly the one that matters. The alternative — spending four years building the technical background to attempt Problem #1196 from scratch — produces a different kind of person, and that person is not obviously more useful. Moyse wants to protect the inconvenience of learning as if the inconvenience were the point. Sometimes it is. Usually it is just an inconvenience.
Moyse’s sharpest claim is that Dartmouth’s purpose is the stewardship and maintenance of knowledge. Agreed. However, stewardship does not mean preservation of method; it means preservation of rigor. The question is not whether students should suffer through problem sets, but whether they can evaluate a proof, identify a flawed assumption and know when a tool has led them somewhere wrong. If a skill genuinely requires demonstrated human mastery, and many do, test it in an exam room without a laptop. Reading books, thinking analytically, writing with precision and constructing an argument without a scaffold are not skills AI replicates well, and they are worth protecting for that reason, not out of sentiment. Deciding which skills those are is the hard institutional work Dartmouth has not publicly done. That is the conversation worth having, not whether one professor’s Claude experiment went sideways.
The administration does not need to slow AI integration. It needs to get precise about what it is actually trying to cultivate. A Dartmouth student who can direct AI toward hard problems, interrogate what comes back and build on it is more prepared for the world than one who can replicate tasks the AI will do anyway. Treating every instance of AI use as an academic shortcut is not intellectual seriousness. It is nostalgia with a syllabus attached.
Opinion articles represent the views of their author(s), which are not necessarily those of The Dartmouth.