Arabian: Let’s Chat

The College should set clear guidelines for the efficient and ethical use of ChatGPT and similar tools in its classrooms.


This column is featured in the 2023 Winter Carnival special issue. 

Last November, OpenAI launched its fine-tuned natural language processing tool, ChatGPT. Originally intended to imitate human conversations, the tool has found a number of creative uses: ChatGPT can compose your emails, draft your essays and even code in multiple languages. It may be a bit trite these days to claim that artificial intelligence is changing the world, but for academia and its current consensus on plagiarism, change is undeniably coming. And the field is only in its infancy: In the last half-decade, companies have achieved major progress in nearly every aspect of artificial intelligence, with no signs of slowing down. Given the inevitability of this march forward, faculty at the College should consider incorporating ethical uses of artificial intelligence into their course syllabi — and even actively encouraging students to use it.

ChatGPT and other natural language processing tools are here to stay, regardless of how naysayers may feel about them. Anticipating the technology’s more nefarious uses, Princeton University senior and computer science major Edward Tian developed a tool named GPTZero to “quickly and efficiently detect” whether an essay was written by ChatGPT or a human. And, to his credit, GPTZero got it right about 98% of the time. However, desperate students are a highly creative bunch. Popular plagiarism-catching websites, such as Turnitin and SafeAssign, cannot detect the work of QuillBot, a state-of-the-art paraphrasing tool. Thus, people almost immediately began copy-pasting their responses from ChatGPT into QuillBot, reducing the success rate of GPTZero to, well, zero. The combination of ChatGPT and QuillBot presents a near-insurmountable challenge for administrators hoping to catch would-be cheaters.

During my tenure as editor of this opinion section, my philosophy with ChatGPT has been: “If you can’t beat ’em, join ’em.” Throughout the term, I have regularly used ChatGPT to anticipate potential counterpoints to arguments posed by our writers — and it has consistently made their articles stronger. Of course, ChatGPT is not itself an original thinker; rather, it uses its bank of roughly 175 billion parameters to generate responses based on the pre-existing text it was trained on. Thus, while it cannot articulate arguments of its own, ChatGPT is an excellent tool for surveying ideas previously articulated by real people. I will not hesitate to admit that the program even played a small role in anticipating counterpoints to this very submission — some of which I would not have addressed otherwise. In my capacity as editor, I have also used the tool to monitor for logical fallacies, structure long and multi-dimensional arguments and even spark inspiration for ideas I had not previously thought to explore. While far from perfect, ChatGPT is an undeniably powerful tool in the arsenal of any writer or editor. To deny its use within the classroom would deprive students of skills that have become increasingly necessary in the professional world.

The College must clarify its position on natural language processing tools and educate its students and faculty on their effective and ethical use. It is unclear today whether the Committee on Standards would consider, say, an essay written by a student from a ChatGPT-generated outline to be an instance of academic dishonesty. The technology is too new. While certain schools have either embraced or banned the program, there is not yet any “case law” for it at the College. An all-faculty meeting to build consensus around artificial intelligence in the classroom may be necessary to rectify this. As for the smaller nuances, professors may consider revisiting their course syllabi to specify how exactly they prefer ChatGPT to be used. This would not only give students some much-needed clarification regarding academic dishonesty but also offer them professional guidance on best practices.

To begin, as with any source, students must acknowledge their use of ChatGPT — especially if it generated any actual text for them. This should come as a surprise to no one. Writers typically acknowledge the sources they consulted in the references or bibliography of their papers, and this is essentially the same thing. In this case, writers may also consider mentioning ChatGPT in their acknowledgements, a section typically reserved for those who personally or professionally supported the writer. Finally, a third option would be to list the chatbot as an author itself; this should be reserved, however, for cases in which it wrote a significant portion of the paper, and only with explicit permission from the professor.

In addition to its ethical ambiguities, there is another danger for students using ChatGPT: The content it generates is often awkward and simplistic and would doubtless fail some of the more rigorous standards of academia. Of course, the fact that a computer can produce such content at all is impressive; however, in its current state, ChatGPT still seems to be leagues behind professors and students at the College. To test my intuition, I asked five of my (human) friends, then ChatGPT, to write a short story about someone climbing Mount Everest. I then showed the results to five other friends — and, sure enough, all of them were able to distinguish the ChatGPT-generated response from the rest on the basis of its poor writing. Thus, beyond the threat of academic dishonesty, students have other reasons not to allow ChatGPT to write for them without proper supervision. For their part, professors may remind students that the quality of their writing is ultimately their own responsibility.

Further, ChatGPT tends to produce arguments that sound plausible but are actually factually inaccurate or utter nonsense. It confidently tells users entirely false “facts,” like “former Apple CEO John Sculley was responsible for the iPod” — a product released almost a decade after he left the company. ChatGPT may cause those who do not detect this error to accept and eventually spread inaccurate information to others. A common reason for such inaccuracies is that ChatGPT is not currently connected to the internet. Therefore, asking it to do any sort of research or synthesis on current events is already a non-starter. However, even if you did this research yourself and fed it into ChatGPT, you still would not find it articulating any intelligent conclusions. After all, the program relies on those who have previously written on a topic; if the topic is too recent, it cannot play “the imitation game,” as Alan Turing — the namesake of the famous “Turing Test” to assess the intelligence of a computer — referred to it. Just as students would be skeptical of pulling factual information from certain websites, such as Wikipedia or Quora, they should think twice before over-relying on ChatGPT.

In 2022, the field of artificial intelligence expanded beyond what many, including myself, thought was even possible. ChatGPT and similar programs offer writers and editors the unprecedented ability to critique their own arguments, detect internal logical inconsistencies and structure their thoughts — all within seconds of hitting the keyboard. Used correctly, this is a valuable tool that, if trends hold, will only get better over time. However, this same tool could also unleash a flurry of ethical and professional crises into academia, from plagiarism to the propagation of inaccurate information. Thus, the College should devise a set of guidelines designed to get the most out of ChatGPT while avoiding its pitfalls — and train its faculty on them.

Kami Arabian is an opinion editor at The Dartmouth.