The Dartmouth | April 18, 2024

Adkins: ChatGPT Threatens Personability and Credibility in Journalism

The replacement of human journalists with language-based AI is a frightening yet real prospect.

The last few months have been filled with conversations about ChatGPT, a language-based AI that answers user questions with detailed responses. Users can input requests ranging from “find me a recipe” to “summarize Titanic.” We all seem to be trying to understand this artificially intelligent chatbot while staying wary of its potential dangers. Though ChatGPT has many potential benefits, I argue that its use in journalism is flawed and even dangerous.

Arguments for using ChatGPT to aid fields like journalism seem intuitive: Why not make writing easier? Why not make it faster and more efficient? However, if ChatGPT begins selecting sources, drafting interview questions and even writing articles for us, then the integrity of journalism may be threatened. We’re already seeing this at some outlets, such as Microsoft’s MSN. While language-based AI may be effective at churning out content, its arguments are still vague, and there is no way to know whether the sources it draws from are credible.

ChatGPT may help journalists express their thoughts. Yet because ChatGPT answers questions with the most statistically likely response, drawn from data scraped from the internet, its information may be false or biased. In fact, when I played around with ChatGPT and asked, “how many countries start with the letter V?” it confidently assured me that there were only three: “Vanuatu, Vatican City, and Venezuela.” But what about Vietnam?

In addition, ChatGPT clouds credible sources, which may perpetuate distrust in the media. Dr. Lucy Kueng, senior research associate at the Reuters Institute at Oxford University, states, “The early crop of AI tools can’t distinguish between fact and fiction or overcome bias. Algorithmically generated news can quickly become misinformation at scale, and that misinformation can flow into the AI models, perpetuating it.” ChatGPT can thus further propagate fake news, or false information in the media, because of its inability to distinguish fact from fiction.

Beyond accuracy, my concern is that proper journalism relies on connecting with people, understanding their emotions and sharing their stories with a wider audience. Already, this is being threatened as companies automate journalists’ jobs. According to the BBC, “Microsoft is to replace dozens of contract journalists on its MSN website and use automated systems to select news stories,” as soon as June. Routine local journalism could be handed to an AI model, but such a model cannot accurately convey the gravity of events in the communities it covers.

While AI may be a more economically sound option for small-market news companies, the automation of journalism threatens its personability. If AI writes news stories, the link between those stories and readers’ lived experience of the events they describe will only weaken. How is an AI model expected to understand local events that may have a massive emotional or psychological impact on people? Because AI amalgamates information from so many sources, it generalizes emotions rather than fully capturing the points of view that most need to be highlighted.

I gave ChatGPT a fair shot at responding to my claims by presenting it with the first half of my article and asking, “What are the counterpoints to my arguments?” Here’s what it had to say: “Local journalism often faces economic challenges, and many communities are underserved in terms of news coverage. ChatGPT can help bridge this gap by providing automated coverage of routine local events and freeing up resources for journalists to focus on in-depth investigative reporting or issues that require human expertise. This can lead to a more comprehensive and accessible news landscape.”

When I read this counterargument, I struggled to see how AI would lead to a more “comprehensive” journalism landscape. If we rely on AI to produce journalism, we will miss out on the diverse perspectives that human journalists bring to the table, and on the firsthand knowledge of a person or community that lets a writer represent it faithfully. Additionally, automated information may go unchecked, and there is no one to hold accountable except the AI itself for any claims that may be false.

Furthermore, ChatGPT cannot conduct real-time, on-the-ground interviews, and it is therefore unable to capture a person’s fear, joy or disappointment in a historical moment. Highly regarded photojournalist and Dartmouth alumnus James Nachtwey has noted that journalism “doesn’t just record historical events, [it] shapes public thought in historical moments.” If the future of journalism is ChatGPT, we will say goodbye to stories that are deeply moving and grounded in the firsthand experiences of people. The real-time re-evaluation of opinions during interviews and the introspection that come with being a journalist will give way to the narratives ChatGPT has mapped out for us.

Another counterargument ChatGPT offered was that it “can serve as a tool for journalists to explore new narrative formats, experiment with storytelling techniques and engage readers in unique ways,” and “provide inspiration and generate fresh ideas that journalists can build upon and expand.”

I was a little put off by ChatGPT’s lack of specificity, and when I asked for examples, the response was even vaguer:

“By using AI, ChatGPT could help generate dynamic responses and adapt the story based on user input, creating a more immersive and personalized reading experience.”

However, the personalized experience ChatGPT promotes may allow users to entrench themselves further in their beliefs, insulated from any information that challenges them.

While ChatGPT claims that it “can engage with readers in conversational formats,” the examples it provided in our exchange did not answer my questions effectively and only solidified my doubts about language-based AI. Simply put, there was no conversation, merely a commitment to driving home inaccurate points with no accountability.

If we turn to ChatGPT instead of human journalists, we may lose context, human emotion and an institution that has fed public discourse since its inception. While journalism may seem mundane, it is the foundation of public opinion. It’s time to recognize that the replacement of human journalists is a frightening yet real prospect.

Opinion articles represent the views of their author(s), which are not necessarily those of The Dartmouth.