
Montgomery Fellow Cal Newport ’04 addresses public concerns about AI

During the Q&A section of the lecture, attendees shared questions about the trustworthiness of AI and how it may be regulated.

Photo by Katherine Lenhart

On July 12, computer scientist and Summer 2023 Montgomery Fellow Cal Newport ’04 gave a lecture titled “How Worried Should We Be About AI?” about the impacts of the latest innovations in artificial intelligence. Approximately 60 people, mostly from the Upper Valley community, attended the talk.

Newport recently wrote a piece for The New Yorker about how large language models work. His lecture drew on the article to explain how large language models such as ChatGPT function. Newport then addressed four principal concerns: Is ChatGPT alive? Does ChatGPT present a looming existential threat? Will ChatGPT take jobs? Will ChatGPT change jobs?

Newport opened the talk by clarifying that ChatGPT is not alive.

“When we have an interaction with ChatGPT, it can really seem like there is a consciousness on the other side,” he said. “But the architecture and operation of a large language model is incompatible with any common-sense notion of self-awareness or consciousness.”

Newport sought to calm fears that ChatGPT poses a looming existential threat to humanity, explaining that it is not capable of making autonomous decisions. Nonetheless, Newport said, there is a threat that people could use ChatGPT to produce effective disinformation or scams.

Attendee Josh Paul ’25 said he “was not expecting Newport to be so optimistic about AI.” Paul said he had assumed that large language models would have to be regulated extensively, almost to the point of being banned, because of their ability to spread misinformation.

Paul added that it was also “reassuring” to hear that AI would not “suddenly become sentient.”

Rich Brown, a former employee of Dartmouth’s computer services department, also attended the talk. Brown said he recently attended another discussion about AI at his local library in Lyme and left feeling curious about the trustworthiness of AI.

“I’m retired, but I play with computers,” Brown said. “I’ve asked ChatGPT for some silly things like making a webpage or writing a poem, but I know people who are using it for considerably more complicated work, like generating code.”

Going into the lecture, Brown said, he was worried about AI’s ability to produce false information, a concern Newport affirmed.

“Turns out you still have to test this stuff,” Brown said. “AI lies.” 

In response to the third concern, that ChatGPT will take human jobs, Newport said that large language models will not slash jobs because they are not good at “highly specialized tasks” and do not have a nuanced understanding of how to manage people or particular organizations.

“The most likely future is not that these productivity-enhancing language models are going to eliminate massive numbers of jobs, so much as they will change the nature of those jobs,” Newport said.

In response to the final concern, whether AI will change jobs, Newport said he believes the nature of knowledge-work jobs will shift. AI has the capacity to transform office work and introduce changes in academia, he said. However, he is not concerned that AI will upend the job market.

The talk ended with audience questions, which focused on AI’s creative capacity and its role in academia. Weighing whether ChatGPT should be banned in schools as a cheating tool or embraced as a pedagogical tool, Newport came down in favor of AI as a teaching aid, saying that ChatGPT cannot accomplish advanced levels of writing.

Large language models are not capable of “interesting human creativity,” Newport said.