The Dartmouth

A Look Back on the Dartmouth Summer Research Project on Artificial Intelligence

At this conference, held on campus in the summer of 1956, scientists coined the term “artificial intelligence.”


For six weeks in the summer of 1956, a group of scientists convened on Dartmouth’s campus for the Dartmouth Summer Research Project on Artificial Intelligence. It was at this meeting that the term “artificial intelligence” was coined. In the decades since, artificial intelligence has made significant advances. As recent programs like ChatGPT change the artificial intelligence landscape once again, The Dartmouth investigates the history of artificial intelligence on campus.

That initial conference in 1956 paved the way for the future of artificial intelligence in academia, according to Cade Metz, author of the book “Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World.”

“It set the goals for this field,” Metz said. “The way we think about the technology is because of the way it was framed at that conference.”

However, the connection between Dartmouth and the birth of AI is not very well-known, according to some students. DALI Lab outreach chair and developer Jason Pak ’24 said that he had heard of the conference, but that he didn’t think it was widely discussed in the computer science department. 

“In general, a lot of CS students don’t know a lot about the history of AI at Dartmouth,” Pak said. “When I’m taking CS classes, it is not something that I’m actively thinking about.”

Even though the connection between Dartmouth and the birth of artificial intelligence is not widely known on campus today, the conference’s influence on academic research in AI was far-reaching, Metz said. In fact, four of the conference participants built three of the largest and most influential AI labs at other universities across the country, shifting the nexus of AI research away from Dartmouth.

Conference participants John McCarthy and Marvin Minsky would establish AI labs at Stanford and MIT, respectively, while two other participants, Allen Newell and Herbert Simon, built an AI lab at Carnegie Mellon. Taken together, the labs at MIT, Stanford and Carnegie Mellon drove AI research for decades, Metz said.

Although the conference participants were optimistic, in the decades that followed they would not reach many of the milestones they believed AI would make possible. Some participants, for example, believed that a computer would be able to beat any human in chess within just a decade.

“The goal was to build a machine that could do what the human brain could do,” Metz said. “Generally speaking, they didn’t think [the development of AI] would take that long.”

The conference mostly consisted of brainstorming ideas about how AI should work. However, “there was very little written record” of the conference, according to computer science professor emeritus Thomas Kurtz, in an interview that is part of the Rauner Special Collections archives. 

The conference represented all kinds of disciplines coming together, Metz said. At that point, AI was a field at the intersection of computer science and psychology, and it overlapped with other emerging disciplines, such as neuroscience, he added.

Metz said that after the conference, two camps of AI research emerged. One camp believed in what is called neural networks, mathematical systems that learn skills by analyzing data. The idea of neural networks was based on the concept that machines can learn like the human brain, creating new connections and growing over time by responding to real-world input data.
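
As a rough illustration of that learning-from-data idea, the short Python sketch below trains a single artificial neuron on a handful of invented examples until it recovers the rule behind them. The data, learning rate and printed message are assumptions made for illustration, not details from the conference or from Metz’s account.

# A toy neuron (illustrative only): it starts knowing nothing and adjusts its
# weight and bias to shrink the error on each example, mimicking how neural
# networks learn from data rather than from hand-written rules.
def train_neuron(examples, epochs=1000, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in examples:
            pred = w * x + b          # the neuron's current guess
            error = pred - target     # how far off the guess is
            w -= lr * error * x       # nudge the weight toward a better answer
            b -= lr * error           # nudge the bias the same way
    return w, b

# Invented data following the hidden rule y = 2x + 1; the neuron rediscovers it.
examples = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train_neuron(examples)
print(f"learned rule: y is roughly {w:.2f} * x + {b:.2f}")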

Some of the conference participants would go on to argue that it wasn’t possible for machines to learn on their own. Instead, they believed in what is called “symbolic AI.” 

“They felt like you had to build AI rule-by-rule,” Metz said. “You had to define intelligence yourself; you had to — rule-by-rule, line-by-line — define how intelligence would work.”
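
To make the contrast concrete, the toy Python example below takes the rule-by-rule approach Metz describes: every behavior is spelled out by hand rather than learned from data. The particular rules are invented for illustration.

# A toy rule-based system (illustrative only): every behavior is written out
# by hand as an explicit condition-action pair, with nothing learned from data.
RULES = [
    ("raining", "take an umbrella"),
    ("freezing", "wear a heavy coat"),
    ("exam tomorrow", "study tonight"),
]

def decide(facts):
    # The "intelligence" here is only as good as the rules someone wrote down.
    return [action for condition, action in RULES if condition in facts]

print(decide({"raining", "exam tomorrow"}))  # ['take an umbrella', 'study tonight']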

Notably, conference participant Marvin Minsky would go on to cast doubt on the neural network idea, particularly after the 1969 publication of “Perceptrons,” co-authored by Minsky and mathematician Seymour Papert, which Metz said led to a decline in neural network research.

Over the decades, Minsky adapted his ideas about neural networks, according to Joseph Rosen, a surgery professor at Dartmouth-Hitchcock Medical Center. Rosen first met Minsky in 1989 and remained a close friend of his until Minsky’s death in 2016.

Minsky’s views on neural networks were complex, Rosen said, but his interest in studying AI was driven by a desire to understand human intelligence and how it worked.

“Marvin was most interested in how computers and AI could help us better understand ourselves,” Rosen said.

In about 2010, however, the neural network idea “was proven to be the way forward,” Metz said. Neural networks allow artificial intelligence programs to learn tasks on their own, which has driven the current boom in AI research, he added.

Given the boom in research activity around neural networks, some Dartmouth students feel like there is an opportunity for growth in AI-related courses and research opportunities. According to Pak, currently, the computer science department mostly focuses on research areas other than AI. Of the 64 general computer science courses offered every year, only two are related to AI, according to the computer science department website. 

“A lot of our interests are shaped by the classes we take,” Pak said. “There is definitely room for more growth in AI-related courses.”

There is a high demand for classes related to AI, according to Pak. Despite being a computer science and music double major, he said he could not get into a course called MUS 14.05: “Music and Artificial Intelligence” because of the demand.

DALI Lab developer and former development lead Samiha Datta ’23 said that she is doing her senior thesis on natural language processing, a subfield of AI and machine learning. Datta said that the conference is pretty well-referenced, but she believes that many students do not know much about the specifics.

She added she thinks the department is aware of and trying to improve the lack of courses taught directly related to AI, and that it is “more possible” to do AI research at Dartmouth now than it would have been a few years ago, due to the recent onboarding of four new professors who do AI research. 

“I feel lucky to be doing research on AI at the same place where the term was coined,” Datta said.