After a four-year study involving "Raiders of the Lost Ark" (1981), psychology professor James Haxby could probably quote the film word for word and image for image. Haxby used test subjects' reactions to "Raiders," in combination with other visual stimuli, to discover a "common code" for the way in which individual brains process visual information and translate images into perceptions, he said in an interview with The Dartmouth.
Haxby published his findings, titled "A common, high-dimensional model of the neural representational space in human ventral temporal cortex," in the Oct. 20 issue of the neuroscience journal Neuron.
Haxby, who began the project in 2008, has worked closely with psychology graduate student Swaroop Guntupalli and electrical engineers from Princeton University to examine the ways in which patterns in the brain determine people's visual perception, he said.
"When we look at the world, the visual images are transformed into a brain code, or patterns of activity that capture all the subtle distinctions we see between two objects or faces," Haxby said. "The question is whether we have the same mechanisms for translating visual images into patterns as others, or if we each develop our own systems so that we can communicate with one another, but in reality are having completely different experiences."
One of the study's major accomplishments was the discovery of a mechanism for translating each individual's brain patterns into a common code. That baseline allowed the researchers to determine which stimulus a subject was looking at based on the similarity of the subject's brain activity patterns to other subjects' patterns. In the past, analysis of visual perception had been hindered by researchers' need to "build analysis anew for each individual brain," because individual perceptions could not be compared, Haxby said.
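The idea of identifying which stimulus a subject saw from the similarity of their brain patterns to other subjects' patterns can be sketched as a nearest-neighbor comparison. This is a simplified illustration with synthetic data, not the study's actual analysis pipeline; the noise level, feature count, and correlation-based matching are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "common code": 7 stimuli, each described by 35 numbers,
# shared across subjects but observed with per-subject noise.
n_stimuli, n_features = 7, 35
shared = rng.normal(size=(n_stimuli, n_features))

def subject_responses(noise=0.3):
    """One subject's (noisy) response pattern to each stimulus."""
    return shared + noise * rng.normal(size=shared.shape)

# Average the patterns of nine reference subjects per stimulus.
others = [subject_responses() for _ in range(9)]
reference = np.mean(others, axis=0)

# A new subject whose stimuli we try to identify.
test = subject_responses()

def identify(pattern, reference):
    """Return the index of the stimulus whose reference pattern
    correlates best with this pattern."""
    corrs = [np.corrcoef(pattern, ref)[0, 1] for ref in reference]
    return int(np.argmax(corrs))

predicted = [identify(p, reference) for p in test]
accuracy = np.mean([pred == i for i, pred in enumerate(predicted)])
print(f"identification accuracy: {accuracy:.2f}")
```

With modest noise, matching each pattern to the most similar group-average pattern identifies the stimulus well above chance, which is the intuition behind between-subject decoding.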
Haxby first began to explore visual perception in 2007 and presented a preliminary version of his findings to the Society for Neuroscience in 2009. Since then, the study has evolved to include more test subjects and to refine the idea of a common code for perception, he said.
"In this paper, we reduce each possible image down to 35 aspects of the visual experience," Haxby said. "We get a single number for each of these aspects that is specific to the image in question, and that's the code for the way an individual perceives that image. The cool thing about it is that with this procedure we can get a set of parameters for each individual from the responses we record when they watch 'Raiders of the Lost Ark,' and once we have those responses we can convert the person's brain pattern into the common code."
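The notion of per-subject parameters that convert an individual's brain pattern into a shared 35-dimensional code can be illustrated with a toy linear mapping fitted by least squares. The actual method the team developed (hyperalignment) is more sophisticated; the dimensions, noise levels, and the unconstrained regression below are illustrative assumptions on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Common code: 300 moments of visual experience, 35 numbers each.
n_samples, n_dims, n_voxels = 300, 35, 60
common = rng.normal(size=(n_samples, n_dims))

# One subject's voxel responses: the common code seen through an
# unknown subject-specific linear transform, plus measurement noise.
true_map = rng.normal(size=(n_dims, n_voxels))
voxels = common @ true_map + 0.05 * rng.normal(size=(n_samples, n_voxels))

# Fit the subject's "parameters": a transform from voxel space back
# into the common code, estimated from training data (e.g., a movie).
W, *_ = np.linalg.lstsq(voxels, common, rcond=None)

# A new response from the same subject, converted into the common code.
new_common = rng.normal(size=(1, n_dims))
new_voxels = new_common @ true_map + 0.05 * rng.normal(size=(1, n_voxels))
recovered = new_voxels @ W

r = np.corrcoef(recovered.ravel(), new_common.ravel())[0, 1]
print(f"correlation with true common code: {r:.2f}")
```

Once such a transform is estimated from movie-watching data, any new brain pattern from that subject can be projected into the shared space and compared directly with other subjects' codes.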
In addition to "Raiders," researchers showed subjects two sets of images while they were connected to a brain scanner, according to Guntupalli. The first set of images included a male human face, a female human face, monkey and dog faces, a house, a shoe and a chair. The second set included images of different types of primates, birds and bugs to measure how animal species are perceived in the brain, Guntupalli said.
"With the different experiments, we wanted to see if we could construct models to predict what images people are seeing based on their brain activity compared to the brain activity of others when they saw that same image," he said. "For the movie aspect, the validation of the experiment would be to see if we could predict what part of the movie a subject was watching based on their brain patterns."
Measuring brain activity by showing movies is a common practice among neurologists and psychologists, according to Haxby.
"When people watch a Hollywood movie, activity is seen in almost one-third of the brain, as opposed to 3 to 5 percent in a typical psychological study," he said. "Movies are unconstrained, and driven by plot and story line. We wanted to pick a movie that would give us a broad sample of visual experiences, one that was engaging and something you could watch two or three times and still enjoy, so we settled on 'Raiders' for our specific study."
When Haxby and Guntupalli presented the early findings of the study in 2009, the neuroscience community reacted with "skepticism," according to Guntupalli.
"I don't know if people thought it was too good to be true, or if people wanted more proof," Guntupalli said. "I do know that if [Haxby] hadn't been there with me they probably wouldn't have believed me. With this publication, people are accepting that the common code works and are starting to understand its significance."
That significance, according to Haxby, is enormous.
"As scientists, we care because this is a fundamental question about the way the brain encodes information and about the basis of experience," he said. "But having the common code can be applied in so many ways, perhaps most excitingly for the study of clinical neurological conditions. For example, we can look at whether or not people with autism have a different type of visual experience, especially for things like face shape, and how perception is altered or abnormal based on things like expertise and cultural experience."
Haxby is already considering future applications of the common code in studying biological motion, social cognition and auditory perception, he said.
"One of the interesting studies we're planning is to work with blind people and see how perception is different for people who have had no visual experience," Haxby said. "The potential for this kind of brain reading to actually dig out and produce the contents of people's thoughts is just getting started. It's a very hot area."
The Dartmouth team plans to continue collaboration with Princeton researchers in future experiments, Guntupalli said.
"Having the Princeton researchers was enormously helpful because their electrical engineering focus meant that we had different methods of approaching the same goal," he said. "They were good at the computational approach, and combined with our neurological focus it was a good partnership."