Geisel professor wins Excellence in Medical Education Award
Do medical students and their clinical evaluators agree on what constitutes actionable, constructive and helpful feedback? That is the question that internal medicine clerkship director and Geisel School of Medicine professor Hilary Ryder attempted to answer in her study, entitled “Understanding what we say: varying cultural competency amongst faculty evaluators on the internal medicine clerkship.” Through her research, which she conducted alongside University of Texas at Austin professor and former Dartmouth anthropology professor Lauren Gulbas, Ryder found that faculty evaluators and medical students often do not agree on what helpful evaluations look like.
In recognition of her work, Ryder was awarded the Association of American Medical Colleges Northeast Group on Educational Affairs Excellence in Medical Education Award on April 28.
Ryder said that while they are in medical school, medical students are evaluated on the basis of their clinical performance. As medical students rotate on a service during their clerkships, residents and faculty fill out evaluations with scale questions and comment boxes to describe each student’s strengths and weaknesses, she said, adding that students also receive feedback on their patient interactions.
This feedback is intended to help students improve their clinical work and serve as more effective clinicians in the future, Ryder said. Additionally, this feedback is used by deans in writing promotional paperwork. When writing recommendation letters for residency program applications, deans rely on the observations of supervising faculty and residents, Ryder added.
Ryder said her goal for the study was to examine whether students understand the feedback they are receiving and whether faculty understand the feedback that they are writing.
By interviewing medical school students in the first part of the study, Gulbas and Ryder found that students had a shared model of how to understand evaluation comments. As a result of these findings, Ryder said she was able to make recommendations to medical school faculty on how to create a written summary that is useful to students. Students generally found that specific details are more useful than generic feedback, she said.
To collect data, Gulbas and Ryder’s research team took sample comments from different student performance evaluations and separated them into “meaning units,” where each meaning unit covers one topic of evaluation. Students were then shown these comments and asked whether or not they were helpful. Through multi-dimensional scaling, the researchers were able to develop a model of what constitutes a helpful comment.
William Guerin MED’17, currently a medical resident at the Contra Costa Family Medicine Residency Program, was part of Gulbas and Ryder’s research team. Guerin said that “the primary work [he] did was gathering all of the data and conducting the 40-some interviews [the team] did with the students.”
The second part of the study, whose content provided the basis for the AAMC award, focused on faculty understanding of evaluations. Gulbas and Ryder studied whether faculty understanding of evaluations is the same as that of students.
Ryder and her team discovered that while some faculty members had a shared understanding of comments with students, some differed in their understanding and believed that all feedback was helpful. After discovering these results, Ryder said she sought to determine whether different faculty characteristics contribute to differences in understanding what constitutes helpful comments.
Ryder said she found that faculty with “in-depth non-clinical experience teaching had a shared model [of viewing comment helpfulness] with students.” A higher level of non-clinical student interactions, in the classroom and through mentoring, is correlated with a higher level of faculty-student agreement over the helpfulness of a comment, she noted.
“Faculty who had focused on graduate medical education or were more engaged in research — who were less engaged in formally teaching medical students — were the ones less likely to discriminate between what was helpful,” Ryder said.
Since the publication of her research, Ryder said, Geisel faculty have participated in professional development to address the issues raised in the study. She said that the quality of faculty evaluation comments improved after this training.
Guerin said that his participation in the study helped him improve as a student.
“[Getting] to talk to attendings about what they think is helpful and how they are trying to teach [and] … their pedagogic methods was invaluable,” he said.
In addition to helping Guerin improve as a student, the study also enabled him to provide feedback to medical students, his peer medical residents and supervising faculty members, according to Guerin.
“When I think about giving feedback now, anything I write, I try to be specific,” he said.
Ryder said that she is looking into the possibility of replicating the study at Albany Medical College and the Dell Medical School at the University of Texas at Austin. She added that in the future, she may also look at analyzing faculty members from different medical specialties, as this initial study only looked at internal medicine faculty.