The Dartmouth
April 27, 2024

Nelson: Why You Should Ignore College Rankings

College rankings like those by U.S. News are deeply flawed and cannot measure the student experience at universities.

In the emotional whirlwind that is applying to college, there is one beacon of hope, one storied institution that promises to make your decision for you: the fabled college ranking. These annual lists claim to empirically determine which colleges are the best and to help confused young students choose their home for the next four years. Survey data show that roughly two-thirds of college students consider these rankings, and among students with higher standardized test scores, the figure rises to more than 80%. Despite this attention, rankings such as those published by U.S. News are a flawed way of evaluating universities and should not be considered by applicants or students.

Dartmouth students pay keen attention to rankings as well. When U.S. News & World Report, the most prominent publisher of college rankings, released its annual list last September, Dartmouth dropped to number 18 in the National Universities category. The anonymous posting app Fizz provided a glimpse into student reactions. One post calling U.S. News “a bunch of f****** idiots” received over 1,400 upvotes. Another post calling the list “bs” received over 1,300 upvotes, and in an anonymous poll, 860 students, or 83% of respondents, said they disagreed with Dartmouth’s placement. It is clear that some Dartmouth students derive a sense of pride from the reputation of our institution and use rankings as a proxy for prestige.

This obsession is unwarranted: the rankings are not worthy of our consideration. Not only is U.S. News’s methodology unable to adequately gauge the student experience at universities, but it also incentivizes harmful behaviors from students and universities alike. Furthermore, ever-changing methodologies make the rankings unreliable, and some aspects of the appraisals involve a level of subjectivity that discredits the rankings entirely.

U.S. News publishes the criteria it uses to compare schools, and several issues are apparent in its reasoning. The largest factor contributing to a school’s ranking, at 20%, is the mysterious peer assessment, in which hundreds of school administrators rate schools based on perceived quality. These assessments are subjective: administrators are asked to rate schools with which they likely have no intimate experience, so the results capture superficial impressions of schools’ reputations rather than any evaluation of their merits. Standardized test scores make up 5% of the ranking, a statistic that no longer carries any weight now that test-optional policies filter out low scores at most institutions.

Another factor, “graduation rate performance,” determines 10% of the ranking. U.S. News defines it as a comparison of a school’s actual graduation rate to the rate U.S. News itself predicted. Built on little more than an arbitrary guess, this statistic has scant relevance to any student’s college experience.

Perhaps as telling as the included factors are the considerations the rankings leave out. There is no mention of student satisfaction or happiness, no mention of tuition or financial accessibility and no inclusion of class sizes, one of Dartmouth’s greatest strengths. Job placement and starting salary after graduation are also missing from the criteria, which is ironic considering that this is potentially the one domain where a university’s prestige may influence our lives. There are no measures of extracurricular participation, athletic opportunities or campus life in any capacity. In fact, there is no student input considered at all. The common thread running through this methodology is that very few of the metrics address the student experience. This is an inherent flaw in the ranking industry and is part of the reason why rankings cannot meaningfully define any school.

Furthermore, U.S. News’s methodology is constantly changing. In 2023, U.S. News revised its criteria significantly, and as a result, 25% of schools in the National Universities rankings moved 30 places or more. This gives the impression that schools are rapidly improving or declining, when in reality, U.S. News’s own ever-changing criteria are responsible for the vast majority of the movement. This is part of U.S. News’s business model: if the rankings never changed, nobody would have any reason to visit its website. It must be understood that when a school’s ranking changes, it has nothing to do with the quality of education or experience. There is no reason to give these protean rankings any weight when movement on the list reflects U.S. News’s profit strategy rather than any changes made by schools.

Additionally, some of the criteria can incentivize harmful behaviors from colleges. Since the rankings can influence admissions statistics and reputation, colleges may benefit from pandering to the ranking categories. For example, 4% of a college’s ranking is determined by the quantity and quality of faculty research, which can push schools to hire professors focused heavily on research rather than those who will be the best instructors for undergraduates. Another 8% of a college’s ranking comes from spending per student, incentivizing schools to overspend and potentially leading to future tuition increases.

The rankings’ new emphasis on graduation rates, which now account for 37% of a school’s score, can also create perverse incentives. As Columbia University professor Michael Thaddeus, an outspoken critic of rankings, put it, “When U.S. News emphasized selectivity, the elite universities were drawn into a selectivity arms race and drove their acceptance rates down to absurdly low levels. Now it instead emphasizes graduation rates, and it is not hard to foresee that these same universities will graduate more and more students whose records do not warrant it, just to keep graduation rates high.” While it is difficult to say just how much grade inflation will be affected, the system creates incentives that run contrary to the missions of institutions of higher learning. When you also consider the damaging effects that rankings can have on high school students’ college decisions, it is clear that they do more harm than good.

Schools are so much more than rankings, and every person sees every school differently. Artificially determined rankings cannot be true for everyone because they attempt to rationally compare things that cannot easily be compared. “The best school” means something different to everybody, and yet the rankings suggest that they can accurately determine objective prestige and academic quality. But these rankings don’t measure prestige; they arbitrarily manufacture and maintain it. That sense of prestige is inherently hollow and subjective, and it is unhealthy to care about such a meaningless metric of status. Our sense of self-worth should not come from the status of our institutions, and our perceptions of institutional quality certainly should not come from flawed rankings. For the good of colleges, prospective students, student bodies and all of us as individuals, college rankings should be ignored.