RANKINGS RESEARCH | May is traditionally a time when high-school students kick up their heels, burn their books, and prepare to ride out a homework-less summer. But rising high-school seniors across America are already poring over the books that loom large in the next phase of their educational careers: college ranking guides.
With the help (or hindrance) of their anxious parents [see “Alumni Voices,” this issue], seniors are cracking the spines of glossy, freshly published guides like the Princeton Review’s Gourman Report, Rugg’s Recommendations on the Colleges, and U.S. News & World Report’s hallowed “America’s Best Colleges,” all in order to figure out the best colleges they have a shot at getting into.
But should they be? How accurately do these guides capture college quality, anyway? That’s the question that Andrew Metrick, professor of finance at Penn’s Wharton School, and his three colleagues had in mind when they designed their new, market-driven ranking system. Published in October on the Social Science Research Network, “A Revealed Preference Ranking of U.S. Colleges and Universities” outlines a form of college rankings based on the desirability of each college as viewed by applicants.
“We were all talking about a certain amount of frustration with the college rankings as we saw them,” Metrick says of himself and his co-authors, Christopher Avery and Caroline Hoxby of Harvard, and Mark Glickman of Boston University. “The measures that are out there attempt to be a measure of quality, but are essentially an arbitrary collection of what one person thinks is important. We would rather see a ranking system that ranks based on desirability, based on what students actually do rather than what some committee thinks is important.”
U.S. News & World Report’s ranking system bases its findings on a complicated algorithm that takes into account variables such as admissions, retention, and graduation rates; financial-aid packages; student-to-faculty ratios; and school selectivity. Metrick’s system, by contrast, rates more than 100 of America’s top colleges solely on the basis of where students decide to go after receiving all their admissions offers. Using a model styled after the tournament ranking systems used in chess and tennis, the study puts the colleges that admitted a given student into head-to-head competition, with the “winning” college being the one in which the student actually enrolls.
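To make the tournament analogy concrete, here is a minimal sketch, in Python, of how such a head-to-head tally could work, using a simple Elo-style rating update of the kind used in chess. The college names and “match” data below are hypothetical, and the paper’s actual statistical model is considerably more refined than this illustration.

```python
from collections import defaultdict

K = 32  # Elo sensitivity constant; 32 is a common chess default


def expected_score(r_a, r_b):
    """Elo's predicted probability that a college rated r_a 'beats' one rated r_b."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))


def rank_colleges(matriculations):
    """matriculations: list of (enrolled, other_admits) pairs, one per student.
    Enrolling counts as a 'win' over every rival school that also admitted
    the student."""
    ratings = defaultdict(lambda: 1500.0)  # all colleges start with equal ratings
    for winner, losers in matriculations:
        for loser in losers:
            p_win = expected_score(ratings[winner], ratings[loser])
            ratings[winner] += K * (1.0 - p_win)  # upset wins move ratings more
            ratings[loser] -= K * (1.0 - p_win)
    return sorted(ratings.items(), key=lambda kv: kv[1], reverse=True)


# Hypothetical students: (college they enrolled at, other colleges that
# admitted them). None of this is data from the study.
students = [
    ("College A", ["College B", "College C"]),
    ("College B", ["College C"]),
    ("College A", ["College B"]),
]

for college, rating in rank_colleges(students):
    print(f"{college}: {rating:.0f}")
```

One caveat the sketch makes visible: Elo ratings depend on the order in which matches are processed, which is one reason a statistical model that fits all the head-to-head outcomes jointly, as the authors do, is preferable; the code is meant only to convey the win/loss intuition.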
“There’s a policy angle on this,” Metrick explains. “Many college admissions officers will tell you, off the record at least, that yes, the fact that there are ranking systems out there that take account of things like admission and matriculation rates as a measure of how selective the school is” puts pressure on schools to maneuver their way into a higher ranking. “These measures are easily manipulated, and schools do manipulate them, to the detriment of students,” Metrick adds.
Policies such as Early Decision, in which students can apply to a college in November and receive a decision by late December but must promise to attend that school if admitted, bump schools upward in the rankings game. Critics also contend that, in order to boost their perceived selectivity and desirability, colleges have taken to encouraging applications from students unlikely to be admitted and rejecting suitable applicants who they think are applying to their school as a “safety” and are less likely to attend if offered a spot.
“The only way to manipulate our measurement,” says Metrick, “is to make your school more desirable.”
The study looked at 3,240 high-school seniors, drawn from 510 high schools, who were identified as high achievers by their guidance counselors in accordance with criteria provided by the authors. These seniors were surveyed twice over the course of the academic year to gather information about their backgrounds, applications, and test scores, as well as the final outcomes of the admissions process. Financial-aid and scholarship offers were taken into account in the final ranking.
The new system hasn’t been without detractors, however. “One of the main questions has been, ‘Well, why do you think that high-school seniors are somehow the ones that have the best idea of how good a school is?’” Metrick says. “That question is closely tied to a related critique that says we’re just measuring fashion, fads in what people like, and that we should be measuring how much education someone gets 10 years out of school.
“Those critics are fundamentally attacking the wrong thing,” he asserts. “We’re not attempting to measure the quality of a school’s education; we never wanted to do that. Instead, we’re attempting to measure how desirable schools are to high-school seniors. We just happen to think that measure is useful.”
For all the flak Metrick and his co-authors have been taking over the validity of their system, the “biggest surprise” to them is the fact that “the rankings come out looking so unbelievably reasonable.” Most of the schools in the top 20 of U.S. News & World Report are also in the top 20 of the preference ranking system. Harvard and Yale come out ranked first and second, followed by Stanford, Caltech, MIT, Princeton, Brown, and Columbia. Penn, which is ranked fourth in the U.S. News report, comes in 12th in the new system.
Metrick believes Penn would do better with newer data; the study’s data are from 2000. “Perceptions of top universities don’t really change much even over a period of 20 years, but Penn is really an exception to that rule in terms of how we’ve done over the past five years in high-school seniors’ perception of our desirability,” he says. “Penn is one of the few schools whose stock has really risen.”
The controversy over college rankings is “really just the juice that gets people to listen,” Metrick stresses. “We think the real content here is trying to continue a dialogue about the pressures that high-school students are under, and the pressures that colleges are under to make smart admissions decisions.”
—Alison Stoltzfus C’05