Too much emphasis is placed on national rankings


Since 1983, when the ubiquitous U.S. News & World Report college rankings were first published, rankings have been an obsession of universities and overachieving students across the country.

For many universities, a high ranking in one of the college ranking publications such as U.S. News & World Report, Money or Forbes can be a make-or-break proposition for attracting top students.

In fact, the stakes are so high that several schools, including Atlanta-based Emory University, have admitted to manipulating entering SAT scores and other data, prompting Forbes to announce this month that those schools would be removed from its college rankings.

So with all that hand-wringing, are rankings really worth all the fuss? I don’t think so.

An important question to ask is what the rankings actually measure.

In its methodology, U.S. News tells us that “the rankings allow you to compare at a glance the relative quality of institutions based on such widely accepted indicators of excellence as freshman retention, graduation rates and the strength of the faculty.”

But the ranking methodology has been criticized by many academics and higher-education journalists, including Deb Hourly, Vice Chair for Research and Associate Professor at the Emory School of Medicine, and Joe Nocera, an op-ed columnist for the New York Times.

A common criticism is that the rankings are heavily weighted towards entering SAT scores, class rank and selectivity, a measure of how many students are accepted compared to how many apply.

These measures, however, may not be as indicative of academic excellence or future success as the rankings suggest.

As a quick exercise, look up the top 20 universities on U.S. News National University Rankings and the 20 highest mean entering SAT scores and compare them.

I did this, and only three of the 20 highest-SAT schools were missing from the U.S. News top 20: Harvey Mudd College, which is not included in the National University rankings because it does not offer a PhD program, along with Swarthmore College and Pomona College.

To its credit, U.S. News advises in its methodology that “the intangibles that make up the college experience can’t be measured by a series of data points,” and that the rankings are meant to serve as a starting point for the college search.

The problem is that many students and parents look to the rankings to determine the so-called “best” schools, full stop.

As a result, the pressure on universities to improve their positions on the lists is enormous.

Colleges may, for instance, spend resources persuading more students to apply, only to reject them, simply to inflate the school’s selectivity. Many of these strategies are ultimately bad for students as well as for the schools that recruit them.

What about universities that decide to forgo the traditional rankings system? Where do they fall?

In 2007, there was a highly public spat between several small colleges and U.S. News.

In an op-ed in the Washington Post, Michele Tolela Myers, President of Sarah Lawrence College, claimed that after the school dropped the SAT as an admissions requirement, she was informed by the director of data at U.S. News that, absent the scores, the magazine would assign an arbitrary SAT score one standard deviation below the average score of the school’s peer group.

In an official statement, U.S. News denied that it had decided how to handle colleges without SAT scores.

Ultimately, Sarah Lawrence and around 100 other colleges and universities did not participate in the reputation survey used by U.S. News for its rankings.

The prevalence of the rankings in students’ college choice, however, means that the vast majority of universities will still play the game.

Another major problem with the rankings is that the prominence of class rank and standardized test scores means that much of a school’s rank is determined by the quality of its entering students, as measured by those statistics.

Even if students learn little once they arrive, the school might still achieve a high ranking. A school’s ranking says far more about its applicants than its graduates.

This essentially boils a lot of these rankings down to the prestige of a university. If people believe a university is prestigious, it will attract many high-quality applicants and have high selectivity. Those factors, in turn, produce and sustain a good ranking even if the academic programs are less than stellar.

Surely the rankings should mostly reflect how much you can expect to learn or the value of your degree in the workplace rather than how good high school students think your college is.

Let’s step back from the particular methodology, however.

Many rankings use different methodologies that reflect things like the average starting salary of graduates or surveys of academic professionals.

The whole idea is inherently problematic because any ranking system is implicitly deciding for you what is important and what is not.

Ultimately, which university you end up attending should be based on you: how much the student-faculty ratio matters to you, how much hands-on experience you get in nursing classes or the quality of the undergraduate research programs.

While it is not wrong to use rankings to get a broad list of schools, relying too heavily on a ranking system causes you to miss out on what is important to you.

I chose Tech because of the strength and rigor of its Computer Engineering and Computer Science programs. It offers great opportunities for undergraduate research, co-ops and internships. Despite its reputation as primarily a technical university, it has a diverse student body with people interested in music, theatre and sports. It has a unique culture that embraces our nerdy aspects and wears them with pride. This is why I love Tech.

Choosing your college should be a personal choice, not one that is forced on you by essentially arbitrary website rankings.

So please: when you talk to your old high school buddies about Tech, don’t just say “we’re ranked #36.”
