How do ranking systems for universities rate against each other? (from Flowing Data)
There’s been lots of moaning on this blog about the utility of university ranking systems (see tag cloud at right for examples). One issue of particular interest is the extent to which the ranking systems all reliably measure the same underlying construct – that of ‘university quality’. Without the required intercorrelation and other statistical analyses, it is difficult to know what to make of the differences between the competing systems.
Here’s a great post from Flowing Data that illustrates the problem well (source post at the Chronicle of Higher Education). The differing systems just don’t overlap very well – which suggests they are not all measuring the same thing – no matter what anybody says!
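To make the intercorrelation point concrete, here is a minimal sketch of the kind of analysis that would be needed: Spearman's rank correlation between two ranking systems. The rankings below are invented for illustration only – they are not real data from any actual rating system.

```python
# Hypothetical sketch: how similar are two ranking systems?
# Spearman's rho for rankings with no ties:
#   rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))
# where d_i is the difference in the two ranks given to item i.

def spearman_rho(ranks_a, ranks_b):
    """Spearman rank correlation for two untied rankings."""
    n = len(ranks_a)
    d_sq = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d_sq / (n * (n ** 2 - 1))

# Ten hypothetical universities, ranked 1..10 by two invented systems.
system_a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
system_b = [2, 1, 4, 3, 7, 5, 6, 10, 8, 9]

print(round(spearman_rho(system_a, system_b), 3))  # → 0.903
```

A rho near 1 would suggest the two systems rank institutions in nearly the same order (i.e., they may be tapping the same construct); a rho near 0 would suggest they are measuring quite different things. Repeating this across every pair of published ranking systems is the sort of evidence the post above argues is missing.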
Various ways to rate a college
There are a bunch of college ratings out there to help students decide what college to apply to (and give something for alumni to gloat about). The tough part is that there doesn’t seem to be any agreement on what makes a good college. Alex Richards and Ron Coddington describe the discrepancies.
Notice how few measures are shared by two or more raters. That indicates a lack of agreement among them on what defines quality. Much of the emphasis is on “input measures” such as student selectivity, faculty-student ratio, and retention of freshmen. Except for graduation rates, almost no “outcome measures,” such as whether a student comes out prepared to succeed in the work force, are used.
This, on top of spotty data across universities, makes it difficult to know which ranking to follow, especially for schools whose ratings are close to each other. The same goes for other types of ratings. Any headline that starts with “Best states/countries/schools/programs/etc to…” requires some salt, since rankings can change dramatically depending on the measures.
But you already knew that, right?
One thing is for sure though. UCLA and Cal stat departments are the best programs to be in. That’s fact.