Measurement bias in university rankings: University Ranking Watch
There have been many comments on this blog about the lack of basic attention to measurement theory in the construction of university quality rankings. A previous post here asked:
How long can it be before someone conducts an inter-correlational analysis to see to what extent all of the differing ranking systems are actually measuring the same thing? This kind of statistical meta-analysis using all of the data from the various ranking systems [from relative web presence to citation measures to student employment to student satisfaction to grants gained to patents published to spin-outs started to teaching quality assurance to research assessment, etc., etc.] should reveal a statistical ‘positive manifold’ (i.e. they are all highly inter-correlated), if they are actually measuring the same thing.
Here is a partial comparative analysis, for Cambridge University and for France. Money quotes:
The national bias of the Paris Mines ranking is indisputable. There the top French institution is in sixth place. In the most recent THE-QS rankings the top French institution was 38th, in the Russian RatER rankings 36th, in the Shanghai Academic Ranking of World Universities 40th, in the Taiwan rankings 88th and in Webometrics 129th.
The old THE-QS rankings were pretty obviously biased in favour of British universities. Last year it had Cambridge in second place. The Shanghai rankings put it in 4th place, although that will not be sustained as the impact of old Nobel winners fades. In the Paris Mines ranking it was 7th, in the Russian rankings 8th, in the Taiwan rankings 15th, in Webometrics 22nd, in SCImago 34th and in the Leiden green index (the size-independent, field-normalized average impact) 37th.
Not a proper statistical analysis, obviously, but examples of inter-ranking variability this large have to be a major cause for concern.
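The inter-correlational analysis proposed above is straightforward to sketch. Because the ranking systems produce ordinal placements, Spearman rank correlation is the natural choice: compute it pairwise across systems for a common set of universities, and a "positive manifold" would show up as uniformly high coefficients. The ranks below are hypothetical placeholders for illustration only, not data from any actual ranking.

```python
# Sketch of the proposed inter-correlation ("positive manifold") check.
# The ranking data are hypothetical, not taken from any real ranking system.

def spearman(xs, ys):
    """Spearman rank correlation for two rank lists without ties."""
    n = len(xs)
    order = lambda v: sorted(range(n), key=lambda i: v[i])
    rx, ry = [0] * n, [0] * n
    for r, i in enumerate(order(xs)):
        rx[i] = r
    for r, i in enumerate(order(ys)):
        ry[i] = r
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical placements of five universities under three systems.
systems = {
    "THE-QS":      [1, 2, 3, 4, 5],
    "Shanghai":    [2, 1, 3, 5, 4],
    "Webometrics": [1, 3, 2, 4, 5],
}

names = list(systems)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        rho = spearman(systems[a], systems[b])
        print(f"{a} vs {b}: rho = {rho:.2f}")
```

With many universities and all the ranking systems listed in the quote, the full pairwise matrix would directly answer the question posed: coefficients near 1 throughout would indicate the systems measure much the same construct; the wide placement swings quoted above suggest they may not.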