A review of ranking systems for universities (Annual review of university rankings « Ninth Level Ireland)
“EUA (the European University Association) has said it intends to publish an annual review of world university rankings. Given the growing number of league tables and rankings, national and international, and the impact of these on ‘decision-making and activities in universities across Europe’ EUA has, rather helpfully, decided to publish an annual review of university rankings …” (more)
So another ranking analysis is inevitably going to appear in the near future, based this time on the work of the EUA. See this blog for a partial listing of the differing systems, and see this blog for critical comment on them.
1. Will anyone constructing a new ranking system pay attention to basic measurement theory? (You know – the boring stuff like reliability and validity, the tedious detail that tells you whether a ranking actually relates to the construct it purports to measure: the ‘quality’ of a university, however that is defined.) Otherwise we are going to make the usual error of mistaking numbers for data.
2. How long can it be before someone conducts an inter-correlational analysis to see to what extent the differing ranking systems are actually measuring the same thing? This kind of statistical meta-analysis, using all of the data from the various ranking systems [from relative web presence to citation measures to student employment to student satisfaction to grants gained to patents published to spin-outs started to teaching quality assurance to research assessment, etc., etc.], should reveal a statistical ‘positive manifold’ (i.e. the rankings are all highly inter-correlated) if they really are measuring the same thing. There’s a nice MSc thesis for someone to do – just credit this blog please! It certainly would be nice to see the EUA review attempt something quantitative of this type.
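As a rough illustration of what such an inter-correlational analysis might look like, here is a minimal sketch in Python using synthetic data for three hypothetical indicators (the indicator names and the latent ‘quality’ factor are assumptions for illustration only; the real thesis would of course use actual data from the published ranking systems):

```python
import numpy as np

rng = np.random.default_rng(0)
n_universities = 200

# Simulate a single latent 'quality' factor plus indicator-specific noise --
# this is the data-generating structure a positive manifold would imply.
quality = rng.normal(size=n_universities)
indicators = np.column_stack([
    quality + rng.normal(scale=0.5, size=n_universities),  # e.g. a citation measure
    quality + rng.normal(scale=0.7, size=n_universities),  # e.g. web presence
    quality + rng.normal(scale=0.9, size=n_universities),  # e.g. student satisfaction
])

# Inter-correlation matrix across the indicators: a 'positive manifold'
# means all off-diagonal correlations are positive (and typically sizeable).
r = np.corrcoef(indicators, rowvar=False)
off_diagonal = r[~np.eye(3, dtype=bool)]
print(np.round(r, 2))
print("positive manifold:", bool((off_diagonal > 0).all()))
```

If the various ranking systems were genuinely tapping a common construct, the off-diagonal correlations computed this way from the real data would all be substantially positive; if they are not, the case that the rankings measure one underlying ‘quality’ collapses.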
Update: have a look at this post on digital scholarship and rankings.