How is (Irish) University Quality Measured? And are Irish Universities Obviously of Such Low Quality?
There is a very provocative post entitled ‘Low Quality of Irish Universities Confirmed’ by Michael Moore at http://www.irisheconomy.ie/index.php/2010/01/05/low-quality-of-irish-universities-confirmed/.
There are a great many points worthy of commentary in the post. Its essence, though, is that Irish Universities perform badly or very badly on the annual Shanghai Jiao Tong University rankings, while Irish Universities (or two of them – TCD and UCD) perform well on another measurement system (I must presume the Times Higher Education Supplement – QS rankings; full disclosure – I have been an invited peer reviewer on several occasions for the latter). This is apparently a problem, as the JT rankings provide an objective measure of the quality of Irish (and other) Universities, one that is 'almost impossible to game'.
So how do the differing methodologies work?
The JT rankings are derived as follows (http://www.arwu.org/AboutARWU.jsp):
The Academic Ranking of World Universities (ARWU) was first published in June 2003 by the Center for World-Class Universities and the Institute of Higher Education of Shanghai Jiao Tong University, China, and is updated on an annual basis. ARWU uses six objective indicators to rank world universities, including the number of alumni and staff winning Nobel Prizes and Fields Medals, the number of highly cited researchers selected by Thomson Scientific, the number of articles published in Nature and Science, the number of articles indexed in the Science Citation Index – Expanded and the Social Sciences Citation Index, and per capita performance with respect to the size of an institution.
The THES-QS methodology is different; its rankings are derived as follows:
40% Academic Peer Review: Composite score drawn from peer review survey (which is divided into five subject areas). 9,386 responses in 2009 (6,354 in 2008).
10% Employer Review: Score based on responses to employer survey. 3,281 responses in 2009 (2,339 in 2008).
20% Faculty Student Ratio: Score based on student–faculty ratio.
20% Citations per Faculty: Score based on research performance factored against the size of the research body.
5% International Faculty: Score based on proportion of international faculty.
5% International Students: Score based on proportion of international students.
So the two ranking systems use very different methodologies and weightings to arrive at their rankings.
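The THES-QS scheme, at least, amounts to a simple weighted sum of normalised component scores. As a minimal sketch in Python (the weights are the published 2009 THE-QS component weights; the institution's component scores below are entirely hypothetical, for illustration only):

```python
# Published 2009 THE-QS component weights (sum to 1.0).
weights = {
    "academic_peer_review": 0.40,
    "employer_review": 0.10,
    "faculty_student_ratio": 0.20,
    "citations_per_faculty": 0.20,
    "international_faculty": 0.05,
    "international_students": 0.05,
}

def composite_score(scores):
    """Weighted sum of component scores, each normalised to 0-100."""
    return sum(weights[k] * scores[k] for k in weights)

# A hypothetical institution's normalised component scores:
example = {
    "academic_peer_review": 90.0,
    "employer_review": 80.0,
    "faculty_student_ratio": 70.0,
    "citations_per_faculty": 60.0,
    "international_faculty": 95.0,
    "international_students": 85.0,
}

print(composite_score(example))  # 79.0
```

The point the sketch makes concrete is that the final ranking is extremely sensitive to the choice of weights: shift weight from citations to peer review and the ordering of institutions can change with no change in the underlying data.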
Some comments on the methodologies and weightings:
- The JT rankings provide no data on undergraduates as outputs, for example, whereas a huge emphasis is given to employer reviews of graduates in the THES-QS survey (how can this be gamed?);
- Neither measurement system provides data on UG completion rates or employment salary uplift as a function of institution;
- Neither measures UG or PG satisfaction;
- One is weighted very heavily in favour of prizes given by certain committees; the other ignores this entirely (although peer review might provide it by proxy);
- One places a major weighting on the editorial decisions of two major peer-reviewed scientific journals, the other does not;
- Neither measures web presence, associated web traffic, or international databases hosted;
- Neither measures total income per student or research income per student (nor does either normalise these measures against average national or international norms);
- Neither measures patents registered, companies spun out, or royalty streams from inventions.
So there appear to be many important variables missing from both methodologies (there are probably many others which I haven’t thought of).
I know of no data suggesting that one system is more widely used than the other; my guess is that students and others proceed using either, both, or neither, using a cognitive heuristic such as availability, and perhaps even relying on factors such as ease and speed of access to international student visas, or aggregating websites that provide information on studentships, etc.
We are still left with the question of how quality is to be measured. The assumption underlying the irisheconomy post is that the JT method does measure quality, and the alternatives do not. How do we measure quality? Even more importantly, how do you define it, especially in the context of complex institutions, such as Universities? Measurement theory suggests that the methodology should have face validity, construct validity and predictive validity at a minimum. It is not at all clear how these criteria are met by either of the surveys.
There are other, entirely citation-based measures available, and these have been cited in previous posts here; on these, the relative ranking of Ireland as a whole has improved quite a bit over the years. These measures can't be gamed (but neither are they a sufficient measure of institutional quality).
Plumping for one over the other might simply reflect one’s own underlying prejudices more than anything else – being in an institution ranked at 43 is certainly more gratifying than being at one ranked somewhere in the mid-100s!