Frank Ziegele and Gero Federkeil, 29 July 2012, Issue No: 232
In a recent article in University World News, Phil Baty, editor of the Times Higher Education World University Rankings, warned that rankings need to be handled with care. If we consider the impact international rankings have today, we can only agree with Baty’s notion that “authority brings responsibility”.
In more and more countries – Baty cited examples – a good position in the major global league tables plays a decisive role in universities’ policies on cooperation with foreign institutions, as well as in the recognition of foreign degrees and the portability of loans and scholarships.
These are clear signs of a dangerous overuse of rankings. No ranking has been introduced for these purposes and – hopefully – most producers of rankings would reject this role.
But we want to argue that ranking providers should not merely object to misuse: more importantly, they should design rankings in a way that makes misuse difficult and guides users to apply them in an appropriate and meaningful way.
The ‘composite indicator’ problem
One of the major mistakes of rankings is the use of a ‘composite indicator’. A more or less broad variety of indicators is weighted and aggregated into an overall score for the whole university. One number is thus intended to measure the complex performance of a university!
If rankings provide information in this way, they seduce users into making decisions based on that one number. This is surely an oversimplification of quality in higher education.
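To make the mechanics concrete, here is a minimal sketch in Python of how such a composite score is typically built: a weighted sum of normalised indicator scores. The institutions, scores and weighting schemes below are invented for illustration and do not reflect any actual ranking’s method; the point is simply that the choice of weights alone can decide who comes out ‘best’.

```python
# Illustrative sketch only (not the method of any real ranking): a composite
# indicator collapses several normalised indicator scores into one weighted sum.
# All universities, scores and weights below are invented.

scores = {
    "University A": {"research": 0.9, "teaching": 0.5, "international": 0.6},
    "University B": {"research": 0.6, "teaching": 0.9, "international": 0.7},
}

def composite(indicator_scores, weights):
    """Weighted sum of indicator scores -- the 'one number' for a university."""
    return sum(weights[k] * v for k, v in indicator_scores.items())

# Two equally arbitrary weighting schemes...
weights_research_heavy = {"research": 0.6, "teaching": 0.2, "international": 0.2}
weights_teaching_heavy = {"research": 0.2, "teaching": 0.6, "international": 0.2}

for label, weights in [("research-heavy", weights_research_heavy),
                       ("teaching-heavy", weights_teaching_heavy)]:
    ranked = sorted(scores, key=lambda u: composite(scores[u], weights), reverse=True)
    print(label, "->", ranked)

# ...produce opposite league tables from identical data:
# research-heavy -> ['University A', 'University B']
# teaching-heavy -> ['University B', 'University A']
```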
Rankings can provide some quantitative information on particular aspects of the performance of universities – teaching and learning, research, international orientation and others. To do this, they have to focus on a limited number of selected dimensions and indicators, which means no ranking is able to reflect the full complexity of universities.
Some global rankings that focus on reputation measure nothing more than the strength of universities’ global brands, which may not correlate with their actual performance. Yet their results in turn influence that very reputation.
Other specialised rankings, such as the Webometrics ranking, measure only the success of universities’ efforts to establish a web presence, not their teaching or research performance. Despite this, users are lured into believing they can identify the best universities in the world with such rankings.
U-Multirank
How can we change this? The magic words are ‘multi-dimensional’ and ‘user-driven’ ranking.
The U-Multirank project, initiated by the European Commission, developed and tested the feasibility of such a system.
Different stakeholders and users have different ideas about what constitutes a high-quality university and hence different preferences and priorities regarding the relevance of indicators. There are neither theoretical nor empirical arguments for assigning a particular pre-defined weight to an indicator.
U-Multirank takes these points seriously by leaving the decision about the relevance of indicators to the users of the ranking. It presents a separate ranking list for every single indicator and proposes an interactive web tool that allows users to choose the indicators most relevant to them.
Moreover, the set of indicators is not restricted to bibliometric research performance, but also includes dimensions such as teaching and learning, knowledge transfer, regional engagement and international orientation. This multi-dimensional approach is able to make the different institutional profiles and the particular strengths and weaknesses of universities transparent.
In combination with its grouping approach (building three to five performance groups instead of calculating a pseudo-exact league table), U-Multirank avoids the lure of oversimplification inherent in the attempts to crown the ‘best’ university in the world.
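A rough sketch of these two ideas together, with hypothetical indicators, scores and group thresholds (not U-Multirank’s actual data, methodology or cut-offs): each indicator is reported separately, the user picks the indicator that matters to them, and institutions are sorted into a few broad performance groups rather than an exact league table.

```python
# Rough sketch of a per-indicator, grouped ranking (hypothetical data and
# thresholds, not U-Multirank's actual methodology or cut-offs).

indicator_scores = {
    "citation_rate": {"Uni A": 1.8, "Uni B": 1.1, "Uni C": 0.7, "Uni D": 1.7},
    "student_satisfaction": {"Uni A": 3.1, "Uni B": 4.4, "Uni C": 4.2, "Uni D": 2.9},
}

def performance_group(value, top, bottom):
    """Place a score into one of three broad groups instead of an exact rank."""
    if value >= top:
        return "top group"
    if value <= bottom:
        return "bottom group"
    return "middle group"

# The user, not the ranking provider, decides which indicator matters.
chosen_indicator = "student_satisfaction"
thresholds = {"top": 4.0, "bottom": 3.0}  # assumed cut-offs for illustration

for uni, value in sorted(indicator_scores[chosen_indicator].items()):
    print(uni, "->", performance_group(value, **thresholds))
```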
The provision of more differentiated and, admittedly, more complicated information reduces the pressure to change the methodology just to produce a different list from the year before. Since a major quality criterion for rankings is the stability of their methodology, this further increases the value of the multi-dimensional approach.
The development of the U-Multirank model, and the response to it within higher education and among stakeholders, has already stimulated a number of changes in the traditional global rankings. Some now also work on field-based rankings and some have started to include interactive features that allow for a degree of user-driven choice.
However, they still stick to league tables and composite indicators instead of providing a truly multi-dimensional and user-driven ranking. Let’s start the democratisation of rankings by leaving the choice entirely to the user.
U-Multirank also looks for a broader and stakeholder-oriented approach in generating ranking data: the idea, which was tested in the feasibility study, is to combine international (bibliometric and patent) databases with the outcomes of institutional, student and alumni surveys.
This allows the comparison of, for instance, facts about study programmes (as U-Multirank provides a field-based ranking) with student satisfaction surveys, leading to a differentiated picture of performance.
If you only know the student-staff ratio, you cannot tell whether a favourable ratio reflects high-quality teaching in small groups or simply weak demand caused by poor quality. As soon as you can relate the ratio to students’ judgments of their contact with teachers, you get a better impression of performance.
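As a small illustration with invented numbers: two departments with the same favourable student-staff ratio can tell very different stories once the students’ own judgment of teacher contact is placed next to it.

```python
# Invented numbers to illustrate the interpretation problem: the same
# favourable student-staff ratio can mean small-group quality or merely
# weak demand; the students' judgment of teacher contact separates the two.

departments = {
    # name: (students per staff member, mean rating of teacher contact, 1-5 scale)
    "Department X": (9.0, 4.6),  # favourable ratio and satisfied students
    "Department Y": (9.0, 2.3),  # same ratio, but students report poor contact
}

for name, (ratio, contact_rating) in departments.items():
    if contact_rating >= 4.0:
        reading = "likely genuine small-group quality"
    else:
        reading = "favourable ratio, but possibly just weak demand"
    print(f"{name}: ratio {ratio:.0f}:1, contact rating {contact_rating} -> {reading}")
```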
We have heard the objections to U-Multirank: “Is this still a ranking?”, “Will users understand this?” and “People still want to know who is number one!”
We would answer: as U-Multirank still shows vertical diversity by measuring performance, it is a ranking system. To make it understandable despite the complexity, the user-friendliness of the web portal will be of major importance. And, last but not least, we believe in intelligent users.
The next phase of the European Commission’s project has to demonstrate that all this can be implemented as a stable system.
* Professor Frank Ziegele is managing director and Gero Federkeil is manager in charge of rankings at the Centre for Higher Education Development in Germany.