Analysing the Rankings

I recently had the pleasure of recording this podcast with Olivia Kew-Fickus, Chief Data Officer at Vanderbilt University. Amongst many other interesting points, Olivia commented on this recent report, which was commissioned by her university to take a close look at university rankings methodologies. NORC, the organisation which undertook the work, is an independent affiliate of the University of Chicago. They really did have a good go at this, and the product of their efforts is this very detailed and rigorous analysis.

The aim was to review the methodologies of five major league tables to assess whether they accurately capture the qualities that form an overall “ranking.”  The focus is on US rankings but the two major international league tables are also covered. NORC looked at the following rankings: U.S. News & World Report (USNWR), Wall Street Journal (WSJ), Forbes’ Top College list (Forbes), The Times Higher Education World University Ranking (THE), and the QS World University Ranking (QS).

Given that these rankings are used by just about everyone with an interest in higher education – prospective students, parents and sponsors, governments, funders and universities themselves – they really do matter and can have profound real-world impacts on institutions. Their methodologies, though, tend to be opaque, change regularly (just to freshen things up) and have a number of other shortcomings highlighted in the report.

Key findings include:

  • Methodologies are unclear, and the rationale for the relative weights of various attributes included in rankings is unknown. The concepts captured by the rankings also are not clear. Researchers say this makes it impossible to know exactly what is being measured and how much it should “count” in a final assessment. 
  • Data quality is inconsistent, which hinders accurate assessments of various measures, even openly defined ones like graduation rates, student debt, and value-added earnings. 
  • Some factors assessed are highly subjective but are critical components in the ranking process, which makes it difficult to establish definitive comparisons between institutions.

This perhaps comes as no surprise to many. But the depth and rigour of the critique is rather refreshing.

There are sensible and measured recommendations for change which would, the authors say, significantly enhance clarity and transparency in the assessment and ranking process.

Assessing construct validity

The broad approach taken by NORC was to assess the construct validity of this set of university rankings. They summarised this as follows:

Construct validity is a crucial property in the development of numerical scales, such as rankings, that aim to quantitatively describe otherwise abstract qualities of social entities such as ‘market value’ or ‘educational quality.’ Our assessment includes a review of conceptual, data, and methodological factors within this framework that are central to developing a methodologically sound ranking of colleges, drawing examples from our five illustrative ranking systems. 

On the back of this analysis, they make a number of recommendations which, in highly summarised form, include:

  • a reconceptualization of ranking systems that clarifies the basket of well-identified concepts encapsulated in each ranking and whether they define, or are defined by, the choice of measures;
  • greater transparency about possible data quality issues, and implementation of processes to improve data quality where possible, including institutional audit mechanisms, more rigorous statistical correction procedures, and a review of data sources;
  • better justification of weighting decisions from a measurement perspective, with the sensitivity of rankings to changes in those decisions determined and ideally presented as an interactive tool (a toy illustration of this sensitivity follows the list);
  • truncating ranks into tiers (as providers already do in some cases) in a way that conveys meaningful differences, or abandoning ranks altogether in favour of categorical ratings or other alternative systems.
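
To make the sensitivity point concrete, here is a minimal Python sketch. It is entirely hypothetical: the institutions, metrics, scores and weights are all invented, and this is not the methodology of any ranking covered in the report. It simply shows how shifting ten percentage points of weight from one metric to another can reorder a weighted composite ranking:

```python
# Toy illustration of ranking sensitivity to weighting decisions.
# All institutions, metrics, scores and weights are invented for
# illustration; none of this reflects any actual ranking methodology.

scores = {
    "Alpha U": {"outcomes": 0.90, "reputation": 0.60, "resources": 0.80},
    "Beta U":  {"outcomes": 0.70, "reputation": 0.90, "resources": 0.75},
    "Gamma U": {"outcomes": 0.85, "reputation": 0.70, "resources": 0.65},
    "Delta U": {"outcomes": 0.55, "reputation": 0.80, "resources": 0.95},
}

def rank(weights):
    """Order institutions by weighted composite score, best first."""
    composite = {
        name: sum(weights[m] * value for m, value in metrics.items())
        for name, metrics in scores.items()
    }
    return sorted(composite, key=composite.get, reverse=True)

baseline = {"outcomes": 0.5, "reputation": 0.3, "resources": 0.2}
tweaked  = {"outcomes": 0.4, "reputation": 0.4, "resources": 0.2}

print("baseline:", rank(baseline))  # ['Alpha U', 'Beta U', 'Gamma U', 'Delta U']
print("tweaked: ", rank(tweaked))   # ['Beta U', 'Alpha U', 'Gamma U', 'Delta U']
```

Even in this toy example the top two places swap, which is exactly why the recommendation of an interactive sensitivity tool seems so sensible.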

The overall finding then is that many elements of the university rankings investigated lack construct validity:

This starts with the conceptualization of what is measured, and then extends to how concepts are measured in data elements drawn from various data sources, and finally to how the data/information are processed and presented using various methodologies. Improving college ranking systems will require addressing all these areas. However, there is a limited amount of improvement possible if the public and other stakeholders prefer ordered numerical ranks given the flaws. We believe there is a need for using data to explore these limitations, develop communication strategies at the appropriate level to inform the public of these issues, and examine approaches that might improve college rankings.

Hoping for impact

I do hope that this College Ranking Systems Assessment project has some impact. And full credit to Vanderbilt for commissioning NORC to undertake such an impressive piece of work.

The recommendations, if adopted, would significantly enhance clarity and transparency in the assessment and ranking process. And the idea of convening higher education experts and other stakeholders to establish common standards, measures, and definitions for future college rankings is welcome. However, I fear it is unlikely that any of the major rankers will play ball – it’s not greatly in their interest to do so.

I do recall that part of the original motivation for the development of U-Multirank, the EU’s approach to ranking universities without ranking them, was to provide a healthy and wholesome alternative to the international league tables. It cost several million Euros to develop and maintain but never really caught on. It now appears as one of several tools hosted by the European Higher Education Sector Observatory and you can find it here if so minded. 

Nice as this may be as a benchmarking tool, it is very far from being the rankings-killer I’m sure many hoped it would be when it was launched.

Change may be in the air

There are, though, some grounds for optimism for those looking for alternatives. It seems that, on the back of this report, NORC, Vanderbilt and other universities are looking to develop a new approach. One of the principal aims of the league tables is to provide prospective students in particular with the data they need to make informed decisions about their choice of university and course of study. Providing that information in a way which is personalised, objective and easy to use may develop into a serious and credible alternative to the rankings. This group, which recently ran a seminar on this topic, is engaged in developing:

an alternative approach to providing students and their advisors with information to help their college search, drawing on existing and novel data sources and new technologies to create a personalized, engaging, and credible source for this important information. 

I can recall being involved a number of years ago in a similar effort by a number of UK universities and a big tech company. Unfortunately, the cost and effort at the time seemed significantly to outweigh the benefit, so it went nowhere. We are in different territory now though.

These rankings are now a deeply embedded part of the HE ecosystem. Until now, the only way change looked possible was if every institution refused to participate. This was always a very unlikely scenario, given that the sector in the UK rarely acts fully in concert and in the US is too large and fragmented to move meaningfully in unison.

This fresh approach does offer some hope of a meaningful alternative. However, rankings remain really big business for the companies which run them and are therefore not likely to disappear any time soon, no matter what the competition looks like. And, to paraphrase the late, great Douglas Adams, there is a theory which states that if ever anyone discovers exactly how the rankings work and why they are here, they will instantly disappear and be replaced by something even more bizarre and inexplicable. 

Whilst there are some grounds for optimism for future alternatives to the league tables, until they are fully developed the one thing I think we can be certain of is that rankers will rank, regardless.
