Wednesday, August 14, 2013

Competition and controversy in global rankings



Higher education is becoming more competitive by the day. Universities are scrambling for scarce research funds and public support. They are trying to recruit students who are increasingly suspicious and cynical. The spectre of online education is haunting all but the most confident institutions.


Rankings are also increasingly competitive. Universities need validation that will attract students and big-name researchers and justify appeals for public largesse. Students need guidance about where to take their loans and scholarships. Government agencies have to figure out where public funds are going.

It is not just that the overall rankings are competing with one another, but also that a range of subsidiary products have been let loose. Times Higher Education (THE) and QS have released Young University Rankings within days of each other. Both have published Asian rankings. THE has published reputation rankings, and QS has published Latin American rankings. QS’s subject rankings have been enormously popular because they provide something for almost everybody.

Few countries are without a university somewhere that can claim to be in the top 200 for something, even though these rankings sometimes manage to find quality in places that lack even departments in the relevant fields.

QS’s academic survey

Increasing competition can also be seen in the growing vehemence of the criticism directed against and between rankings, although there is one ranking organisation that so far seems exempt from criticism. The QS academic survey has recently come under fire from well-known academics, although it has been scrutinised by University Ranking Watch and other blogs since 2006.

It has been reported by Inside Higher Ed that QS had been soliciting opinions for its academic survey from a US money-for-surveys company that also sought consumer opinion about frozen foods and toilet paper.

The same news story revealed that University College Cork had been trying to find outside faculty to nominate the college in this year’s academic survey.

QS has been strongly criticised by Professor Simon Marginson of the University of Melbourne, who assigns it to a unique category among national and international ranking systems, saying, “I do think social science-wise it’s so weak that you can’t take the results seriously”.

This in turn was followed by a heated exchange between Ben Sowter of QS and Marginson.

Although it is hard to disagree with Marginson’s characterisation of the QS rankings, it is strange he should consider their shortcomings to be unique.

U-Multirank and the Lords

Another sign of intensifying competition is the response to proposals for U-Multirank. This is basically a proposal, sponsored by the European Union, not for a league table in which an overall winner is declared but for a series of measures that would assess a much broader range of features, including student satisfaction and regional involvement, than rankings have offered so far.

There are obviously problems with this, especially with the reliance on data generated by universities themselves, but the disapproval of the British educational establishment has been surprising and perhaps just a little self-serving and hypocritical.

In 2011, the European Union Committee of the House of Lords took evidence from a variety of groups about various aspects of European higher education, including U-Multirank. Among the witnesses was the Russell Group of elite research-intensive universities, formed after many polytechnics were upgraded to universities in 1992.

The idea was to make sure that research funding remained in the hands of those who deserved it. The group, named after the four-star Russell Hotel in a “prestigious location in London” where it first met, is not an inexpensive club: recently the Universities of Exeter, Durham and York and Queen Mary College paid £500,000 apiece to join.

The Lords also took evidence from the British Council, the Higher Education Funding Council for England, the UK and Scottish governments, the National Union of Students and Times Higher Education.

The committee’s report was generally negative about U-Multirank, stating that the Russell Group had said “ranking universities is fraught with difficulties and we have many concerns about the accuracy of any ranking”.

“It is very difficult to capture fully in numerical terms the performance of universities and their contribution to knowledge, to the world economy and to society,” the report said. “Making meaningful comparisons of universities both within, and across, national borders is a tough and complex challenge, not least because of issues relating to the robustness and comparability of data.”

Other witnesses claimed there was a lack of clarity about the proposal’s ultimate objectives, that the ranking market was too crowded, that it would confuse applicants and be “incapable of responding to rapidly changing circumstances in institutional profiles”, that it would “not allow different strengths across diverse institutions to be recognised and utilised” and that money was better spent on other things.

The committee also observed that the UK Government’s Department for Business, Innovation and Skills was “not convinced that it [U-Multirank] would add value if it simply resulted in an additional European ranking system alongside the existing international ranking systems” and that the minister struck a less positive tone when he told the committee that U-Multirank could be viewed as “an attempt by the EU Commission to fix a set of rankings in which [European universities] do better than [they] appear to do in the conventional rankings”.

Just why should the British government be so bothered about a ranking tool that might show European (presumably they mean continental here) universities doing better than in existing rankings?

Finally, the committee reported that “(w)e were interested to note that THES (sic) have recently revised their global rankings in 2010 in order to apply a different methodology and include a wider range of performance indicators (up from six to 13)”.

The committee continued: “They told us that their approach seeks to achieve more objectivity by capturing the full range of a global university's activities – research, teaching, knowledge transfer and internationalisation – and allows users to rank institutions (including 178 in Europe) against five separate criteria: teaching (the learning environment rather than quality); international outlook (staff, students and research); industry income (innovation); research (volume income and reputation); and citations (research influence).”

It is noticeable that the Lords, if they were even aware of it, showed not the slightest concern about the THE rankings’ apparent discovery in 2010 that the world’s fourth most influential university for research was Alexandria University.

The complaints about U-Multirank seem insubstantial, if not actually incorrect. The committee’s report says the rankings field is overcrowded. Not really: there are only two international rankings that make even the slightest attempt to assess anything to do with teaching. The THE World University Rankings included only 178 European universities in 2011, so there is definitely a niche for a ranking that aims to include up to 500 European universities and applies a broader range of criteria.

All of the other complaints about U-Multirank, especially reliance on data collected from institutions, would apply to the THE and QS rankings, although perhaps in some cases to a somewhat lesser extent. The suggestion that U-Multirank is wasting money is ridiculous; €2 million would not even pay for four subscriptions to the Russell Group.

Debate

In the ensuing debate in the Lords there was predictable scepticism about the U-Multirank proposal, although Baroness Young of Hornsey was quite uncritical about the THE rankings, declaring that “(w)e noted, however, that existing rankings, which depend on multiple indicators such as the Times Higher Education world university rankings, can make a valuable contribution to assessing the relative merits of universities around the world”.

In February, the League of European Research Universities, or LERU, which includes Oxford, Cambridge and Edinburgh, announced it would have nothing to do with the U-Multirank project.

Its secretary general said “(w)e consider U-Multirank, at best an unjustifiable use of taxpayers’ money and at worst a serious threat to a healthy higher education system”. He went on to talk about “the lack of reliable, solid and valid data for the chosen indicators in U-Multirank”, about the comparability between countries, about the burden put upon universities to collect data and about “the lack of ‘reality-checks’ in the process thus far”.

In May, the issue resurfaced when the UK Higher Education International Unit, which is funded by British universities and various government agencies, issued a policy statement that repeated the concerns of the Lords and LERU.

Since none of the problems with U-Multirank are in any way unique, it is difficult to avoid the conclusion that higher education in the UK is turning into a cartel and is extremely sensitive to anything that might undermine its market dominance.

And what about THE?

What is remarkable about the controversies over QS and U-Multirank is that Times Higher Education and Thomson Reuters, its data provider, have been given a free pass by the British and international higher education establishments.

Imagine what would happen if QS had informed the world that, in the academic reputation survey, its flagship indicator, the top position was jointly held by Rice University and the Moscow State Engineering Physics Institute (MEPhI)! And that QS argued this was because these institutions were highly focused, that they had achieved their positions because they had outstanding reputations in their areas of expertise and that QS saw no reason to apologise for uncovering pockets of excellence.

Yet THE has put Rice and MEPhI at the top of its flagship indicator, field- and year- normalised citations, given very high scores to Tokyo Metropolitan University and Royal Holloway London among others, and this has passed unremarked by the experts and authorities of university ranking.

For example, a recent comprehensive survey of international rankings by Andrejs Rauhvargers for the European University Association describes the results of the THE reputation survey as “arguably strange” and “surprising”, but it says nothing about the results of the citation indicator, which ought to be much more surprising.

Let us just look at how MEPhI got to be joint top university in the world for research influence, despite its lack of research in anything but physics and related fields. It did so because one of its academics was a contributor to two multi-cited reviews of particle physics. This is a flagrant case of privileging the citation practices of one discipline, something Thomson Reuters and THE supposedly consider unacceptable. The strange thing is that these anomalies could easily have been avoided by a few simple procedures which, in some cases, have been used by other ranking or rating organisations.

They could have used fractionalised counting, for example, the default option in the Leiden ranking, so that MEPhI would get 1/119th credit for its 1/119th contribution to the Review of Particle Physics for 2010. They could have excluded narrowly specialised institutions. They could have normalised for five or six subject areas, which is what Leiden University and Scimago do. They could have used several indicators for research influence drawn from the Leiden menu.
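To make the arithmetic concrete, here is a minimal sketch of the difference between full and fractional counting. Only the 1/119th share comes from the case above; the citation figure is invented.

```python
# Sketch: full versus fractional counting of a multi-contributor review.
# The citation count is hypothetical; 119 is the denominator of the
# 1/119th share mentioned above for the Review of Particle Physics.

def full_counting(citations):
    """Full counting: every affiliated institution is credited with the whole citation count."""
    return float(citations)

def fractional_counting(citations, n_contributors):
    """Fractional counting: each contributor is credited with its proportional share."""
    return citations / n_contributors

review_citations = 3000  # hypothetical
print(full_counting(review_citations))             # 3000.0 credited to each institution
print(fractional_counting(review_citations, 119))  # about 25.2 credited to each institution
```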

There are other things they could do that would not have had much effect, if any, on last year’s rankings, but that might pre-empt problems this year and later on. One is to stop counting self-citations, a step already taken by QS. This would have prevented Alexandria University from getting into the world’s top 200 in 2010, and it might prevent a similar problem next year.
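As a rough illustration of what excluding self-citations involves (definitions vary; this sketch treats a citation as a self-citation when the citing and cited papers share an author, and all the names and links are invented):

```python
# Sketch: count citations to a set of papers while discarding self-citations,
# defined here as the citing and cited papers sharing at least one author.
# Authors and citation links are invented for illustration.

cited_papers = {
    "paper_A": {"Author 1"},
    "paper_B": {"Author 2", "Author 3"},
}

citation_links = [
    {"cites": "paper_A", "citing_authors": {"Author 1"}},              # self-citation
    {"cites": "paper_A", "citing_authors": {"Author 4"}},
    {"cites": "paper_B", "citing_authors": {"Author 3", "Author 5"}},  # self-citation
    {"cites": "paper_B", "citing_authors": {"Author 6"}},
]

def non_self_citations(cited, links):
    """Count citation links whose citing authors share nobody with the cited paper."""
    return sum(1 for link in links
               if not (link["citing_authors"] & cited[link["cites"]]))

print(non_self_citations(cited_papers, citation_links))  # 2 of the 4 citations are counted
```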

Another sensible precaution would be to count only one affiliation per author. This would prevent universities benefitting from signing up part-time faculty in strategic fields. Something else they should think about is the regional adjustment for the citations indicator, which has the effect of giving universities a boost just for being in a low-achieving country.

To suggest that two universities in different countries with the same score for citations are equally excellent – when, in fact, one of them has merely benefitted from being in a country with a poor research profile – is very misleading. It is in effect conceding, as John Stuart Mill said of a mediocre contemporary, that its eminence is “due to the flatness of the surrounding landscape”.
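To see how an adjustment of this kind can flatten real differences, here is a stylised calculation. The formula used here (dividing a university’s normalised citation impact by the square root of its country’s average impact) and all the figures are illustrative assumptions, not the published Thomson Reuters procedure.

```python
import math

# Stylised regional adjustment: divide a university's field-normalised citation
# impact by the square root of its country's average impact. The formula and
# the figures below are assumptions for illustration only.

def regionally_adjusted(university_impact, country_average_impact):
    return university_impact / math.sqrt(country_average_impact)

# University X: raw impact 1.0 in a country whose average impact is 1.0
# University Y: raw impact 0.5 in a country whose average impact is 0.25
print(regionally_adjusted(1.0, 1.0))   # 1.0
print(regionally_adjusted(0.5, 0.25))  # 1.0 -- the same adjusted score from half the raw impact
```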

Finally, if THE and Thomson Reuters are not going to change anything else, at the very least they could call their indicator a measure of research quality instead of research influence. Why have THE and Thomson Reuters not taken such obvious steps to avoid such implausible results?

Probably it is because of a reluctance to deviate from their InCites system, which evaluates individual researchers.

THE and Thomson Reuters may be lucky this year. There will be only two particle physics reviews to count instead of three so it is likely that some of the places with inflated citation scores will sink down a little bit.

But in 2014 and succeeding years, unless there is a change in methodology, the citations indicator could look very interesting and very embarrassing. There will be another edition of the Review of Particle Physics, with its massive citations for its 100-plus contributors, and there will be several massively cited multi-authored papers on dark matter and the Higgs boson to skew the citations indicator.

It seems likely that the arguments about global university rankings will continue and that they will get more and more heated.
