UK Higher Ed loves metrics. REF. TEF. KEF. QS rankings. (I’ve written before about other types of rankings we might consider…)
Of course, there is no easy way to collate and evaluate the data necessary to make these different types of ranking systems robust. The QS rankings are compiled by surveying academics about their perceptions of other departments. I received a request to participate in one of these surveys a few years ago, took a look at it, realised how entirely inadequate my knowledge was for giving informed responses, and promptly decided never to contribute to such surveys again. Plenty has been said about how graduate earnings and employability are no indication of the teaching quality of the course the person graduated from, and neither is student satisfaction, and yet these are all factors that the TEF takes into account. I don’t know enough about the KEF to say anything about the methods it uses, but I’m sure they’re just as problematic.
But in this post, I want to talk about the REF and the proxies it uses.
The point of the REF is to grade the research outputs of individual departments as a means of determining how to allocate money to them, rewarding ones that are good and punishing ones that are…