
Demise of the Australian ERA Journal Rankings

Posted on 3 June 2011 by Bradshaw @conservbytes

Earlier this week, Australian Senator Kim Carr (Minister for Innovation, Industry, Science and Research) announced the removal of the somewhat controversial ERA rankings for scientific journals.

Early last year I posted about the Excellence in Research for Australia (ERA) journal rankings for ecology and conservation journals. To remind you, the ERA has ranked more than 20,000 unique peer-reviewed journals, each given a single quality rating – and they are careful to say that “A journal’s quality rating represents the overall quality of the journal. This is defined in terms of how it compares with other journals and should not be confused with its relevance or importance to a particular discipline.”

Now, after much to-ing and fro-ing about what the four rankings (A*, A, B & C) actually mean, Senator Carr has announced that he’s dumping them on the advice of the Australian Research Council.

It seems like a lot of wasted effort, and I’m sure most would agree that no ranking system is perfect – but some form of ranking is necessary, no matter which way you slice it. Currently, the standard benchmark of a journal’s ‘quality’ is the ISI Impact Factor – a highly controversial and somewhat simplistic metric based on the ‘average’ citation rate of the papers within it. What I liked about the ERA rankings was that they seemed to do a pretty good job of ranking the journals within a particular discipline.

For example, a common lament among marine scientists is that marine science journals tend to be poorly ranked according to citations (e.g., Impact Factors), yet there are clearly ‘high-impact’ and ‘low-impact’ journals within that discipline. It was interesting to note that one such journal in which my students and I often publish – Marine Ecology Progress Series – received an ERA ‘A’ ranking, which, as far as I’m concerned, was a fair assessment of that journal’s ‘internal’ reputation. I’m sure there are many, many other examples within other disciplines.

So for me it was a way of vindicating some publication trends within particular disciplines. Despite any ranking system’s limitations, I felt the ERA rankings roughly emulated my impression of journal quality and impact – something Impact Factors do only partially (if at all). Therefore, I have to admit I’m disappointed they’ve been dumped.

That said, it’ll be interesting to see how they’ll be replaced. I know that the ERA assessments of university quality were based more or less entirely on citation data, so I dare say any new iteration will be citation-based as well. Back to Impact Factors, I guess.

CJA Bradshaw
