Monday, February 24, 2014

Journal rankings and Expectations

Publishing, to me, is getting the message out. I want to contribute to a better world via my work. I do this by making methods more precise and by reducing error. You probably spotted that theme in my publications. The impact of my work is reflected by how often people reuse it (extend it, build on it, ...). A flawed proxy for that is the number of citations of my publications; better would be the number of projects that use my work. For example, tools that use the CDK (which has many more authors!), like Bioclipse, AMBIT, LICCS (CDK in Excel), KNIME, and many more. Actually, the number of times these projects get cited indirectly also contributes to the impact of my work.

"We" don't count that. Instead, "we" tend to focus on the Journal Impact Factor (JIF). There is plenty of material around that discusses how flawed that is in assessment, particularly for assessing scientists. But I would rather explain now why I still think about it: because it is part of how I, our research group, our research institute, and our university are assessed. I do not have to agree with the overhyped role of the JIF, but I cannot go around it either. Well, actually, I can to some extent.

At the moment, I have identified the following criteria by which our work is assessed, e.g. by national funding agencies and organizations that want to ensure good science in The Netherlands:
  1. papers in journals with a JIF >= 5
  2. papers in journals that rank in the Journal Citation Reports TOP1, TOP10, and TOP25
I have seen worse: I have attended universities where the JIF is directly involved in the calculation. I have also seen better, and have attended universities that actually count the number of times my papers are cited; I am still proud to have five papers among the top 5% most cited papers.

Now, when using these guidelines in practice, you find some interesting facts. For example, only very few journals are TOP1. In fact, Science is not; it is (JIF >= 5 && TOP10). Not so many journals have a JIF >= 10, but >= 5 is not that uncommon. The biggest problem here is that this cut-off is field-specific: >= 5 is trivial in biology, but hard in chemistry.

Combine all that, and you get a situation where many Open Access journals (and OA matters to me) are JIF >= 5 && TOP10, putting them, for the rankings, on par with Science.

Now, it gets better. Taking the above assessment rules into account, have a look at PLoS Biology. It has a JIF >= 5 (>= 10 even) and is TOP1! Thus, it outranks Science. Of course, there is also the implicit rule that Nature and Science go above all, but maybe the assessors should start thinking about what they really care about, and what they should really expect from scholars.
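To make the comparison concrete, here is a minimal sketch of the two assessment rules above as a scoring function. The journal classifications come from this post (Science: JIF >= 5 && TOP10; PLoS Biology: JIF >= 5 && TOP1); the point values per tier are my own illustrative assumption, not any agency's official formula.

```python
def assessment_score(jif_at_least_5: bool, citation_tier: str) -> int:
    """Score a journal under the two rules: JIF >= 5, and TOP1/TOP10/TOP25.

    The weights (3/2/1 points per tier, +1 for JIF >= 5) are illustrative
    assumptions; the rules themselves are the ones listed above.
    """
    tier_points = {"TOP1": 3, "TOP10": 2, "TOP25": 1}.get(citation_tier, 0)
    return tier_points + (1 if jif_at_least_5 else 0)

# Classifications as stated in the post:
science = assessment_score(jif_at_least_5=True, citation_tier="TOP10")      # 3
plos_biology = assessment_score(jif_at_least_5=True, citation_tier="TOP1")  # 4

assert plos_biology > science  # PLoS Biology outranks Science under these rules
```

Whatever weights you pick, as long as TOP1 scores above TOP10, the conclusion is the same: by these two criteria alone, PLoS Biology comes out ahead of Science.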

Now, go read my latest preprint instead of wasting your time on rankings.