Quantifying science - the SNSF signs DORA

In recent years, rankings, ratings and other quantitative indicators have become an increasingly commonplace feature of research. To an ever greater extent, they determine who deserves support and who does not, which research has achieved excellence and which has not. While no one disagrees that excellence is central to research, there is not always a consensus on how to define it.

Perhaps the most widespread indicator in the research world is what is known as the impact factor. This is a figure that indicates how often, on average, the articles published in one particular journal are cited in other articles. It therefore says nothing about the content or quality of a specific article, only about the importance of the journal as a whole. So it is hardly surprising that a growing number of eminent researchers are casting doubt on the value of such indicators, especially where the evaluation of individual projects – or even individual researchers – is concerned.
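By way of illustration, the most common variant is the two-year impact factor: a journal's figure for a given year is the number of citations received that year by the articles it published in the two preceding years, divided by the number of citable items it published in those years. A hypothetical journal whose 2011 and 2012 articles were cited 1,000 times in 2013, across 250 citable items, would thus have a 2013 impact factor of 1,000 / 250 = 4.0.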

As long ago as 2010, the Swiss physical chemist and Nobel laureate Richard Ernst warned against the misuse of citation indicators and made suggestions for improvement*. He called for their use to be abolished altogether, and for a publicly accessible website listing all institutions, journals and persons that continue to rely on such indicators in their evaluations: a sort of blacklist for the world of research, in other words.

Four years ago these measures seemed fairly extreme, but today they are gradually gaining majority support. At the end of 2012, for example, over 200 internationally renowned organisations and publishing houses came together in San Francisco to adopt a series of recommendations that go precisely in this direction. The San Francisco Declaration on Research Assessment, usually known by its acronym DORA, has now been signed by over 10,000 individuals and more than 400 organisations worldwide. The SNSF became a signatory in June, thereby promising that it will not use these indicators blindly when assessing future funding applications, but with a sense of proportion.

As President of the SNSF's National Research Council, I am convinced that the SNSF has taken a step in the right direction by signing this declaration. Isolated indicators cannot fully describe a researcher's importance, nor can scientific quality be determined by means of a single yardstick.

Martin Vetterli
President of the National Research Council of the SNSF


* The Follies of Citation Indices and Academic Ranking Lists, Richard Ernst, CHIMIA 64, 2010

1 comment

  • Dominik Moser

    13 August 2014 09:37:04

    Impact factors certainly have their problems, particularly as they measure the journal more than the researcher. That said, I feel that many critics of such measures would like a world where quantifiable, generally understandable criteria of scientific value play no central role in advancement at all, and where the decision process is more flexible. I have to caution that such a world is also one where petty politics has even more power in the appointment process than it already does. In such a world, any such commission can easily find arguments to do whatever it wants, whether that means promoting the genuinely best researcher, the researcher who happens to share the opinions of the commission president, or the researcher who happens to be politically convenient. From my personal experience on appointment commissions, the danger that this fosters cronyism is very real.
