In recent years, rankings, ratings and other quantitative indicators have become an increasingly commonplace feature of research. More and more, they determine who deserves support and who does not, which research counts as excellent and which does not. While no one disputes that excellence is central to research, there is not always a consensus on how to define it.
Perhaps the most widespread indicator in the research world is the impact factor. This figure indicates how often, on average, the articles published in a particular journal are cited in other articles. It therefore says nothing about the content or quality of a specific article, only about the importance of the journal as a whole. So it is hardly surprising that ever more eminent researchers are casting doubt on the value of such indicators, especially where the evaluation of individual projects, or even individual researchers, is concerned.
As long ago as 2010, the Swiss physical chemist and Nobel laureate Richard Ernst warned against the misuse of citation indicators and proposed a remedy*: abolish their use entirely, and set up a publicly accessible website listing all institutions, journals and persons who continue to rely on such indicators in their evaluations. A sort of blacklist for the world of research, in other words.
Four years ago these measures seemed fairly extreme, but today they are gradually gaining majority support. At the end of 2012, for example, over 200 internationally renowned organisations and publishing houses came together in San Francisco to adopt a series of recommendations pointing in precisely this direction. Their Declaration on Research Assessment, usually known under its acronym DORA, has since been signed by over 10,000 individuals and more than 400 organisations worldwide. The SNSF became a signatory in June, thereby promising that it will not use these indicators blindly when assessing future funding applications, but with a sense of proportion.
As President of the SNSF's National Research Council, I am convinced that the SNSF has taken a step in the right direction by signing this declaration. Isolated indicators cannot fully describe a researcher's importance, nor can scientific quality be determined by means of a single yardstick.
Martin Vetterli
President of the National Research Council of the SNSF
* Richard Ernst, "The Follies of Citation Indices and Academic Ranking Lists", CHIMIA 64, 2010