Peer review under review

Online discussions, transparency, credits for experts: the scientific community is exploring ways to improve the peer-review procedure. By Sven Titz

(From "Horizons" no. 106, September)

Cancer can now be combatted using a chemical substance derived from lichen. That was the claim of a study submitted to 304 journals two years ago by the science journalist John Bohannon, writing under a pseudonym. More than half accepted the article for publication. But in October 2013, Bohannon revealed in the journal Science that it was a ‘spoof paper’, written to expose the journals. In a large proportion of cases, then, the peer-review process had failed completely.

Complaints about shortcomings in peer review are as old as the process itself (see ‘Problems with peer review’). Falsifications are overlooked, original work is rejected, and shoddy work is accepted. Some peer reviewers give full rein to their prejudices about the authors’ background or gender. And last but not least, the often lengthy peer-review process eats up valuable time. Today, however, several new models and trends in peer reviewing promise to remedy the situation – or at least to provide some relief.

Digitisation has made possible a plethora of models for a transparent, discursive culture of assessment. Peer review is traditionally anonymous, but today some reviewers have started putting their names to their reviews, and an increasing number of new, interactive forms of discussion are being tried out across the whole publication process.

A journal fond of discussion

A typical example of this development is the open-access journal Atmospheric Chemistry and Physics (ACP). ACP’s publication process has two steps. First, studies submitted are quickly checked for their plausibility and then placed
online in the forum ‘ACP discussions’. Besides the regular expert reviewers, other interested scientists may participate in the ensuing public debate provided they’re registered users. The authors’ answers are also published
straightaway. The expert reviewers take the whole debate into consideration when writing their reports. If the study survives this process, it’s taken up to the second level and published in the actual journal as a ‘final paper’.

An open-review process kills several birds with one stone, explains the chief editor, Ulrich Pöschl, an Austrian citizen who works at the Max Planck Institute for Chemistry in Mainz, Germany. New findings are not held up by a slow peer-review process that can otherwise drag on through several rounds of review. Instead, the discussion papers go straight into scientific circulation. The ensuing interactive peer review then adds lustre to the ‘final papers’, which are judged to be of higher quality. To Pöschl, the most important aspect is the evaluation that follows publication. New metrics are a real breakthrough in achieving better quality assurance, he says, referring to the frequency of downloads and of comments on articles. These new measurement parameters allow the journal to compete with the Science Citation Index, a well-known article database.

Meanwhile, 15 journals have come together under the auspices of the European Geosciences Union with a model similar to that of ACP. “We’ll see what the competition brings with it”, says Pöschl, with an eye on these other journals.

The pitfalls of transparency

Up to now, only a few journals have been working with an open peer-review process. It’s primarily the humanities and social sciences that prefer anonymous reviewing. “There’s a widespread tendency towards more transparency, however”, says Martin Reinhart, a Swiss sociologist and assistant professor for the Sociology of Science and Research on Evaluation at the Institute for Social Sciences at the Humboldt University in Berlin. But he doesn’t see this as entirely positive. Transparency, he says, doesn’t automatically increase quality. When reviewers and the authors of studies are mutually dependent, there is a danger that people won’t be as critical as they need to be. This is why he believes that anonymous peer reviewing should remain legitimate. Reinhart argues that it’s in the interest of science to have a great variety of peer-review systems.


It’s not just the editors of journals who are trying out new models, but independent companies too. The Finnish start-up Peerage of Science, for example, offers to take on the peer-review process for journals. One important aspect of its system is ‘open engagement’, explains Janne Seppänen, one of its founders. The identity and competence of the reviewers are checked once, right at the beginning of the process. Afterwards, they can decide freely which of the submitted studies they would like to review – in other words, they’re not chosen by editors to review specific articles. Furthermore, the reviews themselves are assessed. “Of course, it’s important to ensure that this assessment is independent of the decision on the article itself”, says Seppänen.

At present, some 20 journals are participating in this model, most of them from the life sciences. In return, they get access to a pool of studies that have already been reviewed. All the journals of the Springer publishing house also have limited access to the pool. If the authors of a study are offered publication by a journal, they can either accept or refuse. The fact that several journals have access to the pool can improve the authors’ chances of publication.

Furthermore, the authors are spared having to put their article through several separate rounds of peer review – rounds in which it might even land with the same reviewer more than once. The journals only have to pay if they accept an article.

The Peerage of Science model reduces the number of reviews needed. But the same effect can be achieved by other means. For example, articles are often rejected on formal grounds – because they are too long, say, or because their focus doesn’t suit the journal to which they’ve been submitted. In such cases, some editors pass the reviews on to similar journals. This practice was adopted in 2007 by the Neuroscience Peer Review Consortium, an association of journals, and it has proved successful: after assessment, the consortium transfers some 200 reviews per year to other journals.

Open debate

Besides these attempts to reform the classical review process, there are more and more experiments with a kind of peer review after publication. On the website PubPeer, for example, scientists exchange views about the value of individual studies. “Very interesting discussions take place about the reliability of the research”, says Reinhart. Until now, such discussions have mostly taken place behind closed doors. Of course, they can also get out of hand: a public platform can in principle serve to discredit people, not least in the case of PubPeer, which does not compel participants to give their names. But Reinhart has the impression that the research community is able to address this problem itself through self-regulation.

Whereas studies are occasionally subjected to harsh criticism on PubPeer, things are less controversial on the platform Faculty of 1000 (F1000). One of the things it offers researchers in the life sciences is a kind of selection service: exceptional articles are recommended on the platform by a virtual ‘faculty’ of a thousand experts. This second level of peer review is intended to guarantee that important articles won’t sink without trace in the current flood of publications.

What does the reviewer get out of this?

All these new variants have one thing in common: peer review remains dependent on the collaboration of the specialist community. Because the number of journals has increased with digitisation, editors get more and more refusals when
they ask a researcher for a review. One of the reasons for this is that reviewers get little recognition.

In principle, every scientific author profits from his or her peers and should at some point give back what he or she has gained, says Erik von Elm of the Institute of Social and Preventive Medicine at Lausanne University Hospital. But some show no such solidarity and refuse to take part in peer reviewing. This is why incentives are needed. “What we still lack is recognition within the system for what reviewers do”. Up to now, he says, it’s been publishing your own articles that has counted in furthering a career.

In medicine, the problem has already been partly solved, says Ana Marusic, a professor at the School of Medicine of the University of Split and a member of the Board of the European Association of Science Editors. So-called CME points (Continuing Medical Education points) are now awarded for carrying out peer reviews. Medics have to collect a certain number of these points every year in order to keep their licence. But many other scientific fields lack such a system.

It is possible that other initiatives might provide the answer here. Several journals publish an annual list of their best reviewers. Elsevier awards certificates to exceptional reviewers. And the reviews that appear on the F1000 platform have recently begun to be given digital identifiers from ORCID (Open Researcher and Contributor ID). This ensures that the work of reviewers isn’t forgotten.

But what is also lacking is any kind of training to become a reviewer. Young scientists are often thrown in at the deep end, writing their first reviews without any guidance at all. “At university, there are compulsory courses for teaching, but not for peer reviewing”, says von Elm. Initiatives to remedy this are still few and far between. Basically, the peer-review system is like democracy, says von Elm: everyone knows that it has weaknesses, but no one has yet invented a better one.

Publishing on different levels

It was partly because of the problems with peer review that researchers in some fields began placing their studies on open publication servers several years ago. Researchers in physics, mathematics and data analysis have been avidly using the arXiv.org server to publish studies without peer review since 1991, and since 2013 bioRxiv.org has served the same purpose in biology. They do this primarily because these servers allow for the swift exchange of information. Many of the studies archived there have later been published in reviewed journals.

According to Pöschl, it is already clear that we shall in future have three basic levels of scientific publication. First, there’ll be publication servers such as arXiv.org without any peer review. Second, there’ll be open-access specialist journals such as BMC Medicine or ACP, which are characterised by transparency and a discursive culture. Third, there’ll be top-quality interdisciplinary journals such as Nature and Science. These last will perhaps serve only as a showcase, their function being to boost studies of public relevance from levels one and two and raise them above the general mass of publications. Ultimately, what will count is the variety of publication models, says Pöschl, because they fulfil different tasks and complement each other.

The science journalist Sven Titz lives in Berlin and writes regularly for the Neue Zürcher Zeitung, the Tagesspiegel and Welt der Physik.