Yearning for slow science

Ever more quantity, ever more frequency, but also ever more unreliability? Is science in crisis? Some researchers are urging us to take the pace of things down a notch. By Roland Fischer

(From "Horizons" no. 106, September)

The credit crunch! The housing crisis! The Greek crisis! But is there perhaps also a science crisis? Science is supposed to be an engine of success, but we’re hearing more and more that it’s running into major problems: scandals, data manipulation, downright fraud, and a publications roundabout that’s turning ever faster. Could it be that there’s something fundamentally wrong with the world of science today?

One thing is certain: scientific production has skyrocketed. The number of research publications has been growing exponentially – from roughly 700,000 per year in 1990 to 1.3 million in 2006. And the attention received by each publication is dwindling accordingly. Furthermore, in 2014 alone, some 400 articles had to be withdrawn after publication because they contained sloppy work. At the beginning of the millennium, that figure was still only 30.
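A quick back-of-the-envelope check (my own arithmetic, inferred from those two figures rather than taken from the article) makes the pace concrete: growth from 700,000 to 1.3 million papers over the 16 years from 1990 to 2006 implies

$$\left(\frac{1.3}{0.7}\right)^{1/16} \approx 1.039,$$

i.e., an average growth of roughly 4% per year, or a doubling of annual output about every $\ln 2 / \ln 1.039 \approx 18$ years.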

John Ioannidis, the bad boy of science statistics at Stanford University, argued plausibly in 2005 that more than half of all published findings are wrong. And in 2014, he estimated that across the whole world, some 85% of research funding – USD 200 billion – was being invested in bad research and thus wasted. Perhaps the most disturbing warning sign is that more and more research results pass all the quality controls, but then can’t be reproduced by other researchers. Here, too, spot checks have shown that in many research fields, only a minority of results are based on solid work.

This calls into question one of the theoretical foundations of the natural sciences in particular: the reproducibility of a result, independent of place, time or person. Ultimately, reproducibility is the bedrock on which every claim to objective ‘truth’ rests. If cracks appear in it, then it’s understandable to fear that the whole edifice could collapse around us.

Faulty mass production

So is science today producing only background noise instead of clear signals? In many fields, this truly seems to be the case – and its practitioners don’t hesitate to admit it. Until recently, Peter Jüni was the Head of the Clinical Trials Unit at the University of Bern, and he estimates that some 80 to 90% of current studies in the health sector are too small in scale, and/or suffer from methodological deficiencies that make them essentially unusable. But he prefers to view this from a different perspective. Within this flurry of research results, he says, at least 10 to 20% of findings provide a substantial impetus to the field. And this is an “immense gain” compared with 1950, when “our medicine was often a kind of voodoo”. Jüni still sees a certain degree of “naiveté in the medical research community”, which lets itself be deceived too easily by the supposed significance of research results. But he doesn’t see this as a fundamental problem: “If you know what you’re doing, you will still find your way easily amidst this barrage of activity”.

Antonio Ereditato is a professor of experimental particle physics and the spokesman of the Opera neutrino experiment, which measures neutrinos sent from CERN to the Gran Sasso laboratory in Italy. He, too, has had to deal with the hazards of science. Three years ago, he announced a sensational discovery: neutrinos had been observed travelling faster than light. The news spread through the international media like wildfire. Ereditato stresses that his team had always been very specific about calling this result an ‘anomaly’, and had published their findings as a pre-print article on the arXiv server. The correction came eight months later: the measurement had been the result of faulty equipment. Even with hindsight, Ereditato still thinks that the Opera team acted correctly – they had waited a long time before going public, and had eventually done so only in order to invite colleagues to discuss this “rather improbable event”.

For Ereditato, it’s quite normal that experimental findings can’t always be reproduced. He thinks that the publication of research results should always follow strict statistical rules and should be labelled accordingly – for example, as ‘indications’, ‘proofs’ or ‘discoveries’ – according to the quantitative reliability of the data. Dealing with the complexity of data is a normal part of research activity in particle physics, he says.
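Particle physics does in fact have such conventions. As a standard fact of the field (not spelled out in the article), the reliability of a result is graded by its statistical significance, measured in standard deviations ($\sigma$) of the observed effect above background:

$$3\sigma \;(p \approx 1.3\times10^{-3}) \text{ counts as ‘evidence’}, \qquad 5\sigma \;(p \approx 2.9\times10^{-7}) \text{ as a ‘discovery’}.$$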

Like Jüni in his field of medicine, Brian Nosek doesn’t believe that all fields of research are equal in the reliability of their results. He is a professor of psychology at the University of Virginia, and in 2013 he founded the Center for Open Science. Recently, he set up the ‘Reproducibility Project: Psychology’ in order to keep a check on his own field. He believes that the problem lies in “hyper-competition” and false incentives. “As a researcher, you’re not rewarded for proving the reproducibility of the results you’ve achieved. It’s far better for your career to produce as many results as possible and to publish them”.

Creating the right incentives

And so people happily keep on publishing, ever more and ever more often. The number of publications has been growing exponentially. Lutz Bornmann is a sociologist of science at the Max Planck Society in Munich, Germany, and together with Rüdiger Mutz from ETH Zurich, he has recently shown that the number of sources cited in publications has also been growing exponentially, and that since the 17th century, the growth rate has jumped considerably on three occasions. Today, the volume doubles every nine years. Whether this generates a similar growth of knowledge itself is something on which the empiricist Mutz prefers not to comment. “You’d first have to determine criteria by which to measure it”.
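Assuming steady exponential growth, a nine-year doubling time is easy to translate into an annual rate (simple arithmetic, not a figure from the study itself):

$$e^{9r} = 2 \;\Rightarrow\; r = \frac{\ln 2}{9} \approx 0.077,$$

that is, the cited literature grows by roughly 8% per year – a tenfold increase about every three decades.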

Nosek has nothing against growth per se. But transparency and reproducibility should be rewarded, not just quantity – the competitive pressure itself isn’t just going to disappear. Other efforts at reform have taken this as their starting point, too. For example, one initiative that has recently acquired a certain level of popularity is the San Francisco Declaration on Research Assessment, or DORA. It calls for research to be evaluated with a greater emphasis on the quality of each individual piece of work, rather than on journal-based metrics such as the impact factors of the journals in which the results are published.

In the Netherlands, a number of renowned researchers have called for a ‘Science in transition’ that is intended to be nothing short of a fundamental reform of science. Science, they say, has been reduced to a self-referential system in which quality is determined almost solely by bibliometric parameters, and in which societal relevance is not emphasised enough. The European Commission welcomed the Dutch initiative and, after a process of consultation, it recently proposed guidelines for an ‘open science’ that is intended to be both more transparent and better anchored in society; digital means are to be utilised to achieve these goals. It is also hoped that the initiative will help us to keep up with the current exponential growth, and in the process attain quicker, more efficient means of knowledge production.

Slowing down

But do we really need to go even quicker? There’s a growing resistance to this. By analogy with the slow food trend that aims to increase our enjoyment when we eat, a ‘slow science’ movement has taken off in recent years, with manifestos and articles appearing in different countries arguing for a more cautious, more sedate approach to science. However, there’s no real unity about what such slow science would essentially entail. None of its proponents yearns for a nostalgic return to some putative perfect world of yore. What’s certain, however, is that many researchers today feel that they can no longer fulfil their proper mission. Ulrike Felt, an Austrian social scientist at the University of Vienna, believes that this is an expression of a phenomenon taking place in society as a whole: a changed relationship to time itself. “It is fundamentally a lack of time that creates a feeling of crisis”, says Felt. In recent decades, our temporal structures have changed, she says, and this is perceived in the form of stress and acceleration.

This is also something that has been observed by Santo Fortunato from the University of Helsinki, whose group recently published the article ‘Attention decay in science’. They claim that research projects are being forgotten at an ever quicker rate because they are quickly submerged by the next wave of publications. Fortunato would also like research authorities to change their views, and he would like ways and means to be found so that we might place quality above quantity once more. Felt makes a more general observation, namely that politics should also be about tending the temporal landscape: there is too little time for reflection today, and knowledge production suffers from the resulting loss of our ability to spend longer periods of time on a single task.

Just how the goal of slow science might be achieved, however, remains unclear. In this regard, Ereditato poses a fundamental question: “Even if we ultimately decided that we need a slower science, where’s the brake?”

Every result is a publication

“Stories can wait. Science cannot”. This motto sums up a plan to revolutionise the ways and means of publishing scientific results. The platform ScienceMatters has been devised by Lawrence Rajendran, a systems biologist at the University of Zurich, and it’s due to come online in September 2015.

The idea behind this completely digital network is that researchers should no longer wait until all the individual elements of their work come together to produce a complete picture, or until they can derive a neat conclusion from it. The individual components of an article – in other words, individual observations – should be placed before the international research community for them to examine. The researchers could then get valuable feedback from other experts while they are still busy with their actual research. This should allow them to bring their scientific ‘story’ to its conclusion in peace, and with much better arguments.

Rajendran also hopes that this will help to counter scientific misconduct: researchers would feel less tempted to squeeze their data into an argument that it doesn’t really fit.

Publishing on ScienceMatters will be as easy as creating a Facebook profile. Rajendran believes that there is a large potential pool of able researchers in developing countries who could contribute individual components to a large digital science network such as this. Others will perhaps write the ‘story’, but even people without a university degree could help with data collection. ScienceMatters could thus contribute to a diversification of the research profession – and also lead to better reproducibility. “Many scientists are good at seeing the big picture and are born discoverers, while others are meticulous in checking the work of others. Everyone should do what he or she does best – and get proper recognition for it”.

Quality control is to be organised accordingly. Just as in a social network, everyone will be able to like, evaluate and comment on contributions and thereby influence the status of the user. In this manner, important observations will reliably float to the surface, believes Rajendran. The only ‘upstream’ measure will be a check, carried out by the editorial team, that will weed out everything that does not meet the necessary standards.