Publish and perish

John P. A. Ioannidis, epidemiologist. © Severin Nowacki

Research funding organisations should place less importance on results and instead insist that researchers make their data generally available, says the epidemiologist John P. A. Ioannidis. By Ori Schipper

Horizons: Prof. Ioannidis, at a recent appearance before the National Research Council you made no bones about criticising the current scientific system. Is it in a state of crisis?

Prof. Ioannidis: You can’t put it in such general terms. Science is more productive than ever, but it’s suffering from a credibility problem. Many published results are simply untrue, though it’s rare for them to have been falsified intentionally. More commonly, the experimental design is flawed, or invalid statistical methods have been used. Not all scientific disciplines are equally affected by this problem, but every researcher should know the state of affairs in his own field. Some disciplines have improved how they monitor research and now produce credible, useful results. Other disciplines have made less progress with quality control. But if you don’t scrutinise your results, you can’t know whether they’re right or wrong.

You are a researcher too. How have you come to question the very system in which you yourself participate?

For me, it’s not a matter of questioning the whole scientific system. I have simply come across problems and mistakes that are widespread in both my own work and that of colleagues. Most results in the biomedical field that have supposedly been shown to be statistically significant have either been exaggerated or are simply wrong. For example, various hormone and vitamin supplements added to our food have been claimed to possess curative properties or anti-cancer effects. But these claims don’t stand up to examination in larger-scale studies. So I began to carry out empirical evaluations to see how results are obtained, what they are, whether they are scrutinised and, if so, whether we find the same results when the tests are repeated. I’m not engaging in a fundamental critique. I just want to show where problems occur and how they can be rectified.

But these problems affect many other researchers too. Are you simply braver than the others, or more persistent in speaking out?

No, I don’t think that it’s got anything to do with courage. It’s more about my own research interests. Just as others are interested in the flight of birds or separation anxiety, I’m fascinated by questions about research. I’m open to discussion and have worked with over 2,000 other scientists. And I’m fully aware that mistakes are bound to lie dormant in my own work, too.

In your opinion, research funding organisations should expect less spectacular results. Are you calling for more humility?

Yes. And even though we’re all interested in big discoveries, you can’t force them. Of course there are breakthroughs time and again. But if you plan your trials well and carry them out properly, then anything you discover has a better chance of being the real deal. If researchers risk not getting funding because they can’t promise important results, then there’s also a risk that they’ll claim to have made major findings even when their results are less significant. Research funding organisations should therefore place less importance on expecting results and more on the rigour of the methods employed and the overall quality of the research. They ought to insist that scientists make their experimental data available and accessible to the public.

But it’s human nature to believe that what we do and support is significant and useful.

That may be, but the system should allow scientists to say: "I’ve worked hard and conscientiously, but nevertheless I’ve come up with nothing useful or practicable in recent years". In my opinion that would be very honest. As long as you’ve done your work well and correctly, then you shouldn’t be penalised for failing to produce spectacular results.

Do you see any signs that the scientific system is moving in a more honest direction?

There are reasons to hope so. No doubt most researchers accept that honesty is the best policy in the long run. And yet they are still subject to immense competitive pressure and have to assert themselves through their work. But the importance of their contribution should lie less in their results and more in criteria such as good experimental design and reproducibility, for these are at the heart of scientific endeavour.

More and more is being published all over the world, which is why it is increasingly difficult to monitor the quality of research across the board.

That’s true. But it doesn’t bother me that the number of publications is increasing. That’s actually a good thing. Scientific production shouldn’t be shrinking. The problem is that mistakes can spread once they’re published. For example, ten years ago the first studies were published in Europe and the USA that looked for disease characteristics in our genetic makeup. They associated specific genes with smoking, depression, obesity or asthma. Only one percent of these findings were later confirmed in larger-scale studies. But today, 60% of the contributions to the literature on genetic meta-analysis come from China – and they are worthless because they make the same mistakes that we made in Europe and the USA, right at the start.
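
To see why such a low confirmation rate is almost inevitable when very few of the tested associations are real, here is a short worked calculation in the spirit of Ioannidis’s published arguments. It is an editorial sketch: the prior probability, statistical power and significance threshold below are assumed, illustrative values, not figures given in the interview.

\[
\mathrm{PPV} = \frac{(1-\beta)\,\pi}{(1-\beta)\,\pi + \alpha\,(1-\pi)} = \frac{0.5 \times 0.001}{0.5 \times 0.001 + 0.05 \times 0.999} \approx 0.01
\]

Here \(\pi = 0.001\) is the assumed prior probability that a tested gene–disease association is real, \(1-\beta = 0.5\) the assumed statistical power, and \(\alpha = 0.05\) the significance threshold. On these assumptions, only about one “significant” finding in a hundred is a true positive – the same order of magnitude as the one-percent confirmation rate mentioned above.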

If mistakes are passed on, then they also multiply over time. Does the process of self-correction in the sciences no longer function properly?

It functions, but what matters is how quickly. For a long time we thought that the sun orbited the earth; we needed 2,000 years to correct this misconception. But today it’s already a problem if it takes two years to discover and correct a mistake, because there are far more scientists out there than ever before. They’re doing more research and publishing more – but they’re basing their work on the results of their predecessors and colleagues, some of which are actually wrong. My primary concern is to speed up the process of self-correction in the scientific system. This can only happen if results are checked swiftly and independently. In many fields, self-correction is not yet efficient enough. Until recently, some disciplines, such as psychology, paid little attention to the reproducibility of their results; they have only just begun to repeat important trials.

Do you like Don Quixote?

Yes, he’s a wonderful fictional character. But I don’t think that he can serve as a role model for me in my endeavours to bring about change and improvement. I’m trying to be realistic. I see great potential for improvement, and I’m pointing it out. Actually, all scientists are a bit like Don Quixote: we pursue certain ideas, just like he did, and we’re willing to fight for them too. But perhaps this fight should be less about defending our ideas and more about ensuring that they get as close as possible to truth and reality, and that they are as exact and as free of mistakes as we can make them. Whatever that reality is, we should try harder to understand it.

(From "Horizons" No. 100, March 2014)