“Many False Claims will End up Being Considered True”

(January 17th, 2017) In a recent article, a group of scientists lobbies passionately for the publication of negative results. We spoke with its first author, Silas Boye Nissen.

For most researchers, a result that shows an effect is a reason to celebrate, while a result that detects no difference is seen as a failure. Failure to find a solution. Failure to find a new treatment. Failure to find a way to make things work.

It shouldn’t be like that. A recent study published in eLife shows that negative results are just as valuable as positive ones for making advances in science. Using a mathematical model that mimics how scientific activity works, Silas Boye Nissen, based at the Niels Bohr Institute, Denmark, and first author of the study, demonstrated the importance of negative results as a way to keep false claims from becoming accepted facts. In other words, a few published results may seem to point in a certain direction, but what about the many more studies with a negative answer that simply remain hidden in lab notebooks and desk drawers? Don’t they deserve to see the light of day?


In every journal, papers reporting positive results make up the overwhelming majority. How do you think this need to always show positive results developed?

Given how strong a preference authors, journals and readers have for positive results, it’s surprising that we don’t have a better understanding of the rationale behind it. There are a number of possible reasons. These include:

  • Positive results often indicate that the lab in question managed to get something - a treatment, a chemical reaction, a physical manipulation - to work. Negative results often indicate that it didn’t work. We often take the view that just about anyone could fail to get something to work in the lab, but it takes great expertise to get something to work. If you can’t get a positive finding from an experiment, it is likely that you are doing the experiment wrong or under the wrong conditions. However, if you get a positive result, your findings are not so readily dismissed.
  • People may feel that the vast majority of possible claims about the world are false: “Bananas are blue. Lead floats in water. Ducks combust when exposed to oxygen.” If so, anyone could come up with an endless list of things that aren’t true, and you don’t learn much when you hear that some arbitrary claim is false. Coming up with a long list of things that are true would be much harder, and you learn a lot each time a claim is shown to be true.
  • Positive results often correspond to practical or commercialisable discoveries. If I were to find that some rainforest plant cures a particular cancer, I would have a treatment that I could bring to the clinic. If I were to find that it has no effect on cancer, I’ve got nothing useful. If I were to invent some next-generation cold fusion process that actually worked, I’d be sitting on the next “unicorn” start-up. If I found that my idea for this process didn’t work, I’d have nothing to pitch on Sand Hill Road.


Do you think this way of thinking was motivated by editors or researchers?

To some degree, editors must be responding to readers’ interests in their preference for publishing positive results, and authors must be responding to readers’ and editors’ interests in their preference for writing up positive results. It’s hard to partition the blame. It would take a difficult and sophisticated study to determine whether the current deficit of negative results arises mostly because authors don’t even try to publish them, or mostly because editors don’t accept negative results despite receiving them in plenty.


Despite this preference for positive results, your model shows how important negative studies are for science. Why the need to publish these negative results?

It’s essential to publish a reasonably large fraction of the negative results that are obtained; otherwise, many false claims will end up being considered true. Scientists test claims by performing experiments and then try to publish their results in scientific journals. Researchers look at the accumulated set of published results - but not the unpublished ones - to decide what to believe about a claim. In this way, a claim may accumulate enough positive support to become canonised as fact, in the absence of the negative findings that would reject it as false.

For the parameter values we used in our model, this critical threshold tends to lie in the range of 20-30 %; that is, roughly a fifth to a third of the negative results obtained need to be published. In some fields, negative results may not be published this frequently. For example, one American study from 2008 looked at many clinical trials of SSRI antidepressants; it found that nearly all trials showing a significant effect were published, but only very few of those showing no significant effect were.
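To make this mechanism concrete, here is a minimal simulation sketch in Python of the kind of model described above: a claim is tested repeatedly, positive results are published far more often than negative ones, and the community updates its belief from the published record alone. All names and parameter values here (alpha, power, pub_pos, pub_neg, tau) are illustrative assumptions for this sketch, not the settings used in the actual study.

  import random

  def simulate_claim(is_true, alpha=0.05, power=0.8,
                     pub_pos=1.0, pub_neg=0.25,
                     tau=0.99, prior=0.5, max_experiments=10000):
      # alpha   : false-positive rate of a single experiment
      # power   : chance that a true effect yields a positive result
      # pub_pos : chance that a positive result gets published
      # pub_neg : chance that a negative result gets published
      # tau     : belief threshold for canonising (or rejecting) the claim
      belief = prior  # community's estimated probability the claim is true
      for _ in range(max_experiments):
          # Run one experiment; the outcome depends on the claim's truth.
          positive = random.random() < (power if is_true else alpha)
          # Publication filter: positives appear far more often than negatives.
          if random.random() >= (pub_pos if positive else pub_neg):
              continue  # unpublished results never reach the community
          # Naive Bayesian update on the published record, ignoring the
          # publication filter (which is exactly what makes the bias harmful).
          lt, lf = (power, alpha) if positive else (1 - power, 1 - alpha)
          belief = belief * lt / (belief * lt + (1 - belief) * lf)
          if belief > tau:
              return "canonised"
          if belief < 1 - tau:
              return "rejected"
      return "undecided"

  # Fraction of false claims that nonetheless become canonised as facts:
  runs = 1000
  canonised = sum(simulate_claim(False) == "canonised" for _ in range(runs))
  print(f"{100 * canonised / runs:.1f}% of false claims canonised")

With the assumed values above, most false claims are correctly rejected; dropping pub_neg below roughly 0.1 in this sketch makes the majority end up canonised instead, echoing the 20-30 % threshold mentioned above.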


In your opinion, how can editors and researchers promote the publication of negative results?

We - the members of the scientific community - need to recognise how vital negative results are to the process of scientific inquiry. Indeed, they may be as important as, or even more important than, positive results. We need venues where authors can publish negative results. Scientific journals founded specifically for this purpose (e.g. BioMed Central’s Journal of Negative Results in Biomedicine) may be an important part of the solution. Preprint archives such as arXiv, which don’t require peer review, may be another.

Simply publishing negative results is not enough, however. They need to be appreciated by colleagues, by hiring committees, during the tenure process, and so forth. There is already one situation in which negative results are considered worthwhile: when a claim has very strong support, a negative result from a high-powered and well-designed study can be highly influential and can also be comparatively easy to publish. Our model suggests that this can help a great deal in decreasing the risk that we canonise false facts.


Do you think this idea that some facts are wrong can affect the public's perception of scientific results? If so, what can scientists do to avoid this?

Properly explained, our results highlight a defining characteristic of science: everything we believe or purport to be fact is open to re-examination and subsequent reversal, should the evidence merit it. Our model treats only the initial process by which facts are canonised. In the real world, when a false claim is mistaken for fact, enough contradictory evidence eventually builds up that researchers go back and re-examine the assumptions they previously held as given.

We also don’t think that our results are particularly shocking. We suspect that much of the public already appreciates that not every small-scale fact discovered by science will ultimately stand the test of time. Our study suggests how the practices and incentive structures of science might be modestly revised to make science more efficient at sifting true claims from false ones. In the long run, doing so would make science an even more powerful tool for understanding the natural world, and would thus bolster the public's confidence in scientific discovery.

Moreover, understanding how false canonisation can occur also highlights an important distinction of scale. On the one hand, we have highly technical scientific facts of narrow importance, supported by a modest handful of studies; these may occasionally warrant some scepticism. On the other hand, we have facts such as the relation between smoking and cancer, or the reality of anthropogenic climate change. These are supported by hundreds or thousands of studies, have been upheld despite powerful moneyed interests seeking to undercut them, and now lie firmly beyond any reasonable dispute.


Alex Reis

Photo: Silas Boye Nissen



