Editorial / Analysis

Irreproducibility in Research. What can we do about it?

The team of Editors

 

We would all agree with Karl Popper's statement [1]:

Non-reproducible single occurrences are of no significance to science.

But what if a substantial percentage of published scientific findings fell into the irreproducible category? Such an alarming scenario may be close to reality, according to a number of recent reports [2,3,4]. Indeed, some shocking statistics suggest that irreproducibility has soared in recent years. For instance, pharma and biotech companies have been able to reproduce only between 11 and 25% of high-impact research papers in the field of cancer research [5].
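A back-of-the-envelope calculation, in the spirit of the argument popularized in [2], helps explain how this can happen even without any misconduct. The sketch below computes the positive predictive value of a significance test; the input numbers (prior, power, threshold) are illustrative assumptions, not measured values.

```python
# Back-of-the-envelope estimate, in the spirit of the argument in [2], of
# the fraction of "statistically significant" findings that are true.
def positive_predictive_value(prior, power, alpha):
    """Probability that a significant finding reflects a true effect.

    prior: fraction of tested hypotheses that are actually true
    power: probability of detecting a true effect (1 - beta)
    alpha: false-positive rate of the significance test
    """
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# Illustrative assumption: exploratory research where 1 in 10 tested
# hypotheses is true, with modest power and the conventional 5% threshold.
ppv = positive_predictive_value(prior=0.10, power=0.40, alpha=0.05)
print(f"Fraction of significant findings that are true: {ppv:.0%}")
# -> roughly 47%: about half of the "discoveries" would fail to replicate,
#    with no fraud or sloppiness involved at all.
```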

Irreproducibility is a growing concern among scientists [6]. Not only does it slow down the advance of science, but it can also undermine support from society. Although scientists are generally considered trustworthy, this image can be eroded by the perception that a majority of published scientific findings turn out to be irreproducible. We are already seeing signs of mistrust in the general media (we highly recommend the article "Trouble at the lab", published in 2013 by The Economist [7]). As this bad news spreads, major journals and professional societies are devoting editorials and discussions to the problem of irreproducibility. Information about the topic is abundant and a vigorous debate is taking place in the scientific community (see the Discussion Forums section below). Biophysics is certainly not immune to this problem [8]. In this article, we discuss the potential sources of irreproducibility and propose some potential fixes.

At first sight, one might be tempted to equate irreproducibility with fraud, the latter being defined as the dissemination of scientific claims that the author knows are not backed by experimental evidence. However, although quantifying the extent of scientific fraud is difficult, the general consensus is that this type of misconduct is quite rare and cannot be considered a major cause of irreproducibility. Instead, we identify two major sources of the problem:

  1. Inherent difficulty of the scientific enterprise. Science tackles challenging questions, and hence mistakes can be made even by the most careful, best-trained and honest scientist. This is particularly true in strongly multidisciplinary sciences, like Biophysics. Such mistakes can lead to irreproducibility, but there is little we can do about them. Even in the absence of mistakes, results can be irreproducible due to variables that are not under the researcher's control. This is quite common in research involving live organisms, such as bacteria, cell lines or animals, which are subject to variation due to adaptation to a particular lab, circadian rhythms, age, etc. Again, little can be done to prevent irreproducibility caused by uncontrollable variables. In addition, this sort of irreproducibility may even be positive, since it can inform us about the robustness of a finding (the extent to which it is independent of specific experimental variables) or pave the way to discovering unexpected variables that control the outcome of an experiment. For instance, the observation that the same strain of mice can have different immunological responses depending on the geographical location of the laboratory led to the identification of commensal microbiota as a key modulator of a subset of T-helper cells [9].

    Since we cannot avoid honest mistakes or uncontrollable experimental variables, is there anything we can do to minimize the irreproducibility that arises from the intrinsic difficulty of science? Can we, at least, do something to turn irreproducibility rooted in intrinsic complexity into positive scientific outcomes?

    In connection with this problem, there is the concern that a major source of irreproducibility is in fact the lack of detail in the experimental methods and conditions described in publications. Thus, if experiments are sophisticated and tricky, it makes sense to put greater effort into describing them accurately in the methods section of scientific papers. Many journals are already implementing specific rules so that authors provide all the information needed to reproduce their results [10]. Moreover, in a context where digital information is easy to produce, store and disseminate, there is no excuse for the actors involved (authors, journals…) not to provide exhaustive detail about materials and methods. Beyond that, there are cases where detailed reporting of primary experimental results would facilitate reanalysis, using the same or alternative methods. Thus, journals should also implement repositories for all numerical, graphic and image data related to published work, and not only the selected, summarized or conclusive data usually reported in article tables and graphs. Data repositories are already common for studies of the structures of molecules. It may now be time to universalize this requirement, although this obviously opens questions about standards and formats [8]; the sketch after this list illustrates one minimal way such machine-readable reporting could look.

  2. Sloppy research. This reaches all corners of scientific research, including the quality of primary experimental data, the subsequent analysis, and the adequate use of methodology [11]. Some examples of sloppy research may even qualify as misconduct; indeed, the boundary between sloppy research and misconduct is blurry. For instance, it is malpractice to report results that the authors know cannot be replicated consistently (without declaring it, or without providing reasonable arguments that explain the lack of replicability), or to present results in a manner that masks potential flaws in the experimental design so that they go unnoticed by reviewers.

    Nevertheless, in most instances elements other than misconduct are responsible for sloppy research. Weak supervision by senior scientists, poor training of students, too much emphasis on shiny results, and hyper-competition within a publish-or-perish environment that fosters publication in high-impact journals are some of the components leading to this severe problem. The common factor in all these cases is the lack of a critical approach to scientific work: thinking, questioning, discussing, criticizing and re-thinking (when needed) are essential activities in science, but they all consume time and effort, and seem not to be well acknowledged in today's accelerated world of scientific discovery.

    Sloppy research may well be the leading cause of irreproducibility. However, abandoning such practices and replacing them with slower and harder, but solid and flawless, work is not easy.

    A possible way to start is by improving the chances of identifying sloppy research. This necessarily means improving the reviewing of publications and valuing, as it deserves, the important contribution of reviewers. In fact, once we accept that modern multidisciplinary science is a very complex task (see the previous point) and that a job well done pays the price of time and effort, we also have to accept that good reviewing cannot be done without recognition, which in practice it currently lacks. This means that we need to reform the publication and reviewing system (perhaps rethink it completely) so that the best experts are willing to spend their precious time evaluating scientific work in sufficient depth, especially, but not exclusively, for publications in high-rank journals. This revision of the publishing system should be accompanied by other measures, such as facilitating open and continuing post-publication review and stimulating criticism and discussion at scientific conferences.

    The above measures should be complemented with others that improve the education and training of young scientists, both technical and ethical, which again seems not to be appropriately valued today. Such training should be included in PhD and MSc programs. Perhaps more importantly, institutes and laboratories should restore critical thinking and discussion at all levels. We should also rescue the pride of training the next generations of high-quality, rigorous scientists, over that of collecting high-impact publications. Students should be aware of what sloppy research means and how to avoid it. They should be instructed to be critical of their own work and of the work of others, and should learn that rigor is the correct way (even if not the shortest) to be competitive. This will create not only better scientists, but also finer critics, who will eschew sloppy research practices and take action whenever they are detected.
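Returning to the reporting problem raised in point 1, the sketch below shows one minimal way of depositing primary data together with its acquisition metadata. The file layout, field names and example values are our own invented convention for illustration, not an established repository standard.

```python
import csv
import json
from datetime import date, datetime, timezone

# Minimal sketch of depositing primary data with its acquisition metadata.
# All field names and values below are invented for illustration only;
# real deposits should follow the standards of the chosen repository.
metadata = {
    "experiment": "FRET titration, construct pUC-X12",  # hypothetical example
    "operator": "A. Researcher",
    "date": date.today().isoformat(),
    "instrument": {"model": "Fluorolog-3", "excitation_nm": 480},
    "sample": {"buffer": "PBS pH 7.4", "temperature_C": 25.0},
    "protocol_doi": "10.0000/placeholder",  # link to the full written protocol
    "deposited_utc": datetime.now(timezone.utc).isoformat(),
}

# Raw, unprocessed readings: every point, not just the summary statistics.
raw_points = [(0.0, 0.02), (0.5, 0.13), (1.0, 0.27), (2.0, 0.48)]

with open("experiment_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)

with open("raw_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ligand_concentration_uM", "fret_efficiency"])
    writer.writerows(raw_points)
```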

Finally, we want to make some specific comments about irreproducibility in Biophysics. In this field we develop or employ cutting-edge technologies to examine biological questions, using approaches that may span experiment and theory and that often make use of living cells or animal models. Such strong multidisciplinarity poses additional challenges, since it is not uncommon for biophysicists to need highly specialized techniques in which they are not necessarily experts. Good examples are cases where hard-core theoretical and experimental knowledge is needed simultaneously, or where non-trivial statistics or other mathematical and computational methods are mandatory (the sketch below illustrates one such statistical pitfall). Hence, the biophysics field is highly susceptible to irreproducibility, and we biophysicists should be well aware of that and do all we can to ensure that our research remains sound and solid. This includes consulting and/or collaborating with experts in the techniques we use, being open about our limitations, and being extra-critical of our results. Minimizing irreproducibility in our field is what we owe to the global scientific enterprise.
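To make the statistical point concrete, here is a small simulation (a sketch with illustrative parameters, not data from any real study) of a common pitfall: with the popular "n = 3 per group" design, even a real and sizeable effect is detected only a minority of the time, so honest replications of a true finding will often disagree.

```python
import numpy as np
from scipy import stats

# Simulation sketch: how often does a typical small experiment detect a
# real effect? Parameters are illustrative assumptions only.
rng = np.random.default_rng(seed=1)

n_per_group = 3      # a common (and risky) sample size
effect_size = 1.0    # true difference, in units of the measurement noise
n_experiments = 10_000

significant = 0
for _ in range(n_experiments):
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treated = rng.normal(loc=effect_size, scale=1.0, size=n_per_group)
    _, p_value = stats.ttest_ind(treated, control)
    if p_value < 0.05:
        significant += 1

power = significant / n_experiments
print(f"Fraction of experiments detecting the (real) effect: {power:.0%}")
# With these settings the detection rate is low, so two honest labs running
# the same experiment will frequently disagree purely by chance.
```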

All this is certainly not an easy task. It will only work if it is actively promoted, with appropriate incentives from funding and regulatory agencies and a minimum consensus within the scientific community. To stimulate and facilitate your participation in this timely and serious discussion, we leave the page open for your comments.
 

Leave a Reply

…please use the Contact Form to send us your comments.

 

Discussion Forums

 

References

  1. Popper K. “The logic of scientific discovery”. Hutchinson, 1959.
  2. Ioannidis JP. “Why most published research findings are false”. PLoS Med, 2005, 2: e124. DOI: 10.1371/journal.pmed.0020124.
  3. Prinz F, Schlange T and Asadullah K. “Believe it or not: how much can we rely on published data on potential drug targets?”. Nat Rev Drug Discov, 2011, 10: 712. DOI: 10.1038/nrd3439-c1.
  4. Williams R. “Can’t get no reproduction: Leading researchers discuss the problem of irreproducible results”. Circ Res, 2015, 117: 667. DOI: 10.1161/CIRCRESAHA.115.307532.
  5. Begley CG and Ellis LM. “Drug development: Raise standards for preclinical cancer research”. Nature, 2012, 483: 531. DOI: 10.1038/483531a.
  6. Cicerone RJ. “Research Reproducibility, Replicability, Reliability”. Speech to NAS Members, Annual Meeting, April 27, 2015.
  7. “Trouble at the lab”. The Economist, Oct 19th 2013.
  8. Loew L, Beckett D, Egelman EH and Scarlata S. “Reproducibility of research in biophysics”. Biophys J, 2015, 108: E1. DOI: 10.1016/j.bpj.2015.03.002.
  9. Ivanov II, de Llanos-Frutos R, Manel N, Yoshinaga K, Rifkin DB, Sartor RB, Finlay BB and Littman DR. “Specific microbiota direct the differentiation of IL-17-producing T-helper cells in the mucosa of the small intestine”. Cell Host Microbe, 2008, 4: 337. DOI: 10.1016/j.chom.2008.09.009.
  10. Bolli R. “Reflections on the Irreproducibility of Scientific Papers”. Circ Res, 2015, 117: 665. DOI: 10.1161/CIRCRESAHA.115.307496.
  11. Vaux DL. “Research methods: Know when your numbers are significant”. Nature, 2012, 492: 180. DOI: 10.1038/492180a.