A leading QA expert discusses why the impact of scientific research can only go as far as its ability to be reproduced.
Successful scientific research requires an enormous investment of resources, education and effective mentoring. Scientists must be innovative, organized, flexible and patient as they conduct their research. Those entrusted to contribute to the research body of knowledge also rely on a support structure that recognizes and accepts the role of setbacks in the discovery process. In scientific research, three steps forward may rapidly result in two steps back. However, that single step forward may be the one that changes everything. The ‘negative’ (but accurate and reproducible) results generated by one group today may be the critical data needed to clarify uncertainty for another group tomorrow. Nearly all research data are important and should be presented for review in order to move the collective effort forward. This comprehensive cohort of research data, shared throughout the research enterprise, is needed to fuel scientific progress. However, if the quality of the data is poor and goes unrecognized, progress will be impeded, sometimes for years to come. Therefore, scientists conducting (and reviewing) research must be fully equipped with the appropriate tools, training and expertise to ensure and evaluate data quality in order to confidently advance or strategically retreat in response to research outcomes.
The complexity of scientific research is rapidly expanding. Hopefully, with increasing access to compelling new evidence and rapidly advancing technical tools, we will discover novel solutions to challenging and critical problems. Alternatively, depending on the quality of the data, we may also encounter disappointing setbacks resulting from false starts, dead ends or irreproducible results. While false starts, unexpected failures and data detours may eventually contribute to positive research outcomes, irreproducible research rarely delivers any measurable return on investment. The frequent (and oft-reported) inability to reproduce scientific research data is a major disappointment to scientists, institutions, funding and publishing agencies, as well as the general public. Irreproducible research is recognized as a critical impediment to our scientific progress and thoughtful people are working on thoughtful solutions.
Strategies to improve the way that scientists design, propose, describe and report research studies have been initiated by entities that fund and publish research. The National Institutes of Health (NIH) hopes to influence research rigor (and enhance reproducibility) by providing new training opportunities and establishing new grant proposal submission requirements designed to improve the design, description and methodology of the research they fund. Similarly, scientific publishers have much at stake as they facilitate the flow of research data throughout the research network. In support of research reliability, they are collaborating to improve scientific submissions, encourage data sharing and enhance the potential for successful research replication.
Academic institutions are also being called upon to address concerns associated with irreproducible research outcomes. In the United States, most graduate training programs include mandatory instruction on the ethical conduct of research and the humane treatment of animals. In addition, mentoring programs are designed to offer a robust and rigorous introduction to research conduct. However, additional scientist support may be required as the impact of irreproducible research grows. The science of today is very different from the science of the past, and all participants are struggling to keep pace with the opportunities and challenges presented. The complexity and scope of data (big data, metadata), and the expectations associated with these data (shared data, cloud data, private data, public data) require that scientists attain an entirely new level of data literacy. Multiple strategies are required to ensure that scientists have the skills and infrastructure they need to conduct their important research and manage their complex data.
One proposed strategy is to integrate and implement frequently recommended (but rarely adopted) principles of research quality assurance (QA) into academic research environments. Quality Assurance systems are typically used to support the quality and reconstruction of data in manufacturing or regulated research environments. They are established to address the processes by which scientific data are generated, collected and used. Quality Assurance systems are implemented in order to provide assurance that data are fit for their intended purpose, and that the processes under which they have been generated are accountable and transparent. This is achieved by ensuring that appropriate records (for example: training, equipment, procedures (standard operating procedures), specimens, reagents, supplies, facility and data management) are maintained so that the work can be recreated if necessary to answer questions about data accuracy or reliability. Quality Management Systems (QMS) are quite rigorous in regulated research environments; however, core principles of these systems could be strategically (and more simply) adopted in order to design a program with an appropriate and sustainable scope for application in the basic research environment.
Quality Assurance training programs in academic environments are rare, even though the adoption of simple, sustainable and risk-based research QA best practices (sometimes called ‘Good Research Practices’, or GRP) has been recommended for many years. As a result of this unfortunate gap, most scientists remain unaware of how QA could improve research documentation and increase the likelihood of accurate data reconstruction (which should also improve research reproducibility). In addition, graduates leave their training institutions unequipped and under-prepared to transition quickly into careers where research is routinely conducted within environments where robust QMS are in place.
Integrating research QA best practices within basic research environments will take time, expertise, resources and support. However, the time is right for the development of effective training and implementation models that address data quality and data literacy. These models need to be voluntary, sustainable, science-centered, and risk-based to ensure that they add value (and not bureaucracy) to the user. The documentation and records that are generated through the implementation of research QA best practices will provide credible evidence that data are accurate, reliable and can be reconstructed. Training in research QA will complement the new initiatives related to research premise, design, reagent characterization and bias being developed by the NIH, as well as those related to data sharing and reporting as initiated by publishing agencies.
Increasingly, scientists must be able to demonstrate that their data are accurate and repeatable in order to ensure efficient use of scarce resources, promote the quality of their research, and attract and warrant continued funding. Academic institutions have the opportunity to promote scientific excellence and improve research training by introducing innovative QA and data literacy programming as promising institutional best practices. If institutions fail to do so, or respond too slowly, scientists should adopt research QA best practices on their own (using currently available resources) to showcase the quality of the important work they do.
Rebecca Davies, PhD, is an Associate Professor at the College of Veterinary Medicine and Director of Quality Central at the University of Minnesota. She earned her PhD in Comparative Animal Physiology from the University of Minnesota. Dr. Davies teaches veterinary endocrinology within the College of Veterinary Medicine Core Curriculum and is the faculty adviser for the Comparative Immunology and Endocrinology Laboratory.
This article is also published on the OUP blog.