The Challenges of Reproducible Research

Twice last year, in April and November, this blog addressed fake science.  Based on recent news articles, that challenge has not gone away.  But if one assumes the research is accurately reported and yet independent researchers obtain differing results, the question becomes: “How can that be explained?”

One of the first considerations needs to be the equipment employed.  Ideally, the research report will include the make and model number of the equipment used in the experiments.  Model numbers provide a means of determining whether the identified equipment is capable of delivering both the accuracy and the precision the results require.  One might assume that this information would be sufficient to get good correlation between experimenters.

One example of experimental correlation that I have close knowledge of involved researchers in Houston, Texas, and Boston, Massachusetts.  The experiments involved evaluating the properties of graphene.  A very detailed test was conducted in the Houston lab.  The samples were carefully packaged and shipped to Boston.  There was no correlation between the two labs’ results on the properties of the graphene.  The material was returned and reevaluated, and the results were different from the original tests.  Eventually, it was shown that the material had picked up contamination from the air during transit.  This led to a new round of testing and evaluations.

The surprise was that the results from the two researchers’ testing were close but not identical.  So another search was started.  Equipment model numbers were checked, and then serial numbers were checked and sent to the equipment supplier.  It turned out that during the production run, which lasted a number of years, a slight improvement had been made in the way the equipment evaluated the samples.  This was enough to change the resulting measurements at the nanoscale.  Problem solved.

This example points to the need not only for calibration but also for working with other researchers to track down discrepancies in initial research results.  Reports need to include the test procedures and the equipment employed, and ideally some information on calibration.

In many cases, we take calibration for granted.  How many people assume that their scales at home are accurate?  How many check oven temperatures?  The list goes on.  I have been in that group.  We replaced our old range/oven this past December.  The oven appeared to be working fine until Christmas cookies were baked.  They came out a bit overdone.  The manufacturer indicates in the instructions that it may take time to make slight adjustments in how you bake.  I decided to check the actual temperatures.  I found that a setting of 350°F resulted in an oven temperature of 350°F.  However, a setting of 400°F resulted in an oven temperature of 425°F, which is enough of a difference to overbake the cookies.

Almost weekly visits over two months by the manufacturer’s service technician have resulted in the replacement of every replaceable part and a number of recalibrations, with no apparent change in performance.  After the first few weeks, I decided to do a more thorough test of the oven.  (My measurements agree with the technician’s better equipment to within a couple of degrees.)  What I found is that at 350°F the oven is solid.  At a setting of 360°F, the oven temperature is almost 395°F.  At higher settings, the difference between the oven setting and the actual reading decreases to 25 degrees.  Between settings of 450°F and 500°F, the difference shrinks from plus 25°F to minus 5°F.  The point of this example is that calibration, even for common appliances, is not something to be taken for granted.  Calibration must be done, and calibration of scientific equipment must be done regularly to ensure the accuracy of results.
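For anyone who wants to run a similar check on their own equipment, the bookkeeping is simple enough to script.  The following is a minimal Python sketch, not a general calibration tool: it takes the approximate setpoint-versus-measured values quoted above (the 450°F and 500°F readings are inferred from the plus 25°F and minus 5°F offsets described) and prints the offset at each setting so the pattern is easy to see.

# Minimal sketch: compare oven setpoints with measured temperatures
# and report the offset at each setting.  The values below are the
# approximate readings described in this post, not a general table.

# (setpoint in °F, measured temperature in °F)
readings = [
    (350, 350),
    (360, 395),
    (400, 425),
    (450, 475),   # roughly +25°F, per the observations above
    (500, 495),   # roughly -5°F, per the observations above
]

print(f"{'Setpoint (F)':>12} {'Measured (F)':>12} {'Offset (F)':>10}")
for setpoint, measured in readings:
    offset = measured - setpoint
    print(f"{setpoint:>12} {measured:>12} {offset:>+10}")

A table like this, recorded over the full range of settings you actually use, makes it obvious whether an instrument is off by a constant amount or, as with this oven, off by an amount that changes with the setting.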

Disagreement among researchers’ test results does not necessarily indicate errors in the experiments; the equipment could be out of calibration.  It is critical to verify the calibration of the equipment employed in the research before claiming the research is faulty.

About Walt

I have been involved in various aspects of nanotechnology since the late 1970s. My interest in promoting nano-safety began in 2006 and produced a white paper in 2007 explaining the four pillars of nano-safety. I am a technology futurist and am currently focused on nanoelectronics, single digit nanomaterials, and 3D printing at the nanoscale. My experience includes three startups, two of which I founded, 13 years at SEMATECH, where I was a Senior Fellow of the technical staff when I left, and 12 years at General Electric, nine of them on corporate staff. I have a Ph.D. from the University of Texas at Austin, an MBA from James Madison University, and a B.S. in Physics from the Illinois Institute of Technology.
