One reason research data management has risen up the higher education sector's list of priorities in recent years is the “reproducibility crisis” in science. The crisis has led many researchers and organisations to support a drive towards open science, or open research, in which the research process becomes more transparent. One practical consequence is that many funding bodies and publishers now require the underlying data to be shared alongside the journal articles that report on it.

An interesting post in this area is Science is “show me,” not “trust me”, which proposes a checklist for open science. See the original post for more detail and tips, but here’s a summary of what it suggests you should aim for:

  1. You did not rely on Microsoft Excel for computations.
  2. You scripted your analysis, including data cleaning and wrangling.
  3. You documented your code so that others can read and understand it.
  4. You recorded and reported the versions of the software you used (including library dependencies).
  5. You wrote tests for your code (see the sketch after this list).
  6. You checked the code coverage of your tests.
  7. You used open-source software (or proprietary software with a really good reason).
  8. You reported all the analyses you tried (transformations, tests, selections of variables, models, etc.) before arriving at the one you chose to emphasize.
  9. You made your code (including tests) available.
  10. You made your data available (where legally and ethically permissible).
  11. You recorded and reported the data format.
  12. There is an open-source tool for reading data in that format.
  13. You provided an adequate data dictionary.
  14. You published open access.
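
To make a few of these points concrete, here is a minimal sketch of what items 2, 4 and 5 might look like in practice for a small Python analysis: a scripted data-cleaning step, a record of the software versions used, and a test for the cleaning function. The clean_prices helper and the package names are illustrative assumptions, not something taken from the original post.

```python
"""Minimal sketch of checklist items 2 (scripted analysis), 4 (recorded
versions) and 5 (tests). The cleaning step and package list are illustrative."""

import sys
import importlib.metadata


def clean_prices(raw_prices):
    # Item 2: the cleaning step lives in a script, not a spreadsheet.
    # Drop missing values and negative prices.
    return [p for p in raw_prices if p is not None and p >= 0]


def report_versions(packages):
    # Item 4: record the Python and library versions the analysis ran with,
    # so that others can recreate the environment.
    lines = [f"python=={sys.version.split()[0]}"]
    for name in packages:
        try:
            lines.append(f"{name}=={importlib.metadata.version(name)}")
        except importlib.metadata.PackageNotFoundError:
            lines.append(f"{name}==not installed")
    return lines


def test_clean_prices():
    # Item 5: a small test another researcher can run to check the cleaning step.
    assert clean_prices([3.0, None, -1.0, 2.5]) == [3.0, 2.5]


if __name__ == "__main__":
    test_clean_prices()
    print("\n".join(report_versions(["numpy", "pandas"])))
```

Committing the output of the version report alongside the shared code also helps with items 4 and 9, since anyone downloading the code can see exactly which environment produced the results.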

Would your research pass the Open Science Checklist? Do you think this is a useful list with achievable aims?
