One reason that research data management has risen up the priorities of the higher education sector in recent years is the “reproducibility crisis” in science. This has led many researchers and organisations to support a drive towards open science, or open research, in which the research process becomes more transparent. One concrete aspect is the requirement from many funding bodies and publishers that the underlying data be shared alongside the journal articles reporting the results.
An interesting post in this area is “Science is ‘show me’, not ‘trust me’”, which proposes a checklist for open science. See the original post for more detail and tips, but here’s a summary of what it suggests you should aim for:
- You did not rely on Microsoft Excel for computations.
- You scripted your analysis, including data cleaning and wrangling.
- You documented your code so that others can read and understand it.
- You recorded and reported the versions of the software you used (including library dependencies).
- You wrote tests for your code.
- You checked the code coverage of your tests.
- You used open-source software (or proprietary software with a really good reason).
- You reported all the analyses you tried (transformations, tests, selections of variables, models, etc.) before arriving at the one you chose to emphasize.
- You made your code (including tests) available.
- You made your data available (where legally and ethically permissible).
- You recorded and reported the data format.
- There is an open source tool for reading data in that format.
- You provided an adequate data dictionary.
- You published open access.
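Several of these items (scripting your analysis, recording software versions, and writing tests for your code) can be illustrated in a few lines. Here is a minimal sketch in Python, assuming a Python-based analysis; the function names are illustrative, not taken from the original post:

```python
# Illustrative sketch of three checklist items: a scripted cleaning step,
# a record of software versions (including library dependencies), and a
# small test. Names here are hypothetical examples, not a standard API.
import platform
from importlib import metadata


def report_versions(packages):
    """Return a mapping of package name -> installed version,
    so the software environment can be recorded and reported."""
    versions = {"python": platform.python_version()}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = "not installed"
    return versions


def clean_whitespace(values):
    """Example data-cleaning step: strip stray whitespace from strings.
    Keeping this in a script (not a spreadsheet) makes it reproducible."""
    return [v.strip() for v in values]


def test_clean_whitespace():
    """A simple test, runnable with pytest or as a plain assertion;
    a coverage tool such as coverage.py can then check test coverage."""
    assert clean_whitespace([" a", "b ", " c "]) == ["a", "b", "c"]


if __name__ == "__main__":
    test_clean_whitespace()
    print(report_versions(["pip"]))
```

Committing a script like this alongside the data, together with the reported versions, covers several checklist items at once: the analysis is scripted, documented, versioned, and tested.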
Would your research pass the Open Science Checklist? Do you think this is a useful list with achievable aims?
Public domain image from StockSnap.