
Guidelines for peer reviewing software

Here is a list of potential criteria one could consider when reviewing mathematical papers with a software component. This list consists mostly of ideas. Only through sufficient experimentation and collaboration with journals, editors, conferences and researchers will we be able to determine suitable standards that are not too taxing on authors, but do increase the reliability of the code and the ability of future researchers to reuse published software.

Importance of the code in the publication

  • The paper only uses a small amount of code for simple computations
  • The results of the paper depend heavily on computations
  • The paper develops new algorithms and the code is part of the publication
  • The data output is part of the publication

Availability of the code

  • Code unavailable: no link to the code is provided and the code cannot be found on the author's website either.
  • Code is available on the website of the author.
  • Code is available on a long-term storage solution (e.g. GitHub, Zenodo).
  • Code is available on the website of the journal.

Files provided

  • Notebooks (Jupyter, IPython, Sage, etc.)
  • Source Code
  • Example files and documentation
  • Computed data
  • Code that verifies computed data

Licensing

  • Is there a license for the code? If yes, which one?

Reproducibility of the code and ease of installation

  • Specification of the hardware needed to run the code in a reasonable amount of time
  • Installation instructions available?
  • Docker file or virtual machine included?
  • Specification of the datasets used (including links, versions)
  • Specification of dependencies on algorithms developed by others
  • Hardware and software environment used
  • Instructions for repeating the computations performed to obtain the results in the paper (see the sketch after this list)
  • Documentation and examples
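
As a concrete illustration of the last two points, the following is a minimal sketch of a single entry point a referee could run to repeat the computations behind a paper. It is written in Python purely as an example; the guidelines do not prescribe any language, and the file name reproduce.py as well as the stand-in experiment are hypothetical.

```python
"""A minimal, hypothetical reproduce.py: one entry point a referee could run
to repeat the computations behind a paper and compare the output with the
reported results."""

import platform
import random
import sys


def log_environment() -> None:
    """Record the interpreter and platform so that runs can be compared later."""
    print("Python:  ", sys.version.split()[0])
    print("Platform:", platform.platform())


def run_experiment(seed: int = 42) -> float:
    """Stand-in for the paper's actual computation; seeded so reruns agree."""
    rng = random.Random(seed)
    samples = [rng.random() for _ in range(10_000)]
    return sum(samples) / len(samples)  # e.g. a simple Monte Carlo estimate


if __name__ == "__main__":
    log_environment()
    print("Result:", run_experiment())  # should match the value reported in the paper
```

Even a script this small lets a referee confirm the environment used, rerun the computation deterministically, and compare the output with the numbers in the paper.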

Correctness and reliability

  • Does the code give the results listed in the paper?
  • How does the code behave on examples other than the ones listed in the paper?
  • Computing the data or performing the same experiments with different software packages increases the reliability of the data.
  • Comparing the output of more complicated algorithms on a set of "easier" cases with the output of slower but less error-prone algorithms increases reliability
  • Were any methods used to test whether the computed output is correct? (E.g. if the inverse B of a matrix A was computed, one could check that AB is the identity matrix; see the sketch after this list.)
  • Is the code still being maintained? How often is it updated?
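
The parenthetical example above can be turned into an automated check. Below is a minimal sketch using NumPy (an assumption; the guidelines do not prescribe any particular library or language) that verifies a computed matrix inverse by testing whether AB is numerically close to the identity matrix.

```python
import numpy as np


def check_inverse(a: np.ndarray, b: np.ndarray, tol: float = 1e-9) -> bool:
    """Return True if b is (numerically) the inverse of a, i.e. a @ b is close to I."""
    identity = np.eye(a.shape[0])
    return bool(np.allclose(a @ b, identity, atol=tol))


if __name__ == "__main__":
    a = np.array([[4.0, 7.0],
                  [2.0, 6.0]])
    b = np.linalg.inv(a)        # the "computed output" that is being verified
    print(check_inverse(a, b))  # expected: True
```

Checks of this kind are cheap to run and give a referee an independent way to test the computed output rather than taking it on trust.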

Readability

  • Indentation and formatting are consistent
  • Naming of variables is consistent, meaningful and distinctive.
  • Program has a clear structure and is split up into functions and files
  • Code is annotated with comments (a brief sketch of what this can look like follows this list)
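
As a small, hypothetical illustration of these points, the sketch below shows the kind of naming, structure and annotation a referee might look for; the function itself is made up for the example.

```python
from statistics import mean


def centered_scores(raw_scores: list[float]) -> list[float]:
    """Shift the scores so that their mean is zero.

    Descriptive names, a short docstring and consistent indentation make the
    intent of a function clear at a glance.
    """
    score_mean = mean(raw_scores)
    return [score - score_mean for score in raw_scores]


if __name__ == "__main__":
    print(centered_scores([1.0, 2.0, 3.0]))  # expected: [-1.0, 0.0, 1.0]
```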