Reproducibility of experiments, one of the foundation stones of science, ought to be easy in computational neuroscience: computers are deterministic, and do not suffer from the inter-subject and trial-to-trial variability that makes reproducing biological experiments so challenging.

In general, however, it is not easy, either for the gold-standard case of reproduction by an independent researcher using independent code [1], or for the much simpler case of an individual scientist or team replicating their own results some months or years later. For this second case, the reasons include the complexity of our code and of our computing environments, and the difficulty of capturing every essential piece of information needed to reproduce a computational experiment using existing tools such as spreadsheets, version control systems and paper notebooks.

In other areas of science, particularly in applied-science laboratories with high-throughput, highly-standardised procedures, electronic lab notebooks are in widespread use, but none of these tools seems well suited to tracking simulation experiments. In developing something like an electronic lab notebook for computational science, there are a number of challenges: (i) different researchers have very different ways of working and hence different workflows: command line, GUI, batch jobs (e.g. in supercomputer environments), or any combination of these; (ii) some projects are essentially solo endeavours, while others are collaborative, possibly geographically distributed; (iii) as much as possible should be recorded automatically: if it is left to the researcher to record critical details, there is a risk that some of them will be missed, particularly under pressure of deadlines.

I present here a tool, Sumatra, for simulation project management and for automated recording of detailed provenance information: (i) the code that was run, (ii) any parameter files and command-line options, and (iii) the platform on which the code was run. Sumatra consists of a core library, implemented as a Python package, together with a series of interfaces built on top of it: a command-line interface, a web interface, and a desktop interface. Each of these interfaces enables (i) launching simulations with automated recording of provenance information, and (ii) managing a simulation project: browsing, viewing and deleting simulations. Alternatively, modellers can use the Sumatra package directly in their own code to enable provenance recording, as sketched below, and then launch simulations in their usual way. Sumatra is distributed as open-source software (http://neuralensemble.org/sumatra/), so that its functionality can be incorporated in other tools, and it is developed using a community model: anyone is welcome to get involved.
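To illustrate this second mode of use, the following minimal sketch shows how a simulation script might record its own provenance by calling the Sumatra package directly. The names load_project, build_parameters, new_record, add_record and save follow the Sumatra documentation, though details may differ between versions; the parameter file and the reason string are hypothetical placeholders.

import sys
import time
from sumatra.projects import load_project
from sumatra.parameters import build_parameters

# Read the simulation parameters from a file named on the command line
# (the file name and its format are placeholders for this sketch).
parameter_file = sys.argv[1]
parameters = build_parameters(parameter_file)

# Load the Sumatra project from the current working directory and open
# a new record, capturing code version, parameters and platform details.
project = load_project()
record = project.new_record(parameters=parameters,
                            main_file=__file__,
                            reason="hypothetical example of direct API use")

start_time = time.time()
# ... the simulation itself would run here ...
record.duration = time.time() - start_time

# Store the completed record in the project's record store.
project.add_record(record)
project.save()

Equivalently, simulations can be launched without modifying the script at all, using command-line interface commands such as smt init (to create a project) and smt run (to launch a simulation with provenance recording).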