Registrations are now open!
September 30, 2025
Code for research is more flexible than point-and-click statistical software, but it can also be more error-prone. These errors may be conceptual (e.g., implementing the wrong function for a given task), programmatic (e.g., indexing the wrong column of a data frame), or syntactic (e.g., misspelling a statement or function). Although peer review is part of the scientific process, it rarely (though increasingly) involves review of research code. Part of the difficulty with implementing formal peer review of code is the time, expertise, and supportive environment required to execute it successfully. Particularly for more involved analyses, reviewing code takes significant time to understand the context, questions, data, methods, and aims. It can also be difficult to identify reviewers with the appropriate skills in both the programming language and the methods. Lastly, although peer review of manuscripts is a common and integral part of the research process, researchers are often less prepared to open up their code for review, so a constructive, supportive peer space is necessary.

We present a pilot program for peer code review conducted within a research consortium setting, which may represent a useful model for overcoming these challenges. The Australia-Aotearoa Consortium for Epidemic Forecasting & Analytics (ACEFA) aims to support timely and effective responses to epidemic diseases in Australia and New Zealand through real-time data analytics, modelling, and forecasting. One of our main activities this winter is reporting short-term forecasts of daily case counts for several respiratory pathogens, for each Australian state and territory and for New Zealand, to government health committees and stakeholders. We have multiple forecasts from models developed and maintained by one or more research academics in our consortium.
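To make the three error types concrete, here is a minimal sketch in Python with made-up case-count data (the column names and values are hypothetical, purely for illustration). The programmatic error is the most insidious: the code runs without complaint, so only a reviewer comparing the code to its intent is likely to catch it.

```python
import pandas as pd

# Hypothetical weekly case counts for two jurisdictions (illustrative only).
cases = pd.DataFrame({
    "week": [1, 2, 3, 4],
    "cases_nsw": [120, 150, 180, 210],
    "cases_vic": [90, 95, 140, 160],
})

# Programmatic error: indexing the wrong column. This executes without
# any warning, silently reporting the VIC peak as the NSW peak.
peak_nsw_wrong = cases["cases_vic"].max()
peak_nsw_right = cases["cases_nsw"].max()

# Conceptual error: the wrong function for the task. A mean is computed
# where the analysis actually calls for a maximum; again, no error is raised.
peak_conceptual = cases["cases_nsw"].mean()

# Syntactic error: a misspelled method name fails loudly at runtime,
# so it is usually caught without peer review.
# cases["cases_nsw"].maximun()  # AttributeError if uncommented
```

Syntactic errors stop execution and announce themselves; conceptual and programmatic errors produce plausible-looking numbers, which is why they motivate human code review.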
In parallel with a review period for the methods underlying these models, we are also planning to conduct peer code review of the models with the following aims in mind:
We will conduct pre- and post-evaluation surveys to assess how well we address each aim and to inform iterative improvements for future rounds of code review. We will present this initiative as a component of our community of practice, and hope to open a discussion with participants about its merits and how the model might be disseminated.
Saras Windecker
Dr Saras Windecker is a Senior Research Officer at The Kids Research Institute Australia working in infectious disease modelling and software development. Dr Windecker is a member of the Australia-Aotearoa Consortium for Epidemic Forecasting & Analytics (ACEFA), which aims to support timely and effective responses to epidemic diseases in Australia and New Zealand. She is actively involved in the open research community and interested in promoting reproducible research practices.