A recent systematic review by Rachel Davis, project lead, revealed considerable variability in how research on teaching and training initiatives in implementation science is reported in the academic literature. While reporting guidelines exist to aid the writing up of other types of research (e.g. PRISMA guidelines for systematic reviews, TIDieR guidelines for intervention reporting), no equivalent guideline exists for reporting education-focussed activities in implementation science. As a result, it is very difficult for implementation science educators and researchers to synthesise findings from the literature and learn from them in a meaningful way.
The research team hope that these new reporting guidelines will be published on the EQUATOR network repository of reporting guidelines and that academic journals will recommend use of the guidelines, when relevant, in their ‘instructions to authors’.
How the study will be carried out
Researchers will use six sequential stages and a mixed methods approach:
Stage 1 – researchers developed a list of criteria to be considered for inclusion in the guideline. The criteria were informed by a systematic review of teaching and training initiatives that the authors recently published, and by a scoping review of reporting guidelines on the EQUATOR network website.
Stage 2 – the criteria were circulated amongst the wider author team. Each author rated each criterion in terms of whether it: 1) should be included (Yes/No/Not sure); 2) is easy to understand (scale of 1–5, where 1 = not easy to understand at all and 5 = very easy to understand); and 3) is important (scale of 1–5, where 1 = not very important at all and 5 = very important). Authors also provided qualitative feedback on the wording of each criterion, including how easy it was to understand and its face validity.
Stage 3 – the results of the first round of feedback were collated and the TIRDIRP was revised accordingly (into Version 2) and re-circulated to the study team. The study team will then rate the criteria again using the same scoring system as before (inclusion, ease of understanding, importance).
Stage 4 – TIRDIRP V3 will be produced based on the feedback from Stage 3 and approved by the study team. V3 will then be shared with a wider group of educators involved in developing and/or delivering teaching and training initiatives in implementation science, who will rate the criteria using the same scoring system.
Stage 5 – feedback will be collated and incorporated into V4 of the TIRDIRP, which will then be circulated amongst the study team for further validation. The study team will check how easy V4 is to use by completing it for a teaching or training initiative in implementation science that they are, or have been, involved in (12 initiatives in total).
Stage 6 – any final revisions to the TIRDIRP will be made based on the results of Stage 5, to produce the final version of the guideline.
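The rating rounds above can be collated in a straightforward way. As a minimal sketch only (the study does not specify its analysis method, and the function and field names here are hypothetical), each criterion's ratings could be summarised as the proportion of "Yes" inclusion votes plus mean ease-of-understanding and importance scores on the 1–5 scales:

```python
# Hypothetical sketch of collating one round of criterion ratings.
# The rating structure (Yes/No/Not sure; two 1-5 scales) follows the
# stages described above; everything else is illustrative.
from statistics import mean

def collate_ratings(ratings):
    """ratings: one dict per rater, e.g.
    {"include": "Yes", "ease": 4, "importance": 5}"""
    n = len(ratings)
    return {
        "pct_include_yes": sum(r["include"] == "Yes" for r in ratings) / n,
        "mean_ease": mean(r["ease"] for r in ratings),
        "mean_importance": mean(r["importance"] for r in ratings),
    }

# Example: three raters scoring a single criterion
summary = collate_ratings([
    {"include": "Yes", "ease": 4, "importance": 5},
    {"include": "Yes", "ease": 5, "importance": 4},
    {"include": "Not sure", "ease": 3, "importance": 4},
])
print(summary)
```

Summaries like these, alongside the qualitative feedback on wording, would give the study team a basis for deciding which criteria to retain, reword, or drop in each revision.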
Collaboration partners
This project is being conducted in collaboration with members of the Centre for Implementation Science at King’s College London, colleagues from University College London and academics and educators from the United States, Canada and Australia.
Potential benefits of the study
This research will enable greater transparency and consistency in the reporting of teaching and training initiatives in implementation science in the academic literature. In turn, this could give educators and those interested in the research a better understanding of the evidence base. Further, the information reported by authors (as a result of using these guidelines) could more clearly inform the future development of capacity building initiatives in implementation science. Ultimately, this could speed up the development of future training and help to address the current unmet need for training in implementation science.
The study was adopted by ARC South London Executive in July 2021 and will be completed by December 2021.