Abstract

Summary

Within the health sciences, action research is a methodology well suited to the goal of collaboratively improving practice. As the Royal College of Radiologists recommends the use of published clinical trials as guides for achieving higher standards of accuracy, it is important for radiologists to reflect deeply on the results of diagnostic accuracy studies. Diagnostic accuracy research compares the results of a newer, or index, test against those of a gold standard (or reference standard) used to confirm a particular diagnosis or disease. In the reporting of all research, every effort must be made to reduce the incidence of bias. In 2003, the STARD (Standards for Reporting Diagnostic Accuracy) tool was developed to help clinicians improve the quality of reporting of diagnostic accuracy studies. Based on previous studies, experiential knowledge, and an extensive review of the literature, this research demonstrates that the STARD tool is not being used to its full potential. The overall aim of this research was to conduct a work-based project within the department of radiology to develop a revised tool, based on the current STARD, that could then be used to report and interpret the results of radiology diagnostic accuracy trials more accurately. This study was conducted in accordance with participatory action research.

Methods

The new reporting tool was developed in collaboration with a group of physicians, in two distinct phases. First, a needs assessment was sent to eight radiological experts who had agreed to participate in the study. Based on their responses, and on feedback from my mentor and colleagues, the next phase of tool development used the Delphi technique; consensus was reached after two rounds. Each phase and iteration of the needs assessment and the Delphi technique corresponds to a cycle of action research.
The new reporting tool was named the RadSTARD (Radiology Standards for the Reporting of Diagnostic Accuracy Studies), and an elaboration document was written to provide guidance to the end user. Radiology residents and Fellows at The Ottawa Hospital were then asked to rate their level of confidence in interpreting a diagnostic accuracy article specific to radiology while referring to the RadSTARD. For comparison, they were also provided a second diagnostic accuracy article, the STARD tool, and its elaboration document. Data were collected using questionnaires that allowed for additional comments.

Findings

The validation phase of the RadSTARD tool was completed via triangulation of data, as both quantitative and qualitative analyses were performed. Mann-Whitney and chi-square analyses found no statistically significant difference between the two groups. Both physician groups indicated that the RadSTARD increased their level of confidence when interpreting the diagnostic accuracy article, and, combined, 96% of participants across the two groups indicated they would use the tool again.

Interpretation

These results may be interpreted as generalizable, as no discrepancy or statistically significant difference was found between the radiology residents' and Fellows' scores, despite the differences in their level of training. Both groups found the RadSTARD tool and elaboration document to be beneficial when interpreting the literature. The RadSTARD is thus a reliable tool that can be used to validate the results of diagnostic accuracy studies specific to radiology. It will aid radiologists in reporting and interpreting radiology diagnostic accuracy studies, impacting their practice for generations to come.
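The Findings compare the two physician groups' confidence ratings with a Mann-Whitney test, which ranks the pooled scores rather than assuming they are normally distributed. As a minimal, stdlib-only illustration of how that rank-based comparison works, the sketch below implements the U statistic with a normal approximation for the p-value; the rating data are made up for the example, and a real analysis would use a statistical package with a tie-corrected variance.

```python
import math

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U via normal approximation (no tie correction).

    Illustrative only -- not the analysis code used in the study.
    Returns (U for group x, two-sided p-value).
    """
    pooled = sorted(x + y)
    # Assign average ranks to tied values (common with Likert-scale data).
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + j + 1) / 2  # mean of ranks i+1 .. j
        i = j
    n1, n2 = len(x), len(y)
    r1 = sum(rank[v] for v in x)        # rank sum of group x
    u1 = r1 - n1 * (n1 + 1) / 2         # U statistic for group x
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u1, p

# Hypothetical 5-point confidence ratings for residents vs Fellows.
residents = [4, 5, 4, 3, 5, 4]
fellows = [4, 4, 5, 3, 4, 5]
u, p = mann_whitney_u(residents, fellows)
```

A large p-value here, as in the study, means the ranked confidence scores of the two groups are statistically indistinguishable, which is what supports the generalizability claim in the Interpretation.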