• Accuracy of radiographer plain radiograph reporting in clinical practice: a meta-analysis.

      Brealey, S.; Scally, Andy J.; Hahn, S.; Thomas, N.; Godfrey, C.; Coomarasamy, A. (2005)
      To determine the accuracy of radiographer plain radiograph reporting in clinical practice. MATERIALS AND METHODS: Studies were identified from electronic sources and by hand searching journals, personal communication and checking reference lists. Eligible studies assessed radiographers' plain radiograph reporting in clinical practice compared with a reference standard, and provided accuracy data to construct 2×2 contingency tables. Data were extracted on study eligibility and characteristics, quality and accuracy. Summary estimates of sensitivity and specificity and receiver operating characteristic curves were used to pool the accuracy data. RESULTS: Compared with a reference standard, radiographers report plain radiographs in clinical practice with 92.6% (95% CI: 92.0–93.2) sensitivity and 97.7% (95% CI: 97.5–97.9) specificity. Studies that compared selectively trained radiographers and radiologists of varying seniority against a reference standard showed no evidence of a difference between radiographer and radiologist reporting accuracy for accident and emergency plain radiographs. Selectively trained radiographers were also found to report such radiographs as accurately as those not solely from accident and emergency, although some variation in reporting accuracy was found for different body areas. Training radiographers improved their accuracy when reporting normal radiographs. CONCLUSION: This study systematically synthesizes the literature to provide an evidence base showing that radiographers can accurately report plain radiographs in clinical practice.
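
      As a minimal sketch of the accuracy data this abstract describes: sensitivity and specificity come from each study's 2×2 contingency table, and one simple way to pool them is to sum cell counts across studies. All counts below are hypothetical, and the review's actual pooling (summary estimates plus receiver operating characteristic curves) is more involved.

```python
# Hedged sketch: sensitivity/specificity from 2x2 contingency tables,
# pooled by summing cell counts across studies. All counts are
# hypothetical, not the review's data.

def sens_spec(tp, fp, fn, tn):
    """Sensitivity and specificity from one 2x2 table."""
    return tp / (tp + fn), tn / (tn + fp)

# (TP, FP, FN, TN) for each hypothetical study.
studies = [(120, 8, 10, 400), (95, 12, 7, 310), (210, 20, 15, 780)]

# Sum each cell over studies, then compute pooled accuracy.
tp, fp, fn, tn = (sum(cells) for cells in zip(*studies))
sens, spec = sens_spec(tp, fp, fn, tn)
print(f"pooled sensitivity = {sens:.3f}, pooled specificity = {spec:.3f}")
```
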
    • Accuracy of radiographers' red dot or triage of accident and emergency radiographs in clinical practice: a systematic review.

      Brealey, S.; Scally, Andy J.; Hahn, S.; Thomas, N.; Godfrey, C.; Crane, S. (2006)
      AIM: To determine the accuracy of radiographers' red dot or triage of accident and emergency (A&E) radiographs in clinical practice. MATERIALS AND METHODS: Eligible studies assessed radiographers' red dot or triage of A&E radiographs in clinical practice compared with a reference standard and provided accuracy data to construct 2×2 tables. Data were extracted on study eligibility and characteristics, quality, and accuracy. Pooled sensitivities and specificities and chi-square tests of heterogeneity were calculated. RESULTS: Three red dot and five triage studies were eligible for inclusion. Radiographers' red dot of A&E radiographs in clinical practice compared with a reference standard has a sensitivity of 0.87 [95% confidence interval (CI) 0.85–0.89] and a specificity of 0.92 (0.91–0.93). Radiographers' triage of skeletal A&E radiographs has a sensitivity of 0.90 (0.89–0.92) and a specificity of 0.94 (0.93–0.94); for chest and abdomen radiographs, sensitivity is 0.78 (0.74–0.82) and specificity 0.91 (0.88–0.93). Radiographers' red dot of skeletal A&E radiographs without training has a sensitivity of 0.71 (0.62–0.79) and a specificity of 0.96 (0.93–0.97); with training, 0.81 (0.72–0.87) and 0.95 (0.93–0.97). Pooled sensitivity and specificity for radiographers' triage of skeletal A&E radiographs without training are 0.89 (0.88–0.91) and 0.93 (0.92–0.94); with training, 0.91 (0.88–0.94) and 0.95 (0.93–0.96). CONCLUSION: Radiographers' red dot or triage of A&E radiographs in clinical practice is affected by body area, but not by training.
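
      The chi-square test of heterogeneity this abstract mentions can be illustrated with Cochran's Q: weight each study's sensitivity by its inverse variance and test whether the weighted squared deviations from the pooled value exceed chance. A hedged sketch with hypothetical counts follows; the paper's own weighting scheme may differ.

```python
# Hedged sketch: Cochran's Q chi-square test of heterogeneity across
# study-level sensitivities. Counts are hypothetical, not the review's.
from scipy.stats import chi2

# (true positives, total abnormal radiographs) per hypothetical study.
studies = [(120, 130), (95, 102), (210, 225)]

props = [tp / n for tp, n in studies]
# Inverse-variance weights on the proportion scale: var = p(1-p)/n.
weights = [n / (p * (1 - p)) for (tp, n), p in zip(studies, props)]
pooled = sum(w * p for w, p in zip(weights, props)) / sum(weights)
Q = sum(w * (p - pooled) ** 2 for w, p in zip(weights, props))
df = len(studies) - 1
print(f"Cochran's Q = {Q:.2f}, df = {df}, p = {chi2.sf(Q, df):.3f}")
```
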
    • Bivariate meta-analysis of sensitivity and specificity of radiographers' plain radiograph reporting in clinical practice.

      Brealey, S.; Hewitt, C.; Scally, Andy J.; Hahn, S.; Godfrey, C.; Thomas, N. (2009)
      Studies of diagnostic accuracy often report paired estimates of sensitivity and specificity that can be pooled separately to produce summary estimates in a meta-analysis. This was done recently for a systematic review of radiographers' reporting accuracy of plain radiographs. The problem with pooling sensitivities and specificities separately is that it does not acknowledge any possible (negative) correlation between these two measures. A possible cause of this negative correlation is that studies use different thresholds to define abnormal and normal radiographs, because of implicit variations in threshold that occur when radiographers report plain radiographs. The bivariate model allows for the correlation that can exist between pairs of sensitivity and specificity within a study, using a random-effects approach. When estimates of accuracy were pooled separately as a fixed-effects model, radiographers reported plain radiographs in clinical practice with 93% (95% confidence interval (CI) 92–93%) sensitivity and 98% (95% CI 98–98%) specificity. The bivariate model produced the same summary estimates of sensitivity and specificity, but with wider confidence intervals (93% (95% CI 91–95%) and 98% (95% CI 96–98%), respectively) that take into account the heterogeneity beyond chance between studies. This method also allowed us to calculate a 95% confidence ellipse around the mean values of sensitivity and specificity, and a 95% prediction ellipse for individual values of sensitivity and specificity. The bivariate model is an improvement on pooling sensitivity and specificity separately when there is a threshold effect, and it is the preferred method.
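
      The core idea behind the bivariate model, that study-level (sensitivity, specificity) pairs are correlated on the logit scale, can be illustrated without fitting the full random-effects model. The sketch below logit-transforms hypothetical pairs and estimates the between-study correlation that separate pooling ignores; an actual analysis would fit the bivariate normal model, typically via a (generalized) linear mixed model.

```python
# Hedged sketch of the bivariate model's motivation: the negative
# correlation between logit(sensitivity) and logit(specificity) across
# studies. Pairs are hypothetical, not the review's data.
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

# (sensitivity, specificity) per hypothetical study.
pairs = np.array([(0.95, 0.90), (0.92, 0.95), (0.88, 0.97), (0.97, 0.89)])

z = logit(pairs)
r = np.corrcoef(z[:, 0], z[:, 1])[0, 1]
print(f"between-study correlation of logit(sens), logit(spec): {r:.2f}")
```
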
    • Evidence of reference standard related bias in studies of plain radiograph reading performance: a meta-regression.

      Brealey, S.; Scally, Andy J.; Hahn, S.; Godfrey, C. (2007)
      The aim is to determine the effect of reference standard related bias on estimates of plain radiograph reading performance, using studies conducted in clinical practice. Data were extracted on study eligibility, clinical and reference standard characteristics, and reading performance. The choice of reference standards and the prevalence of bias are presented descriptively. Associations between bias and reading performance are estimated using a regression model that produces relative diagnostic odds ratios (RDORs) with 95% confidence intervals (CIs). Three of the 20 eligible studies addressed all five reference standard related biases; 15 studies addressed three or more. When the reference standard report is influenced by knowledge of an observer's opinion, this is associated with a significant overestimation of reading performance (RDOR, 3.7; 95% CI, 1.6 to 8.3; p = 0.01). There is limited evidence that reading performance is inflated when the observer is aware of the reference standard report before commenting on the radiograph (RDOR, 1.7; 95% CI, 0.6 to 5.1), and deflated when a less valid reference standard is used (RDOR, 0.5; 95% CI, 0.1 to 2.5). There is no evidence that reading performance is affected by applying the reference standard depending on an observer's opinion, or by using different reference standards in the same study. In conclusion, we found variation in the choice and application of reference standards in studies of plain radiograph reading performance, but only a reference standard reported in the knowledge of an observer's opinion contributed to a significant overestimation of reading performance.
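
      The regression producing RDORs can be sketched as a weighted least-squares regression of each study's log diagnostic odds ratio on a 0/1 bias indicator, with exp(slope) giving the RDOR for biased versus unbiased studies. Counts and bias flags below are hypothetical, and the published meta-regression model is likely more elaborate (e.g. incorporating random effects and several covariates at once).

```python
# Hedged sketch of a DOR meta-regression: regress log diagnostic odds
# ratio on a bias indicator; exp(slope) approximates the relative
# diagnostic odds ratio (RDOR). Data are hypothetical.
import numpy as np

# (TP, FP, FN, TN, bias_present) per hypothetical study.
studies = [(120, 8, 10, 400, 1), (95, 12, 7, 310, 0),
           (210, 20, 15, 780, 1), (60, 5, 9, 150, 0)]

log_dor = [np.log((tp * tn) / (fp * fn)) for tp, fp, fn, tn, _ in studies]
bias = [b for *_, b in studies]

# Inverse-variance weights: var(log DOR) = 1/TP + 1/FP + 1/FN + 1/TN.
w = [1 / (1/tp + 1/fp + 1/fn + 1/tn) for tp, fp, fn, tn, _ in studies]

# Weighted least squares: intercept plus bias indicator.
X = np.column_stack([np.ones(len(studies)), bias])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ np.array(log_dor))
print(f"RDOR (biased vs unbiased studies): {np.exp(beta[1]):.2f}")
```
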