Browsing Health Studies by Subject "Radiographers"
Now showing items 1-3 of 3
Accuracy of radiographers' red dot or triage of accident and emergency radiographs in clinical practice: a systematic review. AIM: To determine the accuracy of radiographers' red dot or triage of accident and emergency (A&E) radiographs in clinical practice. MATERIALS AND METHODS: Eligible studies assessed radiographers' red dot or triage of A&E radiographs in clinical practice compared with a reference standard and provided accuracy data to construct 2×2 tables. Data were extracted on study eligibility and characteristics, quality, and accuracy. Pooled sensitivities and specificities and chi-square tests of heterogeneity were calculated. RESULTS: Three red dot and five triage studies were eligible for inclusion. Radiographers' red dot of A&E radiographs in clinical practice compared with a reference standard yielded a sensitivity of 0.87 [95% confidence interval (CI) 0.85–0.89] and a specificity of 0.92 (0.91–0.93). Radiographers' triage of A&E radiographs of the skeleton yielded a sensitivity of 0.90 (0.89–0.92) and a specificity of 0.94 (0.93–0.94); for the chest and abdomen, 0.78 (0.74–0.82) and 0.91 (0.88–0.93), respectively. Radiographers' red dot of skeletal A&E radiographs without training yielded a sensitivity of 0.71 (0.62–0.79) and a specificity of 0.96 (0.93–0.97); with training, 0.81 (0.72–0.87) and 0.95 (0.93–0.97). Pooled sensitivity and specificity for radiographers without training for the triage of skeletal A&E radiographs were 0.89 (0.88–0.91) and 0.93 (0.92–0.94); with training, 0.91 (0.88–0.94) and 0.95 (0.93–0.96). CONCLUSION: Radiographers' red dot or triage of A&E radiographs in clinical practice is affected by body area, but not by training.
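The 2×2 tables behind these pooled estimates reduce to simple ratios. A minimal sketch, using made-up illustrative counts (not data from the review), of computing sensitivity and specificity from true/false positive/negative counts and pooling by summing counts across studies; the review's exact pooling method may differ:

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def pooled(tables):
    """Pool 2x2 tables by summing cell counts, then recomputing.

    Simple count pooling for illustration; a formal meta-analysis
    may weight studies differently (e.g. random-effects models).
    """
    tp = sum(t[0] for t in tables)
    fp = sum(t[1] for t in tables)
    fn = sum(t[2] for t in tables)
    tn = sum(t[3] for t in tables)
    return sensitivity_specificity(tp, fp, fn, tn)

# Hypothetical counts for two studies, as (TP, FP, FN, TN):
studies = [(87, 8, 13, 92), (45, 6, 5, 94)]
sens, spec = pooled(studies)  # pooled sensitivity and specificity
```

A single study's table feeds `sensitivity_specificity` directly; `pooled` shows why studies must report enough data to reconstruct the full 2×2 table, as the eligibility criteria above require.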
Bias in plain film reading performance studies. Radiographers and other healthcare professionals are becoming increasingly involved in radiological reporting, for instance of plain radiographs, mammography and ultrasound. Systematic reviews of research evidence can help to assimilate a knowledge base by ordering and evaluating the available evidence on the reporting accuracy of different professional groups. This article reviews the biases that can undermine the results of plain film reading performance studies. These biases are subdivided into three categories. The first category refers to the selection of subjects, including both films and professionals, and covers the validity of generalizing results beyond the study population. The other two categories are concerned with study design and the interpretation both of films and of reports and the effect on study validity. An understanding of these biases is essential when designing such studies and when interpreting the results of existing studies.
Methodological standards in radiographer plain film reading performance studies. The objectives of this paper are to raise awareness of the methodological standards that can affect the quality of radiographer plain-film reading performance studies and to determine the frequency with which these standards are fulfilled. Multiple search methods identified 30 such studies published between 1971 and the end of June 1999. The percentages of studies that fulfilled the criteria for the 10 methodological standards were as follows: (1) performance of a sample size calculation, 3%; (2) definition of a normal and abnormal report, 97%; (3) description of the sequence of events through which films passed before reporting, 94%; (4) analysis of individual groups of observers within a combination of groups, 50%; (5) appropriate choice of reference standard, 80%; (6) appropriate choice of arbiter, 57%; (7) appropriate use of a control, 22%; (8) analysis of pertinent clinical subgroups, e.g. body areas, patient type, 44%; (9) availability of data for re-calculation, 59%; and (10) presentation of indeterminate results, 69%. These findings indicate variation in the application of the methodological standards to studies of radiographers' film reading performance. Careful consideration of these standards is an essential component of study quality and hence of the validity of the evidence base used to underpin radiographic reporting policy.