
Nadja Kadom
Department of Radiology and Imaging Sciences, Emory University; Pediatric Imaging Assistant Editor, AJR
Cindy Lee
Department of Radiology, NYU Langone Health
Competency of radiologists is an important factor in the delivery of high-quality patient care. To meet maintenance of certification (MOC) and Joint Commission requirements for ongoing professional practice evaluation (OPPE), radiologists participate in peer review, a system that uses accuracy of interpretation as a surrogate marker for competency. A widely used example of such a system is RADPEER, a web-based product developed by the American College of Radiology (ACR).
Benefits of peer review as a means of assessing radiologist competency include the availability of commercial products, accreditation that is accepted by the Joint Commission, credit toward American Board of Radiology MOC Part 4, and familiarity with established peer review systems. Existing peer review systems are generally set up to create as little additional workload as possible, which fosters compliance.
There are, however, several issues regarding peer review systems, including great variability in how peer review data are collected. Some programs select cases randomly, whereas others allow physicians to select cases, which could introduce bias or result in case selections that are quick to review. There is also variability in how peer review is executed (i.e., the number of cases to review and the frequency of peer review).
Systems that select cases for peer review but are not integrated into the daily exam reading workflow carry a risk that participants will wait until the deadline and then hastily review a large number of cases, calling into question the quality of such reviews. Most peer review participants are never formally instructed on how to use the scoring system, and the lack of examples for each scoring category can result in scoring inconsistencies. Peer review participants prefer anonymous review, although in most practices both the reviewer and the reviewed can identify each other, and friendly or unfriendly relationships may influence the scoring of agreement. There is often a stated peer review goal, such as achieving a disagreement rate of less than 5–10%, which can create bias and result in under-reporting.

Regarding the use of peer review data, as a ground rule, data should be collected and reported in a fashion that does not invite medicolegal action or repercussions at the local level. Allowing peer review data to be reviewed by other radiologists or officials at higher levels of the organization incites fear in participants, which could adversely affect relationships within the department or the organization. Participants may need transparency regarding local and state policies and medicolegal safeguards in order to trust and honestly use the review system. Some radiology practices have even used peer review data to terminate radiologists' contracts; yet a peer review system cannot serve to coach and judge at the same time. Moreover, there is no evidence to suggest that peer review is an appropriate tool for identifying radiologists who are inadequate performers.
To improve patient outcomes, we need to move from quality assurance to quality improvement. Our goal should be to reduce diagnostic errors, which contribute to 10% of patient deaths and 6–17% of adverse events in hospitals.
Radiologists should be able to learn from their mistakes—an essential component of improving patient safety. Disclosing and discussing errors for learning and improvement purposes requires a so-called Just Culture that acknowledges human error, avoids blame, promotes fair accountability, and focuses on fixing system deficiencies. Just Culture often goes hand-in-hand with Safety Culture, which is found in high-reliability organizations, where error rates are very low. This culture entails, among other traits, embracing every opportunity to learn from mistakes and fix systems and processes for error prevention.
Several sources are available for providing timely feedback. For example, during the read-out, a radiologist may find an error after reviewing the prior study. Consultation with a referring clinician and second review of studies for multidisciplinary conferences can reveal errors, comparison of pathology or surgical reports with imaging results can uncover errors, and complaints to radiology leadership or incident reporting systems can disclose errors. Radiologists may decide which form of feedback they prefer, be it a collegial email, a templated email, or routing of this information through third parties such as section chiefs or quality officers. Many automated systems facilitate peer feedback as part of the daily workflow rather than as a separate activity, for example through integration with PACS, dictation systems, or other tools that automatically send surgical and pathology results to the radiologist who made the diagnosis. Each method has different ramifications regarding medicolegal discoverability, which may be an important consideration for practicing radiologists.
Recommended best practices for raising learning opportunities to the group level include establishing a small committee to select cases with the highest learning potential, removing all identifying case information and keeping the radiologist who interpreted the case anonymous, recording peer learning conferences to enable asynchronous participation, avoiding blame or fault-finding, encouraging discussion of pitfalls, mimics, and strategies for error prevention, and providing relevant scientific references. Compliance with OPPE can be achieved in ways other than peer review discrepancy rates, such as recording participation in peer learning conferences, case submissions, or improvement initiatives completed as a result of peer case discussions.
In 2019, the ARRS Performance Improvement Subcommittee decided to tackle the topic of transitioning from peer review to peer learning. The subcommittee assumed that most peer review programs focus on error detection, numerical scoring, and radiologist-specific error rates, with questionable effectiveness regarding learning and systemic improvement. The subcommittee created a 21-question multiple-choice survey that was emailed to 17,695 ARRS members; 742 (4.2%) responded. Most respondents were in private practice (51.7%, 283/547), in a practice of 11–50 radiologists (50.5%), and in an urban setting (61.6%). Considerable diversity was noted in several aspects of peer review systems, including use of RADPEER (44.3%), selection of cases by commercial software (36.2%) versus manually (31.2%), and varying numbers of cases mandated for review per month (range, < 10 to > 21). Interestingly, > 60% of respondents reported using peer review for group education. A great need for turning peer review into peer learning exists: almost half (44.5%) of respondents reported being dissatisfied with their current process, citing insufficient learning (> 70%) and a sense of inaccurate performance representation (57.1%). Most respondents found the following review methods feasible in daily practice: incidental observation (65%), focused practice review (52.9%), professional auditing (45.8%), and blinded double reading (35.4%). Overall, it seems that a majority of practicing radiologists have already migrated toward peer learning systems and consider a variety of workflow-integrated review methods practicable. Establishing a peer learning system may require investments in staff and time, but as evidence continues to mount that peer learning, as opposed to traditional peer review, results in improved practice, better patient outcomes, and higher radiologist satisfaction, these investments appear to be justified.
The opinions expressed in InPractice magazine are those of the author(s); they do not necessarily reflect the viewpoint or position of the editors, reviewers, or publisher.