All signs point to artificial intelligence (AI) as radiology’s next frontier, pairing automation with ever-improving accuracy. Whether in the hidden truths of women’s imaging revealed by Marcia C. Javitt in this issue of InPractice or in Patrick M. Colletti’s examination of deep learning developments in cardiopulmonary imaging, AI is here to stay, raising questions and fostering debate about how to ethically implement and assess AI, machine learning, predictive analytics, and other emerging algorithms so that they best serve both radiologists and patients.
In recent months, international imaging societies and the U.S. Food and Drug Administration (FDA) have unveiled documents that endeavor to establish guidelines for the ethical concerns arising from the accelerating advance of AI in medical imaging. On February 21, the Royal Australian and New Zealand College of Radiologists (RANZCR), having convened a working group of practicing radiologists, computer scientists, data scientists, and related AI professionals, published a 52-page primer, Ethical Principles for AI in Medicine, intended to “complement existing medical ethical frameworks” for the training, deployment, and fair use of AI tools in radiology and radiation oncology. Accompanied by a call for public comment, RANZCR’s eight principles were, for six days at least, the only ones of their kind proffered by a professional healthcare body:
- Safety
- Avoidance of Bias
- Transparency and Explainability
- Privacy and Protection of Data
- Decision Making on Diagnosis and Treatment
- Liability for Decisions Made
- Application of Human Values
- Governance
“Of all the places that we could jump in,” RANZCR President Lance Lawler told InPractice, “why start with ethics? In radiology, where we heavily regulate the doctors and the imaging machines, lest they drift out of specification and patient harm happens, it is inconceivable that the AI tools wouldn’t be regulated in some way. In an attempt to have something on which to base these conversations, we started with ethics. The theory is that ethics help build practice standards, and standards form the basis for regulation.”
Less than a week later, on February 26, a consortium of seven major radiology organizations (the American College of Radiology, European Society of Radiology, Radiological Society of North America, Society for Imaging Informatics in Medicine, European Society of Medical Imaging Informatics, Canadian Association of Radiologists, and American Association of Physicists in Medicine) published Ethics of AI in Radiology. This 38-page document was the product of a diverse cohort: North American and European radiologists, imaging informaticists, medical physicists, patient advocates, attorneys, and a bona fide philosopher. Noting that its preliminary draft was “aspirational rather than prescriptive” and that it sought to “foster trust among all parties that radiology AI will do the right thing for patients and the community,” the multi-society assembly highlighted three principles:
- The ethics of data, including informed consent, data privacy, and the ownership and transactions of patient data, along with technical and social issues related to bias
- The ethics of algorithms and considerations to verify their safety and moral use
- The ethics of practice, including practice-level policies to do the right things for patients, in order to minimize inequalities related to resources and potential gain
“Having kept the writing group fairly small to facilitate an accelerated turnaround, it’s now critical to get extensive comments from the broader imaging and healthcare ecosystem,” wrote Geraldine McGinty, chair of the ACR Board of Chancellors. “We also anticipate that, given the pace of change in this sphere, this document will be a living one.” Apropos of AI’s quickening evolution, the committee intends to release an updated version.
In both the RANZCR and the multi-society papers, patient data is paramount; each stresses the importance of protecting access to, and preserving the security of, the information used for algorithmic research and training. Acknowledging that no system’s firewall is unassailable, each document nonetheless insists that every effort be made to protect patient privacy. The multi-society paper specifically emphasized the threats posed by data transfer, because any unsecured transmission creates a “risk that bad actors with access to medical data could extort patients who have aspects of their medical history that they wish to remain private.”
The two papers also discuss adopting best practices to avoid bias in AI systems. To help minimize the potential for bias, RANZCR asserted that “the characteristics of the training data set and the environment in which it was tested must be clearly stated when marketing an AI tool to provide transparency and facilitate implementation in appropriate clinical settings.” Moreover, declared RANZCR, “particular care must be taken when applying an AI tool trained on a general population to indigenous or minority groups.” The more varied the data an AI tool acquires, the more equitable its intelligence becomes, an opening the multi-society assembly deemed an “opportunity to invite diverse stakeholders to audit the models for bias.”
Right now, however, both the RANZCR and the multi-society papers affirmed that the most urgent ethical question facing AI in radiology is a decisive one: what role should artificial intelligence play in radiologists’ decision making? Here, too, the papers urged restraint, insisting that AI-guided tools be confined to advisory roles. “Final decisions,” RANZCR maintained, “are recommended by the doctor with due consideration given to the patient’s presentation, history, and preferences.” The two papers also advised radiologists to be increasingly transparent about the diagnostic and therapeutic functions they themselves perform. “As complex dynamic networked systems evolve, it may be difficult to attribute responsibility among different AI agents, let alone between machines and humans,” warned the multi-society paper.
Amplifying radiology’s response to the need for AI oversight, on April 2 the FDA released a white paper, Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device, concerning a class of adaptive AI systems whose performance changes with continuous exposure to new clinical data (i.e., “a continuous learning algorithm”). In an accompanying press release outlining his agency’s response to this real-world paradigm, outgoing FDA Commissioner Scott Gottlieb noted that the AI technologies the FDA has cleared thus far have been “locked”: they do not continue to learn with each subsequent use. Locked algorithms are instead updated at predetermined intervals by their manufacturers, who can validate each revision and verify that it performs as intended.
In its new agenda, the FDA addressed an inflection point for healthcare professionals and patients alike: accommodating algorithms that learn without manufacturer intervention. Perhaps even more than the AI tools themselves, the ethical protocols for machine learning, predictive analytics, and other algorithms remain in a fledgling state. As the latest developments in the field illustrate, now is the time to consider the myriad ethical implications of artificial intelligence in radiology.
The opinions expressed in InPractice magazine are those of the author(s); they do not necessarily reflect the viewpoint or position of the editors, reviewers, or publisher.