Published September 29, 2020
It all began with The Roentgen Fund® grant for deep learning cardiac MRI

Albert Hsiao
Associate Professor of Radiology, University of California-San Diego
2019 ARRS Scholar
A specific form of artificial intelligence (AI), called a convolutional neural network (CNN), is rapidly becoming a standard tool for analysis of biomedical images, including radiography, CT, and MRI. Within only a few years, many laboratories have adopted this technology to answer specific medical questions, make technical advances, or enhance clinical workflows. For example, CNNs have shown a remarkable ability to detect lung cancer on CT scans or diabetic retinopathy on fundoscopic photographs. One important reason for this rapid adoption is their simplicity and ease of use, as CNNs are capable of learning characteristics or “features” of disease without being explicitly programmed. Another is their remarkable generalizability, as they can be taught to be insensitive to artifacts, body habitus, technique, or even modality (CT or MRI). Just like humans.
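For readers less familiar with how such a network is put together, the snippet below is a minimal, purely illustrative sketch of a two-class convolutional classifier in PyTorch. The layer sizes, single-channel input, and two-class output are assumptions for demonstration only; they are not any of the models described in this article.

```python
# Illustrative only: a tiny convolutional network for two-class image
# classification (e.g., "disease" vs. "no disease" on a grayscale image).
# Architecture and sizes are arbitrary assumptions for demonstration.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # low-level features learned from data
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level features
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                       # pool to one value per feature map
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)                    # feature maps are learned, not hand-programmed
        return self.classifier(h.flatten(1))    # class scores

# Example: one 256x256 grayscale image -> two class scores.
scores = TinyCNN()(torch.randn(1, 1, 256, 256))
print(scores.shape)  # torch.Size([1, 2])
```

The key point is that nothing in this code encodes what a tumor or a hemorrhage looks like; the convolutional filters acquire those features from labeled examples during training.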
The remarkable power of CNNs has led some physicians to conjure up mythical battles between man and machine, prompting radiologists to reflect on how CNNs contribute diagnostic value in medical imaging. With time, many of us have come to see this technology as a way to enhance our practices, reducing the clutter and labor that limit our ability to engage our higher cognitive skills in diagnosing and managing our patients. There are few clinical arenas in diagnostic radiology that are more interesting, yet beset by more clutter and labor, than cardiac imaging. That is, in fact, one of the reasons I was reluctant to pursue this field, initially choosing intervention instead. Fortunately, I was guided to notice that where there are challenges, there are often extraordinary opportunities.
Several years ago, we began exploring the potential of CNNs to simplify the process of performing and interpreting cardiac MRI. The idea was simple. There were too few technologists proficient in cardiac MRI, and radiologists' ability to carefully supervise these examinations had been compromised by diminishing reimbursements and increasing clinical volumes. However, if we could use AI to automate the manual tasks on the scanner, we might be able to both improve image quality and reduce the amount of training needed for our technologists. AI could serve as our conduit of knowledge to improve the quality of the imaging we provide. Our efforts quickly showed promise, and we began integrating our approach into clinical software and the MRI scanner through research grants and a partnership with GE Healthcare.
The ARRS Scholarship gave me an opportunity to take this project in a new direction, treating it not just as a clinical end product but also as a way to explore fundamental questions about how we bring AI to our practices. Specifically, in our proposal, entitled “Adaptive Artificial Intelligence for the Acquisition and Analysis of Multiplanar MRI,” we asked: What tools do we have at our disposal to ensure that our algorithms work well across multiple field strengths and technological improvements in MRI? Imaging equipment is constantly changing and improving, and AI algorithms need to grow and adapt, just as we do. We had some hints that CNNs could have the flexibility to learn continuously. My brilliant students and I are looking at this from several angles, asking a few key questions. Can we predict which images the AI algorithm will fail to process correctly? If so, can we selectively collect these images so the algorithm can learn from them and improve its performance? The results so far are promising, and we are preparing a manuscript on this topic for submission.
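To give a flavor of what selecting likely-failure cases can look like, here is a minimal sketch of one common strategy: rank unlabeled images by a model's predictive uncertainty and route the most uncertain ones for expert review and retraining. The model, data, and entropy criterion here are illustrative assumptions, not necessarily the method described in our manuscript.

```python
# A minimal sketch of uncertainty-based case selection: flag the images a
# classifier is least confident about as candidates for expert labeling.
# "model" and "images" are hypothetical placeholders.
import torch
import torch.nn.functional as F

@torch.no_grad()
def most_uncertain(model, images, k=10):
    """Return indices of the k images with the highest predictive entropy."""
    model.eval()
    probs = F.softmax(model(images), dim=1)                        # per-class probabilities
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)   # uncertainty per image
    return entropy.topk(k).indices                                 # candidates for expert review

# Hypothetical usage: flag 10 studies from an unlabeled batch, have experts
# annotate them, then fold them back into the training set.
# hard_cases = most_uncertain(model, unlabeled_batch, k=10)
```

The design choice being illustrated is the feedback loop itself: rather than labeling images at random, the algorithm helps prioritize the cases it handles worst, so each round of expert annotation buys the largest possible improvement.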
We also looked at this from another perspective. There are far more x-rays in my practice than cardiac MRIs. Data are king when it comes to developing AI. Further, early and atypical pneumonias can be readily missed by trainees and even by experienced faculty. Could we use the same approach that we use to identify the mitral valve on cardiac MRI to find pneumonia on radiography? Could we enhance our x-rays with color, the same way we use 4D Flow to enhance cardiac MRI? Yes and yes.
Another unexpectedly good application of #4dflow #mri @UCSDHealth! What’s the diagnosis @UCSDRadRes @UCSDCardFellows? Any other views or pulse sequences you would want to see? @SCMRorg @NASCISociety @RSNA @ARRS_Radiology @GeoffRubin @DanielVargasMD @kirschj @dianalitmano pic.twitter.com/dXZSTcaFOP
— Albert Hsiao, MD, PhD (@HsiaoMDPhD) September 23, 2020
We did not anticipate that coronavirus disease (COVID-19) would become so prevalent and change our lives so dramatically, but as it emerged, we had the opportunity to test our AI algorithm on x-rays from some of the first patients diagnosed with COVID-19 pneumonia. Surprisingly, its output tracked well with the severity of pneumonia. We have since received funding and support from the University of California Office of the President, National Science Foundation, NVIDIA/Groupware, Amazon, and Microsoft AI for Health to further explore this promising approach to AI.
We did not initially set out to develop an AI algorithm to better diagnose and manage COVID-19 when we devised our strategy for cardiac MRI. But it goes to show that the benefits of investments in research and education are not always linear or predictable. They create opportunities for young clinicians and researchers to tackle important questions that can ultimately shape the future of our field.
Help The Roentgen Fund® provide support to talented young radiologists with your tax-deductible gift. 100% of your donation will go to funding scholarship and fellowship programs. Receive a special thank-you gift from ARRS when you contribute $200 or more, in addition to many other donor benefits.