From reconstructing low-dose images to triaging critical cases, AI is already beginning to change the way some radiologists work, communicate and complete tasks on a day-to-day basis.
For the last couple of years, the industry has seen the writing on the wall — but now real results are starting to emerge.
The fastMRI collaboration between NYU Langone Medical Center and Facebook is a prime example. The project was announced in 2018 as a way to leverage AI to extract information from less data, speeding up MR scans without compromising diagnostic quality. Now, new research published in the American Journal of Roentgenology shows that the team’s scans are just as effective as traditional MR.
“If we are able to accelerate MR, then more people will have the ability to get an MR scan,” Dr. Michael Recht, Louis Marx Professor and chair of radiology at NYU Langone Medical Center, told HCB News. “We anticipate the cost of MR will come down as well.”
Realizing the advantages of AI in actual applications is an exciting frontier for medical imaging, but it demands a measured dose of skepticism in order to succeed. Not all algorithms are created equal, and when their limitations go unaddressed, they can adversely impact patient safety.
Faster scans, scheduling and triage
The teams working on fastMRI have produced scans up to four times faster than traditional MR systems by acquiring only one-fourth of the data. A dramatic reduction in scan time could alleviate problems associated with scanning certain patient populations, such as claustrophobia and the need for sedation.
“If you can reconstruct an acquired image with one half the data we usually acquire or one fourth of the data, it means the examination will be twice or four times as fast,” said Recht. “We use the neural network to, in essence, recognize what information is important in advance and acquire just that information.”
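The arithmetic Recht describes is straightforward: MR images are acquired line by line in k-space, so acquiring fewer lines directly shortens the scan, and a neural network fills in what was skipped. The sketch below is a toy illustration of that undersampling idea only — the mask pattern, line count, and variable names are invented for the example and are not the fastMRI project's actual method.

```python
import numpy as np

# Toy illustration of undersampled MR acquisition (not the fastMRI code):
# each phase-encode line of k-space takes roughly the same time to acquire,
# so keeping one line in four cuts scan time roughly fourfold.
num_lines = 256                       # phase-encode lines in a full scan
mask = np.zeros(num_lines, dtype=bool)
mask[::4] = True                      # keep every 4th line -> 4x fewer samples

acquired = int(mask.sum())
acceleration = num_lines / acquired
print(f"acquired {acquired} of {num_lines} lines, acceleration ~{acceleration:.0f}x")
# A trained reconstruction network would then estimate the missing k-space
# information from the lines that were actually acquired.
```

In practice the mask is rarely this uniform — the center of k-space, which carries most image contrast, is usually sampled more densely — but the time saving scales the same way with the fraction of lines kept.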
One of the things that make image reconstruction, like many other forms of AI, so exciting is that it operates behind the scenes. “Those applications are going to be built into the scanning device and be part of what the scanner does to produce the images, and no one will necessarily even know that,” said Dr. Daniel Rubin, a professor of biomedical data science, radiology and medicine at Stanford University.
Myriad AI solutions are in development to change every aspect of medical imaging from underneath the surface. For Rubin, the impact these applications will have on upstream tasks, such as scheduling or determining who requires scanning immediately, in addition to downstream tasks, such as triaging images, is just as important. Regardless of the pain point an algorithm may address, experts agree that radiologists themselves will remain vital.
“Radiologists are far more than just pattern recognizers. We need to understand the clinical situation, determine what the appropriate imaging exam is, and interpret the results providing context to our referring physicians as well as our patients,” said Recht. “AI can’t do all of that, so I think AI is going to be an assistant to radiologists and help us to be more efficient by doing some of the tasks we don’t need a radiologist to do, such as quantification.”
A glitch in data and judgment
Back in 2012, a deep learning method won the ImageNet Large Scale Visual Recognition Challenge for image classification, reigniting interest in deep learning. In the years that followed, however, reports of instabilities with these methods in classification began to emerge, leading to an avalanche of studies documenting the instability phenomenon in many other areas of deep learning applications.
“If you look at this from a mathematical point of view, you can simply see why this would happen,” said Anders Hansen, who holds a doctorate in mathematics and is an associate professor at the University of Cambridge. “You will end up getting unstable methods where even small perturbations on your input data can cause severe artifacts and even false positives and false negatives,” he said, adding that “In some sense, there is no way you can just run what you have and pray that you have enough data. It has to be much more sophisticated than this.”
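The instability Hansen describes can be seen even without deep learning. The toy sketch below — a made-up two-feature linear classifier, not a medical model — shows how an input sitting near a decision boundary can have its prediction flipped by a perturbation far smaller than the signal itself; deep networks can exhibit the same behavior in far higher dimensions.

```python
import numpy as np

# Toy illustration of prediction instability (invented numbers, not a
# medical model): an input near the decision boundary flips class under
# a perturbation of only a few percent of the input's magnitude.
w = np.array([1.0, -1.0])            # weights of a simple linear classifier
x = np.array([0.51, 0.50])           # input lying very close to the boundary

def predict(v):
    return "positive" if w @ v > 0 else "negative"

delta = np.array([-0.02, 0.02])      # tiny perturbation (~3% of the input)
print(predict(x))                    # -> positive
print(predict(x + delta))           # -> negative: nearly identical input, opposite call
```

A stable method would require a large change in the input to produce a large change in the output; documented instabilities in deep learning reconstruction and classification are precisely cases where that property fails.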
The availability of the right type of data for training an algorithm is not the problem. It’s the accessibility, with hospitals reluctant to share the data.
“A number of lawsuits related to patient identifier leakage, or access without necessary agreements or patient consent, have resulted in further conservative efforts by hospitals to protect their data by not giving it out,” said Rubin.
How AI products are validated also requires improvement, according to Hansen, who says both a mathematical understanding and standardized testing are required to recognize the limits of an AI application and whether or not it is safe for use.
“Having a false positive or a false negative is not something that should be evaluated equally. Having a false negative if you have cancer is much worse than a false positive,” he said. “A false negative will give false assurance that you don’t have it, you may not follow up, and the cancer will develop. If you’re going to do testing on this, we need to have a very sophisticated way of saying, ‘Is this safe, and in what sense?’”
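Hansen's point — that the two error types should not be weighted equally — can be made concrete with a cost-weighted score. The numbers below are invented for illustration: two hypothetical models with identical total error counts, which plain accuracy would rate as equivalent, diverge sharply once a missed cancer is penalized more heavily than a false alarm.

```python
# Toy illustration of asymmetric error costs in screening (all numbers
# invented): a false negative is penalized 20x more than a false positive,
# so accuracy alone is a misleading way to compare models.
def weighted_cost(fp, fn, fp_cost=1.0, fn_cost=20.0):
    """Total penalty for a model's false positives (fp) and false negatives (fn)."""
    return fp * fp_cost + fn * fn_cost

# Two hypothetical models, each with 12 total errors:
model_a = weighted_cost(fp=10, fn=2)   # 10*1 + 2*20  = 50
model_b = weighted_cost(fp=2, fn=10)   # 2*1  + 10*20 = 202
print(model_a, model_b)                # the FN-heavy model scores far worse
```

The specific 20:1 ratio is arbitrary here; the point is that any serious evaluation of a screening algorithm has to choose and justify such a weighting rather than report a single accuracy figure.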
Further complicating things, when the limitations in AI development and testing are not fully realized or understood, scientific reports and the media can exaggerate the capabilities of a particular algorithm.
"While we understand that authors of the study don't have full control over what the media will say about their work, the more overpromising you are in your scientific report, the easier it will be for others to spin the findings," said Dr. Myura Nagendran, registrar in intensive care medicine and clinical Ph.D. fellow at the Center for AI in Healthcare at Imperial College London. "Reporters don't always have a full grounding in the scientific and medical details with regard to how much further an algorithm still has to go before being adopted.”
Potential is years away
AI algorithms like AUTOMAP are designed to reduce reconstruction artifacts, but when they are unstable they can introduce errors that adversely affect patient care.
For all the potential of AI to reinvent imaging, the techniques are still in their infancy. Getting to the future where its capabilities are fully realized will require a tremendous amount of caution regarding the quality and safety of each new model.
“The feeling you can get now in medical imaging is that everybody is strictly better than state-of-the-art,” said Hansen. “That statement cannot be true. There is a strong incentive, given the large volume of people working on this, to have a slightly overinflated message about the performance of their methods. That is a challenge and that is why we’re here.”
Developers should follow reporting guidelines to check off every item during application testing, said Nagendran, adding that testing in both artificial and real-world environments is essential. He stressed that applications should use representative data sources in order to ensure they work across different patient populations.
"It's trying to find the balance between encouraging innovation but also making sure we don't have potentially dangerous products brought in too quickly,” he said. “More randomized trials, which are the gold standard for assessing whether things work and whether the safety profile is effective, will be the way forward.”
Rubin says healthcare stakeholders can help facilitate these innovations through collaborations with providers. “I think the most exciting future prospect is the emergence of collaborative networks in which hospitals work together to train AI algorithms that tap into large amounts of their collective data.”
“When you put AI and a human together, they outperform either one by themselves and I think that’s going to continue,” said Recht. “I think that’s how we’re going to get the biggest impact and benefit and value from AI.”