More than 150 million chest X-ray (CXR) scans are obtained annually in the U.S., making the CXR the most commonly performed radiological exam worldwide. But a shortage of physicians able to interpret these exams hinders quick and accurate diagnoses for patients.
Machine and deep learning startup Zebra Medical Vision Ltd. has set out to change this, revealing this week its research on Textray, a deep learning solution designed to perform automated chest X-ray analysis.
“The likelihood for major diagnostic errors is directly correlated with both shift length and volume of examinations being read, a reminder that diagnostic accuracy varies substantially even at different times of the day for a given radiologist,” said the authors in their study. “Hence, there exists an immense unmet need and opportunity to provide immediate, consistent and expert-level insight into every CXR.”
CXRs are considered among the most difficult exams by the radiology community, with even experts committing clinically substantial errors in 3-6 percent of studies and minor ones in 30 percent.
Because of the worldwide shortage of radiologists, preliminary interpretations often fall to radiographic technicians in Africa and Europe and to non-radiology physicians in the U.S., decreasing waiting times at the expense of diagnostic accuracy.
Applying a sentence boundary detection algorithm to 2.1 million CXR studies, the authors identified and tagged a relatively small set of sentences, yielding 959,000 studies for training the Textray model to identify the 40 most prevalent findings on CXRs, taking both frontal and lateral scans into account.
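The paper's exact labeling pipeline is not detailed in this article, but the general idea of tagging report sentences to derive study-level training labels can be sketched roughly as follows. The finding names, keyword rules, and sample report below are invented for illustration; the actual Textray taxonomy covers 40 findings.

```python
import re

# Hypothetical keyword rules mapping report sentences to findings.
# These are illustrative only, not the study's actual taxonomy.
FINDING_KEYWORDS = {
    "cardiomegaly": ["cardiomegaly", "heart is enlarged"],
    "pleural_effusion": ["pleural effusion"],
}

def split_sentences(report: str) -> list[str]:
    """Naive sentence boundary detection: split after ., !, or ?"""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", report) if s.strip()]

def label_report(report: str) -> set[str]:
    """Tag each sentence via keyword rules; a study's labels are the union."""
    labels = set()
    for sentence in split_sentences(report):
        lowered = sentence.lower()
        for finding, keywords in FINDING_KEYWORDS.items():
            if any(keyword in lowered for keyword in keywords):
                labels.add(finding)
    return labels

report = "The heart is enlarged. Small right pleural effusion is seen."
print(sorted(label_report(report)))  # ['cardiomegaly', 'pleural_effusion']
```

Tagging at the sentence level rather than per report is what keeps the manual annotation burden small: a handful of tagged sentence patterns can propagate labels across hundreds of thousands of studies.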
Twelve of the findings were then used to compare Textray’s performance with that of three expert radiologists; average agreement rates fell within 95 percent confidence intervals for ten of the twelve findings, the exceptions being rib fracture and hilar prominence.
Additional testing of all 40 findings showed comparable performance for most. Vertebral height loss, consolidation, rib fracture, and kyphosis were noted to be accurately detected using the lateral view.
“We still have additional development, validation and regulatory work to do before a product can go to market, but those are the steps we will be taking over the next few months,” Elad Benjamin, co-founder and CEO of Zebra Medical Vision, told HCB News. “We believe we can make a significant impact by helping radiologists manage their X-ray workload in a more efficient and accurate way by providing an automated analysis tool for that specific modality.”
The unveiling of the Israeli enterprise’s research follows the recent CE markings for its seventh AI imaging algorithm, designed to detect suspected malignant lesions in mammograms, and its bleed detection algorithm.
In addition to the unveiling of its research, Zebra Medical Vision also announced its C round funding of $30 million, bringing its total investment as a company to $50 million. AMoon Ventures led the investment, which included Aurum, Johnson & Johnson Innovation JJDC Inc., Intermountain Healthcare, and leading global AI scientists Fei-Fei Li and Richard Socher.
The company is currently exploring a similar technique for AP chest X-ray scans, as well as musculoskeletal and abdominal radiographs.
The findings are available on the arXiv.org e-Print archive and will be presented at the 21st International Conference on Medical Image Computing and Computer Assisted Intervention this September in Granada, Spain.