For the latest information on vRad’s Artificial Intelligence program please visit vrad.com/radiology-services/radiology-ai/
Early hype implied that artificial intelligence would replace radiologists. It turns out the opposite is closer to the truth. Without radiologists, AI in diagnostic imaging has no future. Radiologists are integral to model development, testing and validation.
We are beyond theory and academic papers and are implementing substantial, results-driven AI solutions in real time to improve patient care, as Imad Nijim reported in his recent post.
For example, vRad AI models score studies for the probability of various pathologies. Results can be used to adjust worklist prioritization, ensuring that patients with a high probability of critical conditions are treated as quickly as possible.
In addition, we use AI to monitor studies submitted as non-emergent for critical pathologies and prioritize those as well. There are many processes in place, both at vRad and at the facilities we serve, to avoid these situations, but if a study were to slip through the cracks of existing processes, AI would be in the background as another layer of assurance that patients get the care they need.
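To make the mechanics concrete, here is a minimal sketch of score-based worklist prioritization. The class names, pathology labels, and the 0.85 escalation threshold are illustrative assumptions, not vRad's actual pipeline; the idea is simply that studies whose model score crosses a critical threshold jump the queue regardless of how they were originally ordered.

```python
from dataclasses import dataclass, field

# Hypothetical threshold and field names, for illustration only.
CRITICAL_THRESHOLD = 0.85

@dataclass
class Study:
    study_id: str
    ordered_priority: int  # priority assigned at order time (lower = more urgent)
    pathology_scores: dict = field(default_factory=dict)  # pathology -> model probability

def effective_priority(study: Study) -> tuple:
    """Sort key: escalate any study whose top pathology score exceeds the
    critical threshold, regardless of its originally ordered priority."""
    top_score = max(study.pathology_scores.values(), default=0.0)
    escalated = top_score >= CRITICAL_THRESHOLD
    # Escalated studies sort first; within each group, higher scores first,
    # then fall back to the ordered priority.
    return (not escalated, -top_score, study.ordered_priority)

def prioritize(worklist: list[Study]) -> list[Study]:
    return sorted(worklist, key=effective_priority)

if __name__ == "__main__":
    worklist = [
        Study("A", ordered_priority=3, pathology_scores={"ich": 0.12}),
        Study("B", ordered_priority=5, pathology_scores={"ich": 0.93}),  # ordered non-emergent, critical score
        Study("C", ordered_priority=1, pathology_scores={"pe": 0.40}),
    ]
    for s in prioritize(worklist):
        print(s.study_id, s.pathology_scores)
```

In this toy example, study "B" was ordered as low priority but moves to the top of the list because its model score exceeds the threshold, which is the safety-net behavior described above.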
The true potential of AI will only be achieved by engaging both radiologists and technologists every step of the way.
Well-annotated data is the single most important factor in AI success. Before my tech team members can develop and refine algorithms and processes, they seek input from our radiologists. Radiologists pair with our data scientists to define use cases, annotate images, explain diseases and conditions, and provide guidance. Together we ascertain how software might assist diagnoses, and prioritize where AI assistance offers the greatest potential to improve patient outcomes.
Clinical expertise is crucial. Radiologist involvement throughout the development and validation processes helps ensure our solutions have high clinical value.
One of our core tenets is to ensure that AI models work with our extremely heterogeneous data set. We call this “validation in the wild.” With more than 2,000 facilities across the United States, our set of DICOM images, tags and reports reflects an extremely diverse patient population presenting an enormous variety of factor combinations.
Of course, this is the data that our 500 radiologists work with every day. So first, we have to understand how they process this mountain of information into diagnostic insights, providing clinicians with what they need to create the right care plan for each patient. Then we work with the radiologists to develop AI models designed to help improve process efficiency or accuracy.
Finally, validation requires that each AI model be put through “the funnel” – a series of increasingly rigorous testing stages. This includes using natural language processing (NLP) to “measure” the model for specificity and sensitivity. We take these measurements in the lab on large data sets, and can also test how a model performs operationally in real time with hundreds of thousands of studies, employing NLP to “read” the reports for comparison against the model.
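As a rough illustration of that comparison step, the sketch below scores model outputs against labels extracted from signed radiologist reports, treated here as the reference standard. The 0.5 cutoff and the data are made up for the example; they are not vRad's actual validation parameters.

```python
def sensitivity_specificity(model_probs, report_labels, threshold=0.5):
    """model_probs: per-study probabilities from the AI model.
    report_labels: per-study booleans derived by NLP from the final reports."""
    tp = fp = tn = fn = 0
    for prob, positive in zip(model_probs, report_labels):
        predicted = prob >= threshold
        if predicted and positive:
            tp += 1
        elif predicted and not positive:
            fp += 1
        elif not predicted and positive:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Illustrative data only.
probs  = [0.91, 0.08, 0.67, 0.30, 0.85]
labels = [True, False, True, False, False]
print(sensitivity_specificity(probs, labels))  # -> (1.0, 0.666...)
```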
Radiological expertise is essential in every stage of design, development and validation. Radiologists provide the vision to carefully measure each model against the intended use case, while taking into consideration potential clinical variations.
I recently shared some additional thoughts on imaging AI in a Radiology Business article: “The new era of AI in medical imaging, and what it means for patients.”
For more on our program, visit our website.