vRad's massive, diverse dataset is an incomparable real-world testing ground for validating radiological AI models. The following is an excerpt of Brian Baker's comments from a recent interview in The Imaging Wire.
Partnerships critical to AI modeling and validation
A quick history of the vRad AI Incubator is important. vRad has been working with AI partners since 2015 in various forms with the primary goal of improving patient care. Qure.ai was one of the earlier partners in that process. Before the incubator was officially launched in 2018, Qure.ai was already collaborating with vRad on advanced solutions.
One important thing we bring to these AI partnerships is our massive and diverse data. vRad Radiology Solutions has ~2,100 sending facilities in all 50 states. We have radiologists all across the country reading over 7.2 million studies on the vRad Imaging Platform. We have an enormous, heterogeneous data set. The data are representative not only of a very diverse patient population, but also of a very diverse set of modality models, configurations, and protocols.
My primary focus for AI at vRad is first and foremost patient care – helping patients is our number one goal. But we also want to foster a community of AI partners and use models from those partners in the real world. A big part of that is building and validating models.
Qure.ai came to us with models already built on different data sets. They didn't need our data set to perform additional model training, but they wanted to do real-world validation to ensure their models and solutions were generalizing well in the field against an extremely large and diverse cohort of patients.
That is where the relationship blossomed. Our partnership first focused on the complex question of how we see different use cases from a clinical standpoint. We very much align on both use cases and pathologies, and this alignment is a critical step for everyone – AI vendors and AI users in radiology alike. The clinical nuances of using a model in production are incredibly intricate, and Qure.ai and vRad's convergence in this area is a large part of our success.
Creating solutions that work across digital platforms
I believe only half the problem is proving your sensitivity and specificity with a large, diverse patient cohort. That is obviously extremely important for clinical and ethical reasons, but the other part of the problem is ensuring that a solution or model works on all the variations of Digital Imaging and Communications in Medicine (DICOM) data in the industry. At vRad, we see everything in DICOM that you can imagine and some you would not believe. That might be anything from slightly malformed DICOM, to data stuffed into non-standard fields where it shouldn't be, to secondary captures or other images inside of the study, down to all the protocols involved in imaging (how the scan is actually acquired). With our scale and diversity of data, a model that can operate through a single night without erroring and crashing is an engineering feat on its own.
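To make the DICOM-variability problem concrete, here is a minimal sketch of the kind of defensive metadata screening a production pipeline might perform before handing a study to a model. This is an illustrative assumption, not vRad's or Qure.ai's actual implementation: the `screen_instance` function, the triage rules, and the sample study are all hypothetical, though the SOP Class UIDs and the 64-character limit for DICOM "Long String" fields come from the DICOM standard.

```python
# Hypothetical pre-inference screening of DICOM instance metadata.
# Each instance is represented as a plain dict of tag-name -> value,
# as a pydicom-style dataset might be summarized.

# Secondary Capture Image Storage SOP Class UID (screenshots, dose
# reports, and other non-primary images often embedded in studies).
SECONDARY_CAPTURE_SOP = "1.2.840.10008.5.1.4.1.1.7"

def screen_instance(meta: dict) -> list[str]:
    """Return the reasons (if any) this instance should be excluded or flagged."""
    issues = []
    # Secondary captures are not original acquisitions and can crash or
    # mislead a model expecting primary pixel data.
    if meta.get("SOPClassUID") == SECONDARY_CAPTURE_SOP:
        issues.append("secondary capture, not an original acquisition")
    # Malformed DICOM: required identifying tags missing or empty.
    for tag in ("Modality", "StudyInstanceUID"):
        if not meta.get(tag):
            issues.append(f"missing required tag: {tag}")
    # Non-conformant data in standard fields, e.g. free text overflowing
    # SeriesDescription (a Long String, capped at 64 characters).
    if len(meta.get("SeriesDescription", "")) > 64:
        issues.append("over-long SeriesDescription (non-conformant)")
    return issues

# A toy two-instance study: one CT image and one secondary capture.
study = [
    {"SOPClassUID": "1.2.840.10008.5.1.4.1.1.2",  # CT Image Storage
     "Modality": "CT", "StudyInstanceUID": "1.2.3",
     "SeriesDescription": "HEAD W/O CONTRAST"},
    {"SOPClassUID": SECONDARY_CAPTURE_SOP,
     "Modality": "OT", "StudyInstanceUID": "1.2.3"},
]

# Only clean instances reach the model; the rest are logged for review.
usable = [m for m in study if not screen_instance(m)]
```

In practice a screen like this would run per-instance at ingest, so that one malformed or unexpected object is quarantined rather than taking down an overnight inference run.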