
Speaking at a JP Morgan conference in January 2024, NVIDIA CEO Jensen Huang predicted that “this year, every industry will become a technology industry”.
During the fireside chat, which preceded NVIDIA’s launch of a suite of 25 generative artificial intelligence (GenAI)-based microservices designed for imaging and other protocols, made available through the company’s cloud-based tech stack, Huang predicted that medical instruments were “never going to be the same again”.
“Ultrasound systems, computed tomography (CT) scan systems, all kinds of instruments: they’re always going to be a device plus a whole bunch of AIs,” said Huang.
This year, NVIDIA and other big tech players have moved to make this a reality, forging partnerships with academic institutions to test and develop GenAI foundation models for radiology.
Foundation models are a form of GenAI trained on vast unstructured datasets. Using a technique called transfer learning, these models can be adapted to specific fields, presenting a straightforward means for healthcare systems to adopt GenAI.
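To make the concept concrete, here is a minimal transfer-learning sketch in Python using PyTorch and torchvision; the class count, images and labels are hypothetical stand-ins, not any vendor’s actual pipeline. A backbone pretrained on general images is frozen, and only a small new classification head is trained on radiology data.

```python
# Minimal transfer-learning sketch (illustrative only; the task, images
# and labels below are invented stand-ins, not a real clinical pipeline).
import torch
import torch.nn as nn
from torchvision import models

# Start from a backbone pretrained on a large general-purpose dataset.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Freeze the pretrained weights so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head for a radiology-specific task,
# e.g. three hypothetical findings classes in chest X-rays.
model.fc = nn.Linear(model.fc.in_features, 3)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One training step on a (hypothetical) batch of radiology images.
images = torch.randn(8, 3, 224, 224)   # stand-in for real scans
labels = torch.randint(0, 3, (8,))     # stand-in for radiologist labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

Because only the head’s weights are updated, a healthcare system can adapt a large pretrained model with a comparatively small labelled dataset, which is the appeal transfer learning holds for adopters.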
The use-case potential of GenAI foundation models in radiology is vast, spanning the ability to enhance radiological image quality, assist in diagnosis, and impact the field more broadly by driving efficiencies.

In addition, radiology represents an especially compelling field for the application of GenAI due to the longstanding reality that radiologists are in short supply on a global scale.
A 2023 report by the Royal College of Radiologists (RCR) found that the UK currently has a shortfall of clinical radiologists that is forecast to rise to 40% by 2028 unless meaningful action is taken, with seven in ten clinical directors stating there were not enough radiologists to deliver safe and effective levels of patient care. Another report by the Association of American Medical Colleges (AAMC) forecasts that the radiologist shortfall in the US could reach almost 42,000 by 2036.
Driving efficiencies in radiology
The obvious application of GenAI in radiology lies in improving image quality and aiding physicians in identifying patterns in radiologic images they may not have noticed. While plenty of startups are developing tools for this specific area, whether to automatically straighten or enhance the fidelity of CT and other imaging scans, Rad AI co-founder Jeff Chang sees GenAI’s foremost application in radiology as taking the load off radiologists by assisting with report summarisation. As a trained radiologist, Chang calls this one of the most time-consuming tasks undertaken by radiologists.
“When you think about what radiologists actually do, we spend most of our time dictating reports,” says Chang.
Looking at radiology images is the straightforward aspect of a radiologist’s role, Chang says, and there are certain patterns radiologists readily pick out and take note of for their evaluations.
“But then, we spend most of our time dictating the report, and we’re dictating 100-200 reports per day, and so the way to save radiologists time is to reduce the amount of time they spend dictating.
“Our first product, which we released in 2019, automatically generates the last third of the radiology report, which includes the impression section, conclusions, summarisation, and follow-up recommendations.
“That way, as soon as a radiologist dictates what they see on the images, which forms the first part of the report, our product automatically generates the last part.”
According to Chang, Rad AI’s product saves physicians a median of around one hour per nine-hour shift.
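Rad AI has not published its implementation, but the underlying pattern, generating an impression-style summary from the dictated findings, can be sketched with a generic open-source summarisation model; the model choice and findings text below are illustrative assumptions.

```python
# Illustrative sketch of findings-to-impression generation; Rad AI's
# actual system is proprietary, and this uses a generic open model.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# Hypothetical dictated findings section of a radiology report.
findings = (
    "Findings: The heart size is within normal limits. There is a "
    "2 cm rounded opacity in the right lower lobe. No pleural "
    "effusion. No pneumothorax."
)

# Generate a short impression-style summary from the findings text.
impression = summarizer(findings, max_length=40, min_length=10)
print(impression[0]["summary_text"])
```

A production system would add the clinical pre- and post-processing Chang describes later in this piece; the sketch only shows the core summarisation step.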
GenAI in radiology also presents an opportunity to personalise patient care, with the ability to synthesise a patient’s healthcare history by connecting datasets from different kinds of radiology.
“If, for example, a patient in a healthcare system is getting ultrasounds plus CT scans, plus MRI, all of those can be stitched together, to really see the patient journey and include that patient’s clinical information,” says Emily Lewis, AI and innovation lead at biopharmaceutical company UCB.
“This is an important aspect in being able to personalise that patient’s care and put a treatment plan together.”
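In data terms, the “stitching” Lewis describes starts as a join-and-sort across modality records. A toy sketch using pandas, with invented study metadata, shows the idea:

```python
# Toy sketch of assembling a cross-modality patient imaging timeline;
# all study records here are invented for illustration.
import pandas as pd

ultrasound = pd.DataFrame({
    "patient_id": ["P001"], "modality": ["US"],
    "study_date": ["2024-01-10"], "report": ["Liver lesion noted."],
})
ct = pd.DataFrame({
    "patient_id": ["P001"], "modality": ["CT"],
    "study_date": ["2024-02-02"], "report": ["Lesion measures 2.1 cm."],
})
mri = pd.DataFrame({
    "patient_id": ["P001"], "modality": ["MRI"],
    "study_date": ["2024-03-15"], "report": ["Lesion stable in size."],
})

# Concatenate all modalities, then order each patient's studies by date
# to reconstruct the imaging journey a model (or clinician) can review.
timeline = (
    pd.concat([ultrasound, ct, mri])
      .sort_values(["patient_id", "study_date"])
      .reset_index(drop=True)
)
print(timeline)
```

Real systems would do this over DICOM metadata and electronic health records rather than hand-built tables, but the principle of keying everything to the patient and ordering it in time is the same.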
GenAI challenges
GenAI is not currently a foolproof tool, since foundation models are often trained on broad datasets and there is no telling whether all of the information they contain is accurate.
Incorrect data or misleading conclusions presented by a GenAI model are known as hallucinations. A recent study by AI startup Mendel and the University of Massachusetts Amherst (UMass Amherst) into detecting hallucinations in AI-generated medical summaries concluded that they remain a “grave” concern for the healthcare industry.
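The Mendel and UMass Amherst framework itself is not reproduced here, but one common, generic approach to flagging hallucinations is to check whether each sentence of a generated summary is entailed by the source report, using a natural language inference (NLI) model. A sketch with an open MNLI model, on invented report text:

```python
# Sketch of entailment-based hallucination flagging; this is a generic
# technique, not the Mendel/UMass Amherst system described in the study.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

# Hypothetical source report and two generated summary sentences.
source = "CT shows a 2 cm nodule in the right lower lobe. No effusion."
summary_sentences = [
    "A 2 cm right lower lobe nodule is present.",  # supported
    "The patient has a large pleural effusion.",   # hallucinated
]

for sentence in summary_sentences:
    # Score the (report, summary sentence) pair as an NLI premise/hypothesis.
    inputs = tokenizer(source, sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    label = model.config.id2label[int(logits.argmax())]
    # Anything not entailed by the source gets routed to human review.
    flag = "OK" if label == "ENTAILMENT" else "REVIEW"
    print(f"{flag}: {sentence}")
```

Routing non-entailed sentences to a reviewer rather than silently discarding them is what keeps the human in the loop, the point Lewis makes next.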
“We work with over half a billion radiology reports from all across the US, so being able to have that clinical context, having the pre-processing and post-processing capabilities to ensure clinical accuracy, you have to be using a product that does all of that for it to be successful in clinical use,” notes Chang.
Lewis’s view is that GenAI systems used in radiology will need a human in the loop to monitor their performance for the foreseeable future.
“Up until now, AI models have been predicting the next word. It’s just a stochastic parrot where it’s reiterating what it’s been trained on. And because it’s been trained on the entire internet, it’s not necessarily trained on factual information.”
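The “next word” behaviour Lewis describes can be seen directly in any open causal language model. The sketch below uses GPT-2 purely as a small stand-in, printing the tokens the model considers most probable next:

```python
# Illustration of next-token prediction with a small open model (GPT-2
# as a stand-in); this shows the mechanism Lewis describes, nothing more.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The chest X-ray shows a"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token

# Show the five tokens the model considers most probable to come next.
probs = torch.softmax(logits, dim=-1)
for prob, token_id in zip(*probs.topk(5)):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob:.3f}")
```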
This may soon change, however, with the recent release of OpenAI’s o1 models. According to the company, the models are designed to spend more time thinking before they respond, meaning they can reason through complex tasks and solve harder problems than previous models in science, coding, and mathematics.
鈥淭hey’re trying to anchor in more trusted data from respected sources and have that reasoning component,鈥 says Lewis.
鈥淭here should be national standards for ensuring the safety and effectiveness of models, as well as local validation.
鈥淎 lot of that is going to fall on clinical health systems to understand, to have metrics on their models, and make sure they’re monitoring them over the long term.鈥
According to Lewis, this is an especially heavy burden for local-level healthcare centres and the main reason why the application of AI in the US is predominantly seen at large medical centres and academic institutions.
“These gold-standard places have the funding, resourcing, and a lot of the expertise. It is yet to be seen how this is going to translate to the community health centres of the world.”
Pre-trained with radiologic specificity
In essence, foundation models are only as good as the data that adopters train them on. Ethical and privacy concerns, the cost of data acquisition, and potentially a lack of AI regulation on the part of the US Food and Drug Administration (FDA) all play a role in this challenge.
For radiology, the remedy to the issue of potentially unreliable or simply limited training data may lie in pre-trained, radiology-specific foundation models such as Australia-based Harrison.ai’s dialogue-based large language model (LLM), Harrison.rad.1.
This emergent alternative differs from the provision of general foundation models that adopters must then train on their own potentially limited datasets or, worse, on unvalidated radiological images pulled from the internet.
“Harrison.rad.1 has been trained on real-world, diverse and proprietary clinical data, comprising millions of radiological images, studies and reports,” explains Harrison.ai CEO Dr Aengus Tran.
“The dataset has further been annotated at scale by a large team of medical specialists to provide Harrison.rad.1 with clinically accurate training information.”
Tran demonstrates this by highlighting the performance of the company’s AI in radiology examinations designed for human radiologists, claiming that Harrison.rad.1 has outperformed other foundation models in several benchmarks.
“Specifically, it surpasses other foundational models on the challenging Fellowship of the Royal College of Radiologists (FRCR) 2B Rapids examination, an exam that only 40%-59% of human radiologists pass on their first attempt.
“When reattempted within a year of passing, radiologists score an average of 50.88 out of 60. Harrison.rad.1 performed on par with accredited and experienced radiologists at 51.4 out of 60, while other competing models such as OpenAI’s GPT-4o, Google’s Gemini-1.5 Pro and Microsoft’s LLaVA-Med scored below 30 on average.”
The potential for GenAI to play an appreciable role in improving existing processes in radiology and alleviating pressures on radiologists in healthcare systems is clear. Barriers remain around the costs of adopting and effectively deploying GenAI, and around acquiring sufficient data to make adoption worthwhile, although radiology-specific companies offering pre-trained foundation models are changing this. Once regulation around AI in the US catches up with the rate of innovation, and as adoption of the technology in radiology continues, GenAI’s longevity and its true role in transforming the radiology space for the better should become clear.