FDA proposes guidance on AI-enabled medical devices

AI devices to improve patient healthcare
Sheila Zabeu

April 05, 2023

Technological advances applied to healthcare, in particular Artificial Intelligence (AI) and Machine Learning (ML), have prompted the US Food and Drug Administration (FDA), the agency responsible for protecting public health by ensuring the safety, efficacy and surveillance of human and veterinary medicines, to issue guidance so that medical devices powered by these digital technologies can be modified, updated and improved safely, effectively and quickly. The agency has already authorized more than 500 medical devices with AI/ML technologies, and many more are in development.

The proposed approach aims, according to the FDA, to get safe and effective devices into the hands of healthcare professionals and users more quickly, accelerating the innovation that allows more personalised treatments to be offered. Medical devices with AI/ML technologies could be modified more widely and quickly to learn and adapt to patients’ conditions. Diagnostic equipment, for example, could be built to adapt to the specific data and needs of healthcare facilities, and therapeutic equipment could adapt to the specific characteristics and needs of each patient.

To ensure the safety and effectiveness of medical devices with AI/ML technologies throughout their total product life cycle (TPLC), each device will have a Predetermined Change Control Plan that is reviewed and approved by the FDA. The plan covers both manually implemented changes and changes implemented automatically by software. It will include a detailed description of the specific, planned changes to the device; a description of the methodology used to develop, validate and implement those changes, including how the necessary information about them will be clearly communicated to users; and an assessment of the benefits and risks of the planned changes.

The FDA’s approach also aspires to ensure that important performance issues, including those related to race, ethnicity, disease severity, gender, age and geography, are addressed in the development, validation, implementation and monitoring of devices with AI/ML technologies. In particular, the guidance places strong emphasis on clearly communicating this information to users of medical devices.

According to the FDA, this approach is grounded in the experience the agency has gained in regulating devices with AI/ML technologies recently and in exploring new regulatory frameworks for medical devices in the digital health arena.

The guidance is still at a preliminary stage: the FDA will consider stakeholder comments until July 3 before issuing a final version.

AI in health

The recent buzz around ChatGPT, an example of generative AI that uses deep learning to create synthetic data that looks real, has also spilled over into healthcare. The need to address the ethical and quality issues raised by these tools is undeniable, as is the enormous potential of the technology.

According to Ittai Dayan, CEO and co-founder of Rhino Health, developer of a platform aimed at developers and researchers looking to analyse data and create AI models in healthcare, there are many interesting applications of generative AI in the field of medicine and healthcare treatments, among them:

  • Genomics: Generation of synthetic DNA sequences for gene editing experiments and testing the effects of different genetic variations.

  • Drug design: Generating new molecular structures optimized for certain outcomes that obey the laws of chemistry and physics, testing them in simulations to find the most suitable ones for some kind of treatment.

  • Clinical trial design: Synthetic data generation for clinical trials that can accelerate previously expensive processes by creating a wealth of data for simulation and hypothesis testing, covering diverse patient populations, treatment outcomes and adverse events.

  • Medical imaging: Generating synthetic medical images to augment existing datasets, for example by creating new X-ray or MRI images with specific features that are missing from existing sets. Generative AI can also be used to generate images from text inputs.

  • Personalized medicine: Generation of personalized treatment plans, such as diets or specific exercises, based on the patient’s characteristics.

  • Medical education: Generative AI can be used as a training tool, presenting medical students with patient images in a game-like format to test diagnostic or treatment-planning skills.

For Dayan, these applications are unlikely to reach their full potential without the use of Federated Learning (FL) techniques to preserve patient privacy. Regulatory frameworks in the US (HIPAA) and Europe (GDPR) are justifiably conservative about what kind of data can be shared with AI model developers. In general, such data is only available in isolated fragments, disaggregating the patient journey of diagnosis, treatment and outcome. Presented in full, the data would be richer for AI/ML developers, but would also carry a higher risk of patient re-identification.

The FL technique is based on distributed Machine Learning: the data used by the models stays on its owners’ premises (e.g. hospital servers) and is never shared. Only the model parameters are communicated for aggregation and updating.
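As a rough illustration of the idea (a hypothetical sketch, not Rhino Health’s actual platform), the example below runs federated averaging over two simulated “hospital” datasets: each site trains a simple logistic-regression model on its own local data, and only the resulting parameter vectors travel to the aggregator, where they are pooled into a weighted average.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training step: logistic-regression gradient
    descent on data that never leaves the client's premises."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))
        w -= lr * (X.T @ (preds - y)) / len(y)
    return w

def federated_round(global_w, clients):
    """One round of federated averaging: each client trains locally;
    only parameter vectors are sent back, then averaged by dataset size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=sizes)

# Two simulated "hospitals" holding private datasets drawn from the
# same underlying relationship (true_w is only used to generate labels).
rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(100, 2))
    y = (X @ true_w + rng.normal(scale=0.1, size=100) > 0).astype(float)
    clients.append((X, y))

# The global model improves over rounds without any raw data exchange.
w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
```

The key property is visible in `federated_round`: the aggregator sees only the two-element weight vectors, never the patient-level rows in `X` and `y`, which is what lets hospitals collaborate without pooling raw records.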

Dayan and Mona Flores, global head of medical AI at Nvidia, conducted a study that demonstrated the feasibility and benefits of federated learning in the healthcare domain. A model was developed using local data and data in a federated network to predict outcomes for patients who attended the emergency department with respiratory complaints. The research demonstrated that Federated Learning can enable hospitals to collaborate and provide access to data without compromising patient privacy and safety.