FDA proposes guidance on AI-enabled medical devices
April 05, 2023
Technological advances applied to healthcare, in particular Artificial Intelligence (AI) and Machine Learning (ML), have prompted the US Food and Drug Administration (FDA), the agency responsible for protecting public health by ensuring the safety, efficacy and surveillance of human and veterinary medicines and medical devices, to issue guidance to ensure that medical devices powered by these digital technologies are modified, updated and improved safely, effectively and quickly. The agency has already authorized more than 500 medical devices that incorporate AI/ML technologies, and many more are in development.
The proposed approach aims, according to the FDA, to get safe and effective devices into the hands of healthcare professionals and users more quickly, increasing the pace of innovation and allowing more personalised treatments to be offered. Medical devices with AI/ML technologies could be modified more broadly and rapidly to learn and adapt to patients’ conditions. For example, diagnostic equipment could be built to adapt to the specific data and needs of a healthcare facility, and therapeutic equipment could do the same according to the specific characteristics and needs of each patient.
To ensure the safety and effectiveness of medical devices with AI/ML technologies throughout their total product life cycle (TPLC), a Predetermined Change Control Plan will be submitted for each device and reviewed and approved by the FDA. The plan covers both manually implemented and automatically implemented software changes. It will include a detailed description of the specific, planned modifications to the device; a description of the methodology used to develop, validate and implement those changes, including how the relevant information will be clearly communicated to users; and an assessment of the benefits and risks of the planned changes.
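As a rough, hypothetical illustration (not an FDA-prescribed format), the three elements described above might be captured as structured data along the following lines; every field name and example value here is invented for illustration only:

```python
# Hypothetical outline of a Predetermined Change Control Plan (PCCP).
# Field names and values are illustrative, not an FDA template.
change_control_plan = {
    "planned_modifications": [
        {
            "id": "MOD-001",
            "description": "Retrain triage model quarterly on new site data",
            "implementation": "automatic",  # or "manual"
        },
    ],
    "modification_protocol": {
        "development": "retraining pipeline with a locked model architecture",
        "validation": "held-out test set with pre-specified performance thresholds",
        "user_communication": "release notes and updated labeling for each change",
    },
    "impact_assessment": {
        "benefits": "adaptation to the local patient population",
        "risks": "performance drift across demographic subgroups",
        "mitigations": "subgroup performance monitoring after every update",
    },
}
```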
FDA’s approach also aspires to ensure that important performance issues, including those related to race, ethnicity, disease severity, gender, age and geographic considerations, are addressed in the development, validation, implementation and monitoring of devices with AI/ML technologies. In particular, the guidance places emphasis on clearly communicating this information to device users.
According to the FDA, this approach is grounded in the experience the agency has gained in regulating devices with AI/ML technologies recently and in exploring new regulatory frameworks for medical devices in the digital health arena.
The guidance is still at a preliminary stage: the FDA will accept stakeholder comments until July 3 before issuing a final version.
The recent buzz around ChatGPT, an example of generative AI that uses deep learning to create synthetic data that looks real, has also spilled over into healthcare. The need to address the ethical and quality issues raised by these tools is undeniable, as is the enormous potential of the technology.
According to Ittai Dayan, CEO and co-founder of Rhino Health, developer of a platform aimed at developers and researchers looking to analyse data and create AI models in healthcare, there are many promising applications of generative AI in medicine and healthcare treatments.
For Dayan, these applications are unlikely to reach their full potential without the use of Federated Learning (FL) techniques to preserve patient privacy. Regulatory frameworks in the US (HIPAA) and Europe (GDPR) are justifiably conservative about what kind of data can be shared with AI model developers. In practice, such data is often only available in isolated silos, fragmenting the patient journey across diagnosis, treatment and outcome. Presented in full, the data would be far richer for AI/ML developers, but it would also carry a higher risk of patient re-identification.
The FL technique is based on distributed Machine Learning: the data used to train the models stays on its owners’ premises (e.g. hospital servers) and is never shared. Only the model parameters are communicated for aggregation and updating.
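The sketch below illustrates the federated averaging idea at the heart of FL, assuming three hypothetical hospital datasets and a simple logistic-regression model; it is a simplified illustration of the general technique, not the Rhino Health or Nvidia implementation:

```python
# Minimal federated averaging (FedAvg) sketch: each site trains on its own
# data and only model parameters leave the premises. Datasets are synthetic
# and hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Run a few epochs of logistic-regression gradient descent on one
    hospital's local data; only the updated weights are returned."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)           # logistic-loss gradient
        w -= lr * grad
    return w

# Hypothetical local datasets held by three hospitals (never pooled centrally).
hospitals = []
true_w = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
for _ in range(3):
    X = rng.normal(size=(200, 5))
    y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)
    hospitals.append((X, y))

global_w = np.zeros(5)
for round_ in range(10):                            # federated training rounds
    # Each site computes an update locally; raw patient data stays on-site.
    local_ws = [local_update(global_w, X, y) for X, y in hospitals]
    sizes = np.array([len(y) for _, y in hospitals], dtype=float)
    # The coordinator aggregates only the parameters, weighted by dataset size.
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("aggregated model weights:", np.round(global_w, 2))
```

In this pattern, only the weight vectors cross institutional boundaries, which is what allows patient-level records to remain on each hospital’s own servers.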
Dayan and Mona Flores, global head of medical AI at Nvidia, conducted a study that demonstrated the feasibility and benefits of federated learning in the healthcare domain. A model was developed using local data and data in a federated network to predict outcomes for patients who attended the emergency department with respiratory complaints. The research demonstrated that Federated Learning can enable hospitals to collaborate and provide access to data without compromising patient privacy and safety.