AI in Medical Devices: Regulatory Requirements

AI regulations

Medical devices are a vital part of the healthcare system. They help provide better care, faster recovery, and lower risks, and they must be regulated to make sure they are safe to use. In the first article of this series, we introduced the use of AI in medical devices and why it is a significant stepping stone toward the future of AI in healthcare. The second article laid out the process and importance of establishing AI systems within medical devices that comply with regulations, reduce risk and harm, and account for the human impact of the model. In this final blog of the series, we outline the regulations regarding AI and medical devices in more depth.

Regulating medical devices that incorporate AI

AI is becoming an integral part of medical devices and has enabled the development of new technologies such as diagnostic systems and surgical robots. But this influx of new technologies also means keeping up with the new AI regulations and policies that need to be put in place. A strong regulatory framework that accounts for the special characteristics of AI is essential to ensuring the safety and security of a medical device that incorporates AI.

The U.S. Food and Drug Administration (FDA) is one of the main regulatory bodies dealing with AI in medical devices, and it has released draft guidance on how it will regulate AI-enabled medical devices and software to ensure their safety and efficacy. The document lays out proposed requirements for the design, development, evaluation, and labeling of AI-enabled medical devices, including the following:

- Clinical trials are required to test the device's safety and effectiveness before it can be approved for sale. A clinical trial plan must be designed around the inherent risks of the device, taking into account the duration of contact with patients, the device's invasiveness, the condition it treats or diagnoses, and so on. During the trial, the product's performance is verified and any potential side effects are identified; serious ones must be recorded immediately and reported to the competent authorities. The person responsible for testing the device has access to all associated technical and clinical data and, once the research is complete, drafts and signs a written report including a critical evaluation of the data obtained. The manufacturer's responsibilities do not end when the clinical trial is completed, however: the Declaration of Conformity and the report with the research conclusions must remain available to the competent authorities for at least five years.

- The device must be designed to minimize risks from software malfunctions, human errors, and incorrect input data. The FDA has provided guidance on specific risk analysis approaches and procedures such as fault tree analysis (FTA) and failure mode and effects analysis (FMEA); a small scoring sketch follows this list. By thinking about risk at the earliest possible stage of device or process development, and reviewing those risks in an organized manner throughout, medical device manufacturers can manage and reduce risk effectively. Manufacturers are therefore advised to prepare premarket submission documentation that includes how potential risks related to the design and development of their devices are minimized. Devices that do not follow this guidance may not be approved or, if approved, may go on to cause serious incidents and harm.

- The manufacturer must provide users with information about how to maintain and troubleshoot the device, as well as how to get updates when new features become available.

- Manufacturers must report any malfunctions or problems with their devices to the FDA within 30 days. These reports, along with data from other sources, can provide critical information that helps improve patient safety in general. However, this is a passive surveillance system, and it has limitations: under-reporting of events, inaccuracies in reports, lack of verification that the device caused the reported event, and lack of information about how frequently the device is used.
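To make the FMEA step mentioned above more concrete, here is a minimal sketch of how failure modes for an AI-enabled device could be scored with a risk priority number (RPN = severity × occurrence × detection). The failure modes and scores below are purely illustrative assumptions, not taken from any actual FDA submission.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    """One row of an illustrative FMEA worksheet for an AI-enabled device."""
    description: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (certain to be caught) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Risk Priority Number, the classic FMEA ranking metric
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("Model misreads benign lesion as malignant", 7, 4, 5),
    FailureMode("Corrupt image input silently accepted", 8, 3, 7),
    FailureMode("UI shows stale prediction after rescan", 6, 2, 4),
]

# Address the highest-risk failure modes first
for mode in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {mode.rpn:3d}  {mode.description}")
```

A real FMEA worksheet would also track causes, existing controls, and mitigation owners; the point is that scoring risks early makes the organized, ongoing review the FDA guidance calls for much easier.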

Non-compliance with FDA regulations can not only delay production of a medical device; it may also force a recall of the product, causing business losses and far-reaching legal consequences. Being proactive in laying a proper foundation of regulatory compliance saves money and helps avoid legal issues down the road. Quality, risk mitigation, and regulatory compliance are always better built into the product or service throughout development than ‘inspected in’ later.

A specific case illustrates why following such proposed regulations matters. A 2020 study analyzing the data used to train image-based diagnostic AI systems found that approximately 70% of the included studies used data from just three states, and that 34 states were not represented at all. Algorithms developed without considering geographic diversity, including variables such as disease prevalence and socioeconomic differences, may not perform as well as they should across a varied array of real-world settings.
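A quick audit of training-data provenance can catch exactly this kind of concentration before a submission. The sketch below is a minimal example assuming hypothetical per-study metadata with a `state` column; the column names and the 70% threshold (echoing the study's finding) are illustrative choices, not a regulatory rule.

```python
import pandas as pd

# Hypothetical per-study metadata for a training set; the column
# names here are assumptions for illustration only.
studies = pd.DataFrame({
    "study_id": range(1, 11),
    "state": ["CA", "CA", "CA", "CA", "NY", "NY", "NY", "MA", "MA", "TX"],
})

# Share of training data contributed by each state
coverage = studies["state"].value_counts(normalize=True)
print(coverage)

# How concentrated is the dataset in its top three states?
top3_share = coverage.head(3).sum()
if top3_share > 0.7:
    print(f"Warning: top 3 states supply {top3_share:.0%} of the studies")
```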

However, critics say that there are still many unanswered questions about how to regulate AI-enabled software that provides clinical advice or diagnosis to physicians or other healthcare professionals. 

One expert argues: “There’s increasing concern that AI researchers are building models left and right and nothing's getting deployed. One reason for that is modelers’ failure to perform a usefulness analysis showing how the intervention triggered by a model will cost-effectively fit into hospital operations while also doing less harm than good.” The tools for such an analysis exist; why their uptake has been so slow is one of the questions being raised.

Another question, common among development companies and central to the regulatory landscape around medical devices and AI, is: does the AI within the medical device actually qualify as AI? This question has become increasingly relevant in the legal field as regulation catches up with technology and provisions are made specifically for AI. It revolves around three main sub-questions that shape the nature of the current regulations.

Who owns the decision?

If the medical device using AI is instrumental in creating a decision, who qualifies as the inventor/owner of that outcome? For example, a computer could identify the genetic sequence for a particular disease, then use its computational power to propose a diagnosis.

The black-box nature of AI 

As AI sees more use every day, it becomes increasingly important for regulators to understand how the technology could affect human lives. Understanding the black-box nature of AI is key to ensuring its safe and effective deployment.

Data privacy 

AI-based systems need significant amounts of data to function, and this data comes from sensitive information collected from patients' and hospitals' records. The central questions are how to best manage the consent given for using this information and how to give patients back control over their personal data.

Such questions and concerns are also being examined by both the EU and the UK as they develop AI regulation strategies for medical devices.

For instance, the UK's Medicines and Healthcare products Regulatory Agency (MHRA) has proposed the following regulatory guidelines:

  1. Utilize existing and broadly accepted frameworks to ensure AI as a medical device placed on the market provides robust assurance that it is safe and effective, with a special emphasis on ensuring that it is fit for purpose for all populations in which it is intended to be used;
  2. Develop frameworks regarding interpretability of the AI used in medical devices to ensure that the models are sufficiently transparent to be robust and testable or are otherwise properly validated;

To meet guidelines 1 and 2, companies need a strategy for continuously auditing the safety and effectiveness of the AI. Medical devices with AI are a special type of software with unique risks that must be considered: the proposed regulation might require that AI medical devices used for diagnostics be monitored for scientific validity, to ensure the output they actually provide correlates with what they would be expected to provide. This corresponds to the first question mentioned above.
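As a minimal sketch of what such monitoring for scientific validity could look like in practice (assuming the device exposes its raw prediction scores), one common approach is to compare the output distribution seen in the field against the one seen at validation time, for example with a population stability index (PSI). The 0.2 threshold used below is a widespread rule of thumb, not a regulatory requirement.

```python
import numpy as np

def population_stability_index(expected, observed, bins=10):
    """Compare the distribution of scores observed in production
    against the distribution seen at validation time."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_frac, _ = np.histogram(expected, bins=edges)
    observed_frac, _ = np.histogram(observed, bins=edges)
    # Normalize to fractions; clip to avoid division by zero and log(0)
    expected_frac = np.clip(expected_frac / expected_frac.sum(), 1e-6, None)
    observed_frac = np.clip(observed_frac / observed_frac.sum(), 1e-6, None)
    return float(np.sum((observed_frac - expected_frac)
                        * np.log(observed_frac / expected_frac)))

rng = np.random.default_rng(0)
validation_scores = rng.beta(2, 5, size=5_000)  # scores at approval time
production_scores = rng.beta(3, 4, size=5_000)  # scores seen in the field

psi = population_stability_index(validation_scores, production_scores)
print(f"PSI = {psi:.3f}")  # PSI > 0.2 is a common trigger for a deeper audit
```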

  3. Articulate problems of fit with medical device regulation for adaptive AI in medical devices, distinguishing between models that are locked, batch trained, or continuously learning on streaming data.

To meet guideline 3, companies need a clear interpretability framework ensuring that AI models are properly validated and sufficiently transparent to be robust, which can be achieved through the concept of explainability.
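One simple, model-agnostic starting point for such an explainability check is permutation importance: measure how much held-out performance drops when each input feature is shuffled. The sketch below uses a public scikit-learn dataset and a generic classifier purely as stand-ins; a real device would run this on its own validated pipeline and clinically meaningful features.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Public dataset as a stand-in for the device's own validated data
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# How much does test accuracy drop when each feature is shuffled?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# The five most influential features: a first, testable answer to
# "what is the model actually relying on?"
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, drop in ranked[:5]:
    print(f"{name:25s} {drop:.3f}")
```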

Additionally, the European Medical Device Regulation (MDR) governs the production and distribution of medical devices in Europe; compliance is mandatory for medical device companies that want to sell their products in the European marketplace.

As a manufacturer of medical devices, companies must ensure that they meet the relevant regulatory requirements before placing their AI products onto the market; for the EU, these are outlined in the Medical Device Regulation (MDR) (EU) 2017/745 and, for the UK, in the UK Medical Devices Regulations (UK MDR) 2002.

According to this article, the different regulations proposed by different countries “must work together to ensure that regulating medical devices with AI and other medical devices does not lead to such a divergence that enforcement becomes unclear…”. To have truly effective regulation of medical devices incorporating AI in place, global industry-specific guidance is needed, along with joint cooperation on establishing standards and processes that will later fall under secondary legislation enforced and applied in daily life.

The risk factor

What makes AI difficult to regulate, not only within medical devices but in healthcare in general, is the number of risks that come with implementing AI systems. The risk factors for AI in medical devices include: 1) potential safety risks when an algorithm is trained on insufficient data; 2) ethical issues such as privacy and confidentiality; 3) lack of transparency around how algorithms work; and 4) bias or inaccuracy in the data fed into the algorithm. Bias is one of the most critical issues associated with AI systems, closely interlinked with the risks around privacy and security, and plenty of regulations attempt to deal with it. Biases can be introduced when AI makes decisions based on its learning process rather than on human input. For example, if an AI system is trained on a database in which malignant tumors far outnumber benign ones, it may misdiagnose benign tumors as malignant because it assigns the malignant class higher confidence. This case clearly shows how risky it is to trust the system with such delicate decisions, which is why current regulations and guidelines mainly propose that AI in medical devices be subjected to robust screening and testing before deployment and production. At the FDA, this takes the form of premarket approval (PMA) applications, through which the FDA ascertains the safety of the medical device and its AI system and screens it for device clearance.
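The tumor example can be made concrete with a small synthetic experiment (no real clinical data): train a classifier on a skewed class mix, then look at per-class metrics rather than overall accuracy. The data, class balance, and model below are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2_000

# Deliberately skewed training set: ~90% malignant (1), ~10% benign (0),
# mimicking the imbalanced database described above
y_train = (rng.random(n) < 0.9).astype(int)
X_train = rng.normal(loc=y_train[:, None], scale=1.5, size=(n, 5))

# Balanced test set, closer to a real-world case mix
y_test = (rng.random(n) < 0.5).astype(int)
X_test = rng.normal(loc=y_test[:, None], scale=1.5, size=(n, 5))

model = LogisticRegression().fit(X_train, y_train)

# Per-class metrics expose what a single accuracy number hides:
# recall on the under-represented benign class suffers, i.e. benign
# tumors tend to be misread as malignant
print(classification_report(y_test, model.predict(X_test),
                            target_names=["benign", "malignant"]))
```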

To do so, companies must have an internal AI governance structure in place, ensuring that the AI system within the medical device operates as intended and complies with the proposed regulations. An example is using AI governance tools such as compliance checklists, with tasks that involve key stakeholders through concrete action points throughout the ML lifecycle. Whether it is for regulatory test checks or for monitoring and auditing the model, the AI governance strategy should follow a responsible AI framework built on principles like accountability, fairness, transparency, safety, and robustness.
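As a toy sketch of what such a checklist could look like as an actual artifact in the ML lifecycle (every task, owner, and stage below is an illustrative assumption):

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    """One action point in an illustrative AI governance checklist."""
    task: str
    owner: str       # accountable stakeholder
    stage: str       # e.g. "data", "training", "validation", "monitoring"
    done: bool = False

checklist = [
    ChecklistItem("Document training-data provenance and consent basis",
                  owner="data steward", stage="data"),
    ChecklistItem("Run subgroup performance audit (bias check)",
                  owner="ML engineer", stage="validation"),
    ChecklistItem("Sign off risk analysis (e.g. FMEA) for software failures",
                  owner="quality manager", stage="validation"),
    ChecklistItem("Enable post-market drift monitoring and alerting",
                  owner="MLOps engineer", stage="monitoring"),
]

# Surface what still blocks a release review
for item in (i for i in checklist if not i.done):
    print(f"[{item.stage:>10}] {item.owner}: {item.task}")
```

Keeping such items as data rather than prose makes audits repeatable across releases and makes it obvious who is accountable for each step.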

Read more about how to establish an AI governance system here.

Conclusion

The regulation of AI within medical devices is still in its infancy, and there are many challenges for industry regulators to overcome before any laws come into force. For now, manufacturers of AI-based medical devices should familiarize themselves with the regulatory guidelines of the markets in which they intend to commercialize, to ensure that their AI-based medical devices work as intended. This also includes monitoring the systems to ensure that they are not causing harm and are avoiding bias in their decision-making.


This article is part of a series covering AI and medical devices; check out the previous ones here.