An Introduction to
NAVIGATING AI AND MEDICAL DEVICE REGULATIONS
Empowering AI excellence with transparency & trust

An Introduction to Navigating AI and Medical Device Regulations

With the rapid adoption of artificial intelligence (AI) in medical devices, navigating the intricate landscape of regulatory compliance has become paramount. At regenold, we have built an experienced team that offers expert guidance to businesses seeking support with compliance with the upcoming EU AI Act and other regulatory requirements for medical device certification. Answers to typical questions regarding the EU AI Act can be found below. If you have any further questions, we will be happy to help you with our expertise.

Questions & Answers:

Can a Medical Device with AI be CE-marked?

CE-marking of Medical Devices that contain AI is possible. Notified Bodies refer to a checklist to ensure compliance with MDR requirements: www.ig-nb.de

Initially, Notified Bodies focused on assessing Medical Devices with so-called “locked” AI, meaning the algorithm is trained before the product is placed on the market and is not trained “live” afterwards. Instead, re-training is performed in a protected background environment, and the new algorithm is re-validated and re-assessed by the Notified Body before it is deployed again. Meanwhile, Notified Bodies also certify “continuous learning” systems, which are trained “live” after being placed on the market. The certification process considers FDA guidance, in particular the “Predetermined Change Control Plan” (PCCP), which describes how continuous learning takes place in a controlled manner: www.fda.gov
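
To illustrate the controlled-update idea behind such a change control plan, here is a minimal, purely hypothetical Python sketch: a re-trained model is only released if it meets acceptance criteria that were defined up front. The names, thresholds and structure are examples for illustration and are not taken from the FDA guidance or any official PCCP format.

```python
# Hypothetical illustration (names and thresholds are examples, not an official
# PCCP format): a re-trained model is only released if it meets acceptance
# criteria that were defined before the update, keeping the change controlled.
from dataclasses import dataclass

@dataclass
class AcceptanceCriteria:
    min_accuracy: float       # pre-defined minimum performance
    max_accuracy_drop: float  # allowed degradation vs. the currently deployed model

def may_release(new_accuracy: float, deployed_accuracy: float,
                criteria: AcceptanceCriteria) -> bool:
    """Return True only if the re-trained model may replace the deployed one."""
    meets_minimum = new_accuracy >= criteria.min_accuracy
    no_regression = (deployed_accuracy - new_accuracy) <= criteria.max_accuracy_drop
    return meets_minimum and no_regression

# Example: a re-trained model below the pre-defined minimum is not released.
criteria = AcceptanceCriteria(min_accuracy=0.95, max_accuracy_drop=0.01)
print(may_release(new_accuracy=0.93, deployed_accuracy=0.96, criteria=criteria))  # False
```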

What is the EU AI Act (AIA)?

The European AI Act represents a groundbreaking framework aimed at fostering innovation whilst ensuring ethical and trustworthy AI practices. It follows a risk-based approach, defines mandatory requirements for the design, development and operation of AI systems, and regulates how market surveillance is to be conducted. The AIA thus introduces requirements in addition to the existing Medical Device Regulation (MDR). Our team is here to help you prepare your AI-enabled medical devices for alignment and compliance with both the MDR and the AIA.

Do I have to follow the AIA’s regulations?

Yes. Once the AIA becomes effective, all Medical Device Software that uses AI must comply with its provisions. Companies will likely be granted a transition period of two years once the regulation is enacted. Considering that significant changes to your product later on require considerable time and resources, we highly recommend proactively integrating the anticipated provisions of the AIA into your development process.

Which risk categories are there for AI systems?
  • Unacceptable Risk: This category contains AI systems that are intended to manipulate human behavior, exploit the vulnerabilities of certain groups, or distort behavior through subliminal techniques. Such AI systems are prohibited.
  • High Risk: This category includes AI systems with a high potential for harm. Medical Device Software using AI that is classified as class IIa or higher will be categorized as high risk, since these are Medical Devices that must undergo a third-party conformity assessment.
  • Low Risk: This category includes AI systems that are assessed to have a low potential of causing harm. Although these systems are subject to less strict regulation, they must nonetheless comply with a number of essential requirements that ensure safety and uphold fundamental rights.
  • Minimal Risk: This category is for AI systems that pose minimal or no risk. It includes AI systems interacting with people, such as chatbots or recommendation systems, as well as administrative AI systems such as spam filters or predictive-maintenance systems. Administrative AI systems need no additional regulatory compliance, while AI-generated interactions and deepfakes have to be labeled.

What are the additional key requirements from the EU AI Act?

Many factors play a role here:

  • Devices shall be designed and developed to meet accuracy, robustness, cybersecurity and human oversight requirements.
  • Processes shall be integrated into the Quality Management System for data management, including data collection, data analysis, data labelling, and more.
  • Performance shall be monitored and evaluated during the post-market phase.

In the context of the EU AI Act, what do accuracy, robustness, and human oversight mean?

Accuracy: Calculated performance indicator of how well the decisions or predictions of the AI system match reality. The level of accuracy shall be communicated to the user.
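
As a purely illustrative example, the following minimal Python sketch shows how such an indicator could be calculated for a hypothetical classification model and reported to the user. The names model, X_test and y_test are placeholders, and scikit-learn is assumed to be available; this is a sketch, not a validated method.

```python
# Minimal, illustrative sketch: calculating the accuracy of a hypothetical
# classification model against a labelled test set and reporting it to the user.
from sklearn.metrics import accuracy_score  # assumes scikit-learn is installed

def report_accuracy(model, X_test, y_test):
    """Share of predictions that match the ground-truth labels (0.0 to 1.0)."""
    y_pred = model.predict(X_test)        # decisions/predictions of the AI system
    acc = accuracy_score(y_test, y_pred)  # how well they match reality
    print(f"Accuracy on the test set: {acc:.1%}")  # the level communicated to the user
    return acc
```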

Robustness: Represents the AI system’s resilience and stability when faced with real-world challenges, uncertainties and variations in input data or operational conditions.
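
One common, purely illustrative way to probe this is to compare performance on clean and slightly perturbed inputs. The sketch below re-uses the hypothetical model and test data from the accuracy example and assumes numerical input features stored as NumPy arrays.

```python
# Illustrative sketch only: probing robustness by adding small random noise to
# the test inputs and checking how much the model's performance degrades.
import numpy as np

def robustness_check(model, X_test, y_test, noise_scale=0.01, seed=0):
    """Compare accuracy on clean inputs vs. slightly perturbed inputs."""
    rng = np.random.default_rng(seed)
    X_noisy = X_test + rng.normal(0.0, noise_scale, size=X_test.shape)  # input variations
    clean_acc = (model.predict(X_test) == y_test).mean()
    noisy_acc = (model.predict(X_noisy) == y_test).mean()
    print(f"clean accuracy: {clean_acc:.1%}, perturbed accuracy: {noisy_acc:.1%}")
    return clean_acc - noisy_acc  # a large drop indicates low robustness
```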

Human Oversight: Enabling users to monitor and interpret the outputs of the AI system and providing means for them to intervene if necessary.

Who can help me comply with the AIA?

We offer comprehensive assistance:

  • EU AI Act – Gap Assessment
  • Software development
  • AI support
  • Assumption of the manufacturer function (Contract Manufacturer Service)
  • Regulatory strategy
  • Clinical strategy
  • Communication with Notified Body

Have a question that wasn't answered here?
We're more than happy to answer any of your inquiries:


CONTACT US TODAY!