mgregulatory.com

Are All AI Medical Devices High-Risk?

AI promises to enhance diagnostics, personalise treatments, and optimise resource allocation across healthcare systems. In recognition of both its potential and its risks, the EU adopted the Artificial Intelligence Act (Regulation (EU) 2024/1689), which entered into force on 1 August 2024 (Team-NB). Unlike sector-specific laws, it applies horizontally, using a risk-based approach to regulate AI systems according to their potential impact on safety and fundamental rights.

Download the Checklist for AI Medical Devices

Why Medical Device AI Is “High‑Risk”

This question may surprise some readers, but yes, most AI in medical devices is considered "high-risk." Let's review where this is written and what it says.

Under the AI Act's risk-based scheme, AI systems fall into four tiers:

Prohibited (unacceptable-risk) practices

High-risk AI

Limited-risk AI

Minimal-risk AI

Important point: Any AI‑enabled software that qualifies as a medical device (MDR 2017/745) or IVD (IVDR 2017/746) and requires Notified‑Body review is automatically a high‑risk AI system under the AI Act.

The AI Act defines four levels of risk for AI systems.

Hearing this, you may ask: is there any possibility for my device to avoid the AI Act because its risk is not high? By definition, if your device is not reviewed by a Notified Body, you may escape the high-risk classification. In practice, that means Class I medical devices and Class A IVDs.

Emergo by UL underscores this in plain language:

“An AI‑enabled medical device or IVD which requires Notified Body involvement … would likely be categorized as a high‑risk AI system. Exceptions may exist only for Class I devices where no NB is involved—but in practice, most AI‑containing medical devices fall into high‑risk.”
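That rule of thumb can be sketched as a simple check. This is a minimal illustration only, assuming a device record that knows its MDR/IVDR class; the class labels and the `requires_notified_body` helper are hypothetical naming, not AI Act terminology:

```python
# Minimal sketch of the Article 6(1)(b) logic: an AI-enabled device that needs
# Notified Body involvement under MDR/IVDR is a high-risk AI system.
# Class labels and helper names are illustrative, not official terminology.

# MDR/IVDR classes that can typically be self-certified (no Notified Body).
SELF_CERTIFIED_CLASSES = {
    "MDR Class I (non-sterile, non-measuring, non-reusable)",
    "IVDR Class A (non-sterile)",
}

def requires_notified_body(device_class: str) -> bool:
    """True when conformity assessment involves a Notified Body."""
    return device_class not in SELF_CERTIFIED_CLASSES

def is_high_risk_ai(device_class: str, contains_ai: bool) -> bool:
    """AI + Notified Body involvement => high-risk AI system."""
    return contains_ai and requires_notified_body(device_class)

print(is_high_risk_ai("MDR Class IIa", contains_ai=True))               # True
print(is_high_risk_ai("IVDR Class A (non-sterile)", contains_ai=True))  # False
```

The point of the sketch: the classification hinges on Notified Body involvement, not on how harmless the intended use feels.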

Why this matters to you

Let’s get one thing straight: your little AI tool for sniffles can’t just call itself “low‑risk” because it seems harmless.

Nope. If it needs a Notified Body stamp under the MDR/IVDR, it’s automatically “high‑risk” under Article 6(1)(b)—no excuses allowed.

And don’t even think about skimping on your paperwork: right in your Technical Documentation, note the device class, the Notified Body checks, and exactly why you believe the law applies.

That way, when someone asks, "But is it high-risk?" you can confidently say, "Check Article 6(1) and Annex I; it's spelled out in black and white."

In short: if your AI needs a third-party review, it's high-risk. Period, end of story.


Key Obligations for High‑Risk Medical Device AI

Before diving into the specific requirements, it’s helpful to understand why these obligations matter—and how they slot into your existing quality and regulatory framework.

Under the AI Act, high‑risk AI systems don’t just carry a fancy label: they trigger a comprehensive set of rules designed to protect patients, healthcare professionals, and fundamental rights.

For AI‑enabled medical devices, this means you’re not merely updating software—you’re managing a living, learning system that can change over time, which brings new safety, bias, and traceability challenges.

To keep pace, you must weave AI‑specific processes into your established MDR/IVDR quality management system (QMS), ensuring that from the moment data enters your model to every post‑market software update, risks are anticipated, documented, and mitigated.

Only by treating AI as a first‑class citizen in your QMS can you guarantee robust performance, clear accountability, and smooth conformity assessment with both MDR/IVDR and the AI Act.

High‑risk AI systems must satisfy the following core requirements under Chapter III, Section 2 of the AI Act:

Risk & Quality Management

Data & Data Governance

Technical Documentation

Transparency & User Information

Human Oversight

Logging & Traceability

CE Marking & Registration

Timelines & Conformity Assessment

Manufacturers beware: from the moment the AI Act entered into force on 1 August 2024, the clock has been ticking.

Missing the key deadlines means risking market exclusion, overbooked Notified Bodies, and frantic last‑minute scrambles.

Here’s your countdown roadmap – stick to it, or pay the price:

1 August 2024:

The AI Act formally entered into force across the EU.

From this date, the Act's transition periods began to run; as an EU regulation, it applies directly in Member States without national transposition.

2 August 2025:

Governance rules and the obligations for general-purpose AI models become applicable.

2 August 2026:

The AI Act becomes generally applicable, including the requirements for high-risk AI systems listed in Annex III.

2 August 2027:

The high-risk obligations apply to AI systems embedded in products covered by the EU harmonisation legislation listed in Annex I, including medical devices under MDR/IVDR. For most AI-enabled medical devices, this is the deadline that matters.

Assessment Route

Conformity assessment for high‑risk medical‑device AI will follow your established MDR/IVDR procedures (Article 43(3)), now extended to cover AI Act requirements.

You should aim for a single, combined audit with your Notified Body—duplication of effort is the last thing anyone needs as deadlines loom.

Industry Recommendations

MedTech Europe and Team-NB both stress that without swift, stakeholder-driven guidance, manufacturers will be left guessing which AI Act requirements apply when. They call on the Commission to publish detailed guidelines well before 2027, incorporating input from the MDCG, notified bodies, and industry.

Equally important is the alignment of standards—developing harmonised, horizontal AI norms that integrate seamlessly with existing vertical medical‑device standards to avoid contradictory obligations.

And to top it off, the industry is calling for a single conformity procedure that bundles MDR/IVDR and AI Act assessments into one streamlined audit, sparing both manufacturers and Notified Bodies from needless duplication as deadlines loom.

Tips for AI Act Compliance in Your QMS

Conduct an AI‑Act Gap Analysis

Conducting an AI‑Act gap analysis is like running a targeted audit on steroids: you take your existing MDR/IVDR Technical File and QMS, lay it out side by side with every high‑risk AI requirement in the AI Act (from risk management in Article 9 through data governance in Article 10 to transparency obligations in Article 13), and systematically tick off what you already cover—and, more importantly, what you don’t.

Start by drafting a simple cross‑reference matrix: list each AI Act clause in one column and your corresponding procedure or document in the next.

When you hit an empty cell—no record of logging model inputs, no documented dataset provenance, no clear user‑information template—that’s your gap.

For each gap, assign a priority (e.g., “must have before conformity assessment” vs “can update in next release”) and immediately sketch out an owner-and-deadline plan.
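The cross-reference matrix and gap list described above can be sketched in a few lines. The clause selection and document names below are illustrative examples, not a complete clause map:

```python
# Sketch of an AI-Act gap matrix: each clause maps to the QMS artifact that
# covers it, or to None when nothing does. Artifact names are illustrative.
matrix = {
    "Art. 9  Risk management":  "SOP-RM-01 Risk Management Procedure",
    "Art. 10 Data governance":  None,   # gap: no dataset-provenance record
    "Art. 12 Record-keeping":   None,   # gap: no model input/output logging
    "Art. 13 Transparency":     "IFU-07 Instructions for Use",
}

# An empty cell is a gap; tag each one with a priority (owner and deadline
# would follow in a real plan).
gaps = [(clause, "must have before conformity assessment")
        for clause, artifact in matrix.items() if artifact is None]

for clause, priority in gaps:
    print(f"GAP: {clause} -> {priority}")
```

Even this toy version makes the exercise concrete: the empty cells, not the filled ones, become your work plan.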

Don’t forget to include artifacts beyond the Technical File: update your FMEAs with AI‑specific hazards, review your change‑control protocols to capture model retraining, and ensure your QMS’s document‑control module can version and archive dataset snapshots.

By the end of this exercise, you’ll have a clear, actionable roadmap that transforms “AI‑Act compliance” from a murky ambition into a series of concrete tasks—and you’ll avoid that panicked “why didn’t we do this sooner?” conversation with your Notified Body.

Update Your Risk Management File

Your risk management file is no longer a static list of checkboxes – Article 9 of the AI Act requires a living, AI‑tailored risk management system that continuously identifies, evaluates, and mitigates hazards unique to machine learning, such as model drift, algorithmic bias, and cybersecurity vulnerabilities.

As Team‑NB makes clear, you must ensure your risk and quality management processes are fully compliant with Articles 9 and 17, extending your FMEAs to cover mispredictions, unintended correlations, and privacy infringements at every stage of the AI lifecycle.

Practically, this means any change in your training data, model architecture, or deployment environment must automatically trigger a revision of your hazard logs, with severity and probability ratings updated in line with real‑world performance metrics.

Embed these triggers into your change‑control workflow so that no software update or dataset refresh slips through unassessed.
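One way to picture those triggers: a change-control hook that reopens the hazard log whenever a risk-relevant change type comes through. This is a minimal sketch; the change categories and field names are assumptions, and a real QMS tool would persist and route these records:

```python
# Sketch of change-triggered risk review: any change to training data, model
# architecture, or deployment environment queues a hazard-log re-assessment.
from dataclasses import dataclass, field

RISK_RELEVANT = {"training_data", "model_architecture", "deployment_environment"}

@dataclass
class HazardLog:
    open_reviews: list = field(default_factory=list)

    def register_change(self, change_type: str, description: str) -> bool:
        """Return True (and queue a review) when the change is risk-relevant."""
        if change_type in RISK_RELEVANT:
            self.open_reviews.append(f"Re-assess hazards after: {description}")
            return True
        return False

log = HazardLog()
log.register_change("training_data", "added 2025-Q1 imaging dataset")  # queued
log.register_change("ui_copy", "reworded a tooltip")                   # ignored
print(log.open_reviews)
```

The design point is that the trigger is automatic: nobody has to remember that a dataset refresh is a risk-management event.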

By treating risk management as a dynamic, end‑to‑end process—from data ingestion through post‑market monitoring—you not only satisfy the AI Act but also safeguard patient safety against the unpredictable quirks of adaptive algorithms.

Strengthen Data Governance

Strengthening your data governance isn’t just a nice‑to‑have—it’s a legal mandate under Article 10 of the AI Act, which insists that “training, validation and testing data sets shall be relevant, sufficiently representative, and … free of errors and complete given the intended purpose” (Art 10(3)).

In practice, this means you need rigorous version control and provenance tracking so that every model output can be traced back to the exact data snapshot that produced it.

Notified Bodies are explicitly empowered (Annex VII, 4.3) to demand full access to these datasets during conformity assessment, so any gaps in your governance will slow—or even block—approval.

Finally, don’t overlook privacy: your GDPR‑compliant consent forms and data‑use agreements must cover not only the initial training but also any subsequent retraining or NB‑led testing. Implement secure transfer protocols and clear data‑protection agreements with third parties to ensure that when an auditor comes knocking, you can hand over every dataset’s provenance report without breaking a sweat.
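Provenance tracking can be as simple as fingerprinting each dataset snapshot so that a model version points to the exact data that produced it. The sketch below hashes a manifest rather than the underlying files, and all field names are illustrative:

```python
# Sketch of dataset provenance: a deterministic fingerprint per data snapshot,
# linked to the model version it trained. A real pipeline would hash the
# files themselves; here we hash a canonicalised manifest.
import hashlib
import json

def snapshot_id(manifest: dict) -> str:
    """Deterministic short fingerprint of a dataset manifest."""
    canonical = json.dumps(manifest, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:12]

manifest = {
    "name": "training-set",
    "version": "2025-01",
    "records": 12480,
    "source": "site-A export",   # illustrative provenance fields
}

provenance = {"model": "model-v3", "dataset_snapshot": snapshot_id(manifest)}
print(provenance)
```

Because the hash is computed over sorted keys, the same manifest always yields the same fingerprint, which is exactly the traceability property an auditor will probe.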

Embed Transparency by Design

Embedding transparency by design means turning your end‑user materials—like the Instructions for Use (IFUs), online help pages, and training modules—into a clear window on how your AI “thinks.”

Article 13 of the AI Act explicitly requires high-risk systems to come with concise information about the provider (name, contact), the system's intended purpose, its capabilities and limitations, any known risks, and detailed instructions on interpreting outputs and maintaining the system.

In practice, this means your IFUs shouldn't just list "AI inside": they must explain, in everyday language, what data the model needs, how it processes that data, its sensitivity and specificity under different conditions, examples of likely failure modes, and what actions the clinician should take if the AI's flag doesn't match their clinical judgment.

You can—and should—leverage your existing MDR/IVDR documentation frameworks to house these transparency artifacts, integrating AI‑specific sections into the same technical dossier you already use for safety and performance information (MedTech Europe).

By baking these user‑focused explanations into every release note, e‑learning slide, and popup tooltip, you not only tick the AI Act’s box but also build clinician trust—because nothing is more reassuring than knowing exactly how (and when) your AI might trip up.

Plan for Post‑Market Surveillance

Planning for post‑market surveillance under the AI Act means you must transform your PMS from a static, checkbox exercise into a dynamic feedback engine that continuously gauges how your AI behaves in the real world.

Article 72 of the AI Act mandates that every provider of a high‑risk AI system “shall establish and document a post‑market monitoring system … proportionate to the nature of the AI technologies and the risks” and “actively and systematically collect, document and analyse relevant data … throughout their lifetime”.

In practical terms, this requires integrating automated logging (as per Article 12’s recording obligations) into your device so that every model input, decision, and anomaly is captured.

Those logs feed directly into your PMS and vigilance processes, flagging drifts in performance, unexpected biases, or cybersecurity events—each iteration prompting a review that may trigger retraining, software patches, or updated risk assessments.
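The loop above, logging every event and flagging performance drift for review, can be sketched as follows. This is an assumption-laden illustration: the rolling-window approach, the confidence metric, and the threshold value are design choices, not values taken from the AI Act:

```python
# Sketch of Article 12-style logging feeding post-market surveillance:
# record every prediction, then flag a drift review when the mean confidence
# over a recent window drops below a review threshold (values illustrative).
from collections import deque

class PmsLogger:
    def __init__(self, window: int = 100, threshold: float = 0.70):
        self.events = []                    # full audit trail
        self.recent = deque(maxlen=window)  # rolling performance window
        self.threshold = threshold

    def log(self, inputs: dict, prediction: str, confidence: float) -> None:
        self.events.append({"inputs": inputs, "prediction": prediction,
                            "confidence": confidence})
        self.recent.append(confidence)

    def drift_flagged(self) -> bool:
        """True when mean recent confidence falls below the review threshold."""
        if not self.recent:
            return False
        return sum(self.recent) / len(self.recent) < self.threshold

logger = PmsLogger(window=3)
for conf in (0.9, 0.6, 0.55):               # confidence slipping over time
    logger.log({"scan": "case"}, "finding", conf)
print(logger.drift_flagged())               # mean 0.683 < 0.70 -> True
```

A flagged drift would then open the review described above, potentially ending in retraining, a patch, or an updated risk assessment.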

Crucially, by February 2026, the Commission will provide a template for the post‑market monitoring plan, which becomes part of your technical documentation, but you should start designing your system now so you aren’t scrambling when templates arrive.

Aligning these AI‑specific surveillance measures with your existing MDR vigilance duties ensures you meet dual conformity: you demonstrate not only device safety under MDR/IVDR but also continuous compliance with the AI Act’s novel lifecycle demands.

Engage Early with Notified Bodies

Under Article 43(3) of the AI Act, an integrated conformity assessment is not a suggestion but a requirement: your Notified Body must evaluate both MDR/IVDR and AI Act criteria in a single procedure.

Yet, as Team‑NB cautions, building NB capacity and securing formal designation at the national level can be a lengthy process, with the risk that few bodies will be ready by 2 August 2027.

At the same time, if your device falls into Class IIa or higher, the moment you involve a Notified Body, it automatically assumes responsibility for assessing AI Act obligations as part of its review.

To avoid the all‑too‑common late‑stage panic—“Oops, our NB isn’t authorized for AI Act scope!”—start conversations early: share your AI pipeline, draft Technical Documentation, data‑access protocols and risk management strategies with your NB now, align on a combined audit schedule, and agree on dataset‐provenance review workflows. That way, you secure a streamlined, single audit rather than a frantic two‐step scramble at the eleventh hour.

Monitor Evolving Guidance

The AI Act itself makes clear that detailed implementation guidance and harmonised standards are still on the way, so you can’t simply “set and forget” your QMS and walk away.

Article 96 explicitly tasks the European Commission (and by extension the Medical Device Coordination Group, MDCG) with issuing practical implementation guidelines, for example on how to interpret high-risk obligations or align them with MDR/IVDR procedures, yet those guidelines won't land all at once.

Likewise, Article 40 calls for harmonised CEN/CENELEC standards that would give you a presumption of conformity, but early requests made in May 2023 suggest delays are likely and full standardisation may not be in place by the 2027 deadline.

MedTech Europe has underlined the need for “robust implementation guidelines … providing necessary clarity, guaranteeing alignment with existing legislation, and streamlining administrative requirements” to prevent manufacturers from floundering in regulatory ambiguity.

In practice, this means assigning one of your QMS champions to monitor Commission and MDCG publications weekly, reviewing new draft guidance or standardisation updates the moment they appear, and baking those insights directly into your procedures, technical files and training materials—so when the official documents finally arrive, you’ve already anticipated their impact and can update your system without a last‑minute scramble.


To conclude

In this fast‑evolving regulatory landscape, compliance with the EU AI Act is no longer optional for medical‑device manufacturers—it’s a strategic imperative.

From robust risk management and data governance to transparent user information and comprehensive post‑market surveillance, every step in your QMS must reflect the AI Act’s high‑risk requirements. With hard deadlines looming, early engagement with Notified Bodies and proactive monitoring of MDCG guidance and harmonised standards will ensure your AI‑enabled medical devices achieve CE marking smoothly and maintain market access.

At Easy Medical Device, we specialize in end‑to‑end AI Act compliance for medical devices. Our experts conduct full AI‑Act gap analyses, update your risk management and data‑governance frameworks, embed transparency‑by‑design into your IFUs, and set up dynamic post‑market surveillance systems. Whether you need a combined MDR/IVDR and AI Act conformity assessment strategy or hands‑on support drafting Technical Documentation, our team is ready to guide you.

Ready to turn AI compliance from a daunting checklist into a competitive advantage? Contact Easy Medical Device today to schedule your tailored AI Act readiness evaluation.