EU AI Act Compliance for Healthcare
A comprehensive guide for Hospital CIOs, Health System Compliance Officers, and MedTech companies navigating the intersection of EU AI Act and Medical Device Regulation (MDR) requirements.
Key Takeaways
Essential points for EU AI Act compliance in healthcare
Diagnostic AI is typically high-risk AND a medical device
These systems face dual regulatory requirements under both MDR and EU AI Act
Clinical decision support varies by implementation
Systems that inform but do not direct clinical decisions may have different classifications
Administrative AI is usually minimal risk
Scheduling, billing, and operational AI typically falls outside high-risk categories
MDR conformity assessment may satisfy EU AI Act
For AI that qualifies as a medical device, the pathways can be coordinated
Clinician-in-the-loop is mandatory for high-risk systems
Article 14 human oversight requirements align with clinical safety expectations
Recommended Action Timeline
Prioritized steps to achieve EU AI Act compliance by August 2026
Q1 2026
- Inventory AI systems
- Classify by risk level
- Determine MDR vs. EU AI Act applicability
Q2 2026
- Coordinate conformity assessment pathways
- Implement clinical oversight workflows
- Begin documentation
Q3 2026
- Complete technical documentation
- Implement evidence collection
- Conduct audit readiness review
August 2026
- High-risk system compliance deadline: 2 August 2026 for Annex III systems (high-risk AI that is a medical device under Annex I has until 2 August 2027)
Healthcare AI classification
Healthcare AI presents unique classification challenges because systems may be regulated as medical devices, high-risk AI systems under the EU AI Act, or both. Understanding these overlapping categories is essential for compliance planning. The table below summarizes common use cases; an illustrative inventory-record sketch follows it.
| AI Use Case | EU AI Act Classification | MDR Applicable? |
|---|---|---|
| Diagnostic imaging AI | High-risk | Yes (Class IIa-III) |
| Pathology analysis | High-risk | Yes (Class IIa-III) |
| Treatment recommendation | High-risk | Likely yes |
| Surgical planning AI | High-risk | Yes |
| Drug dosing AI | High-risk | Likely yes |
| Reference CDS (informational) | Likely not high-risk | Possibly |
| Patient scheduling | Not high-risk | No |
| Billing/coding AI | Not high-risk | No |
| General health chatbot | Limited risk | No |
| Symptom triage chatbot | Potentially high-risk | Possibly |
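To keep the dual classification auditable, it helps to record both determinations, and the rationale behind them, in a single inventory entry. Below is a minimal Python sketch of such a record; the field names and enums are illustrative assumptions, not terms mandated by either regulation.

```python
from dataclasses import dataclass
from enum import Enum

class AIActRisk(Enum):
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

class MDRClass(Enum):
    NOT_A_DEVICE = "n/a"
    I = "I"
    IIA = "IIa"
    IIB = "IIb"
    III = "III"

@dataclass
class AISystemRecord:
    """One entry in the AI inventory (illustrative schema)."""
    name: str
    intended_purpose: str   # the intended purpose drives both classifications
    ai_act_risk: AIActRisk
    mdr_class: MDRClass
    rationale: str          # documented classification rationale

    @property
    def dual_regulated(self) -> bool:
        # True when both MDR and EU AI Act high-risk obligations apply
        return (self.ai_act_risk is AIActRisk.HIGH
                and self.mdr_class is not MDRClass.NOT_A_DEVICE)

# Example: a diagnostic imaging system from the table above
imaging = AISystemRecord(
    name="Chest CT nodule detection",
    intended_purpose="Detect pulmonary nodules for radiologist review",
    ai_act_risk=AIActRisk.HIGH,
    mdr_class=MDRClass.IIA,
    rationale="Provides diagnostic information used in patient care",
)
assert imaging.dual_regulated
```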
Diagnostic AI: Typically high-risk and medical device
AI systems used for diagnosis, screening, or detection of disease are typically medical devices under MDR (if they provide diagnostic information used in patient care) and high-risk under the EU AI Act. Article 6(1) classifies an AI system as high-risk when it is a product, or a safety component of a product, covered by the Union harmonisation legislation listed in Annex I, which includes the MDR, and subject to third-party conformity assessment.
- Radiology AI for detecting tumors or abnormalities
- Pathology AI for analyzing tissue samples
- Screening tools for disease risk assessment
- Diagnostic support for specific conditions
Clinical decision support: Depends on intended purpose
Clinical decision support (CDS) systems occupy a spectrum from minimal risk to high-risk depending on their design and intended purpose.
Likely high-risk: CDS that provides treatment recommendations, systems that generate differential diagnoses, AI that recommends medication dosing, tools that guide surgical planning.
Potentially NOT high-risk: CDS that provides reference information, systems that surface relevant literature without recommendations, tools that assist with documentation without clinical guidance.
Administrative AI: Usually minimal risk
Administrative and operational AI typically falls outside high-risk categories:
- Scheduling optimization: not high-risk unless it affects clinical priority
- Billing and coding: administrative function
- Resource allocation: may be high-risk if it affects patient access to care
Watch areas include patient flow systems that affect care delivery timing, resource allocation that determines who receives treatment, and triage systems that affect clinical prioritization.
MDR and EU AI Act coordination
When AI qualifies as a medical device, the relationship between MDR and EU AI Act is crucial for compliance planning.
When AI is a medical device
AI qualifies as a medical device when its intended purpose falls within the MDR definition (Article 2(1)): software intended for the diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of disease, including software that provides information used for diagnostic or therapeutic decisions.
MDR Classification: Class I (lowest risk, self-certification), Class IIa (lower moderate risk, Notified Body required), Class IIb (higher moderate risk), Class III (highest risk). Under MDR classification Rule 11, most diagnostic and clinical AI software falls into Class IIa or higher.
Coordinated conformity assessment
Article 43(3) of the EU AI Act provides that for high-risk AI systems covered by the MDR, the provider follows the MDR conformity assessment procedure with the EU AI Act requirements incorporated into it. In practice this enables an MDR conformity assessment that incorporates EU AI Act requirements, an integrated quality management system covering both regulations, and a single Notified Body assessment addressing both frameworks.
Gap analysis: MDR vs. EU AI Act
Key gaps to address beyond MDR requirements:
- Human oversight documentation: MDR covers clinical validation; EU AI Act requires documented human oversight mechanisms
- Bias and fairness: MDR focuses on clinical safety; EU AI Act adds fairness and discrimination concerns
- Evidence integrity: EU AI Act requires verifiable evidence with integrity mechanisms
- Transparency to patients: EU AI Act has specific transparency requirements for AI systems
Clinical safety and human oversight
Article 14 human oversight requirements intersect with existing clinical governance expectations. For healthcare AI, this means formalizing clinician-in-the-loop processes.
Clinician-in-the-loop requirements
The EU AI Act requires that the humans assigned to oversight can:
- Understand the AI system's capabilities and limitations
- Monitor the AI system's operation
- Interpret the AI system's outputs correctly
- Decide to disregard, override, or reverse AI outputs
- Intervene in the operation or stop the system
Healthcare implementation includes clinical validation of AI outputs before action, clear display of AI confidence levels and limitations, override mechanisms for clinician judgment, and documentation of clinical review and decisions.
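As a concrete illustration of the documentation point, a review record might capture each clinician decision together with its rationale. This is a minimal sketch under assumed names (`ClinicalReview` and `record_review` are hypothetical, not part of any regulation or standard):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class ReviewDecision(Enum):
    ACCEPTED = "accepted"
    MODIFIED = "modified"
    OVERRIDDEN = "overridden"

@dataclass(frozen=True)
class ClinicalReview:
    """One clinician decision about one AI output (illustrative schema)."""
    ai_output_id: str
    reviewer_id: str
    decision: ReviewDecision
    rationale: str        # required for overrides, per local policy
    reviewed_at: datetime

def record_review(ai_output_id: str, reviewer_id: str,
                  decision: ReviewDecision, rationale: str = "") -> ClinicalReview:
    # Enforce a documented rationale for overrides, so the decision
    # trail behind Article 14-style oversight stays reconstructable.
    if decision is ReviewDecision.OVERRIDDEN and not rationale.strip():
        raise ValueError("An override must include a documented rationale")
    return ClinicalReview(ai_output_id, reviewer_id, decision,
                          rationale, datetime.now(timezone.utc))
```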
Override and escalation workflows
Design workflows for clinical AI that ensure meaningful human oversight with risk-based routing (a routing sketch follows the table):
| AI Output Type | Clinical Oversight |
|---|---|
| High confidence, routine | Clinician review, may accept without modification |
| Borderline or uncertain | Detailed clinician review required |
| Unexpected or outlier | Senior clinician or specialist review |
| Disagrees with clinical judgment | Escalation with documented rationale |
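A routing function following the table might look like the sketch below; the confidence threshold and input flags are placeholder assumptions that a real deployment would derive from clinical validation rather than hard-code.

```python
from enum import Enum

class ReviewTier(Enum):
    STANDARD = "clinician review, may accept without modification"
    DETAILED = "detailed clinician review required"
    SPECIALIST = "senior clinician or specialist review"
    ESCALATION = "escalation with documented rationale"

def route_ai_output(confidence: float, is_outlier: bool,
                    disagrees_with_clinician: bool,
                    high_confidence_threshold: float = 0.90) -> ReviewTier:
    """Map an AI output to a review tier per the table above.

    The 0.90 threshold is a placeholder; real cut-offs should come
    from clinical validation, not a constant.
    """
    if disagrees_with_clinician:       # escalation path always wins
        return ReviewTier.ESCALATION
    if is_outlier:                     # unexpected or outlier output
        return ReviewTier.SPECIALIST
    if confidence < high_confidence_threshold:
        return ReviewTier.DETAILED     # borderline or uncertain
    return ReviewTier.STANDARD         # high confidence, routine
```

Checking disagreement first, then outliers, then confidence mirrors the table's ordering from routine to exceptional, so no output can fall through to a weaker tier than the table requires.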
Clinical validation evidence
For EU AI Act compliance, maintain evidence in three areas (a metrics sketch follows the list):
- Training and competency: clinician training on AI system use, understanding of limitations
- Operational oversight: review rates and patterns, override frequency and reasons, outcome tracking
- Quality management: regular review of AI performance, incident tracking, continuous improvement
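One way to produce the review-rate and override-frequency figures named above is to aggregate review outcomes directly. A minimal sketch (the decision labels are illustrative):

```python
from collections import Counter

def oversight_metrics(decisions: list[str]) -> dict[str, float]:
    """Summarise review outcomes (e.g. 'accepted', 'modified',
    'overridden') into the rates an auditor is likely to ask for."""
    total = len(decisions)
    if total == 0:
        return {}
    counts = Counter(decisions)
    return {f"{decision}_rate": count / total
            for decision, count in counts.items()}

# Example: 2 overrides in 10 reviews -> overridden_rate = 0.2
print(oversight_metrics(["accepted"] * 7 + ["modified"] + ["overridden"] * 2))
```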
Implementation for health systems
Health systems face the challenge of governing AI systems they deploy but do not develop. Understanding deployer obligations is crucial.
Vendor assessment for AI tools
Before deploying third-party AI, assess:
- EU AI Act compliance status: has the provider completed conformity assessment? Is CE marking in place?
- Human oversight support: does the system support clinical review workflows? Can clinicians override?
- Evidence and audit requirements: what logging does the system provide? How is evidence retained?
Contract requirements for AI vendors
Include in vendor contracts:
- Compliance obligations: confirmation of EU AI Act conformity assessment, commitment to maintain compliance
- Evidence and documentation: access to technical documentation for audits, log retention, evidence export
- Incident management: notification requirements, cooperation with regulatory inquiries, liability allocation
Internal AI development governance
For health systems developing proprietary AI:
- Development governance: AI ethics review process, clinical safety assessment, bias evaluation, documentation
- Deployment governance: clinical validation requirements, pilot and rollout procedures, monitoring
- Lifecycle governance: change management, performance monitoring, incident response, decommissioning
Evidence collection and audit readiness
Healthcare regulators and accreditation bodies will increasingly ask about AI governance. Prepare evidence that satisfies multiple stakeholders.
What healthcare regulators will ask
Expect questions on:
- Clinical safety: How do clinicians validate AI recommendations? What adverse events have occurred?
- Compliance: Which AI systems are medical devices? Have you completed conformity assessment?
- Governance: Who is accountable for AI governance? What training do clinicians receive?
- Evidence: Can you show the AI role in specific patient decisions? How do you ensure evidence integrity?
Multi-stakeholder evidence requirements
Build unified evidence collection that serves:
- Healthcare regulators: patient safety, clinical validation
- EU AI Act authorities: conformity assessment, human oversight
- MDR authorities: medical device compliance, post-market surveillance
- Accreditation bodies: quality management, clinical governance
- Internal audit: control effectiveness, risk management
Evidence retention and integrity
Healthcare-specific considerations (a hash-chain sketch follows the list):
- Retention requirements: patient records typically 10+ years; medical device records 10+ years after the last device is placed on the market; aligned with EU AI Act requirements
- Evidence integrity: tamper-evident storage for AI decision records, integration with clinical documentation systems, cryptographic verification, chain of custody documentation
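For the cryptographic-verification point, a hash chain is one common mechanism: each evidence entry commits to the previous entry's hash, so any retroactive edit breaks verification. The sketch below is illustrative only, not a substitute for a qualified records or archiving system.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_evidence(chain: list[dict], payload: dict) -> dict:
    """Append a tamper-evident entry to the chain.

    Each entry's hash covers its payload plus the previous entry's
    hash, forming a minimal hash chain. Payloads must be
    JSON-serializable.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
        "prev_hash": prev_hash,
    }
    body = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(body).hexdigest()
    chain.append(entry)
    return entry

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash; any edit to a past entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        if entry["prev_hash"] != prev_hash:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True
```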
Implementation Checklist
Track your progress toward EU AI Act compliance with these prioritized action items
90-day priorities (Q1 2026)
- Inventory all AI systems in clinical and operational use
- Classify each by EU AI Act risk level
- Determine MDR applicability for each system
- Document classification rationale
- Assign executive accountability for AI governance
- Establish or extend AI governance committee
- Define clinical oversight roles and responsibilities
- Review AI vendor contracts for compliance obligations
- Assess vendor EU AI Act and MDR compliance status
180-day priorities (Q2 2026)
- Complete Annex IV documentation (or extend MDR technical files)
- Document clinical oversight procedures
- Create fundamental rights impact assessments
- Update risk management documentation
- Implement clinical review workflows for AI outputs
- Deploy evidence capture at decision points
- Configure performance and fairness monitoring
- Establish integrity verification for evidence
- Develop clinician training programs
- Integrate AI review into clinical governance
365-day priorities (Q3-Q4 2026)
- Complete EU AI Act conformity assessment
- Coordinate with MDR conformity assessment where applicable
- Address assessment findings
- Complete Declaration of Conformity
- Generate sample evidence packages
- Conduct audit simulation with clinical and compliance teams
- Test evidence integrity verification
- Establish post-market monitoring processes
- Define incident reporting procedures
- Create compliance monitoring dashboards
Ready to implement EU AI Act compliance?
KLA Digital provides the runtime governance layer healthcare organizations need for EU AI Act compliance: policy checkpoints, approval queues, and audit-ready evidence exports.
Last updated: January 2026. This guide provides general information about EU AI Act compliance for healthcare. Organizations should consult legal counsel for advice specific to their circumstances.
