EU AI Act · January 6, 2025 · 20 min read

How to Classify High-Risk AI Systems Under the EU AI Act

The definitive classification methodology: two pathways to high-risk status, the Article 6(3) exception framework, the profiling override that eliminates all exceptions, and a step-by-step audit-ready classification process.

Antonella Serine

Founder

Risk classification is the single most consequential decision in EU AI Act compliance. Classify correctly and you navigate a defined compliance pathway with clear requirements. Classify incorrectly and you face cascading failures - either wasting resources on unnecessary obligations or, far worse, discovering enforcement exposure only after deployment. With full high-risk requirements taking effect on August 2, 2026, organizations must master this classification framework now.

The Fundamental Architecture: Two Pathways to High-Risk Classification

Article 6 of the EU AI Act establishes two independent pathways through which an AI system becomes high-risk. Understanding these pathways is essential because they trigger different compliance timelines and conformity assessment procedures.

  • Pathway 1 (Article 6(1)): AI systems intended as safety components of products covered by Annex I harmonization legislation requiring third-party conformity assessment
  • Pathway 2 (Article 6(2)): AI systems deployed in eight specific Annex III use case categories where potential for harm is inherently elevated

Pathway 1: Safety Components in Regulated Products

An AI system is automatically classified as high-risk when two cumulative conditions are met. First, the AI system must be intended to be used as a safety component of a product, or the AI system must itself be a product, covered by EU harmonization legislation listed in Annex I. Second, the product must be required to undergo third-party conformity assessment under that legislation.

Annex I encompasses a broad body of EU harmonization legislation - directives and regulations covering machinery, toys, medical devices, in vitro diagnostics, civil aviation, motor vehicles, marine equipment, rail systems, and more.

  • AI-powered surgical robots covered by the Medical Devices Regulation
  • Autonomous vehicle lane-keeping and emergency braking AI under vehicle safety regulations
  • AI-controlled flight systems subject to EU aviation safety rules
  • AI systems monitoring pressure in manufacturing plants under machinery safety legislation
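Expressed as a simple check, the Article 6(1) test reduces to the minimal sketch below. The boolean inputs are hypothetical judgments a classification reviewer would record, not values the code can derive on its own.

```python
def is_high_risk_under_pathway_1(safety_component_or_product: bool,
                                 covered_by_annex_i: bool,
                                 third_party_assessment_required: bool) -> bool:
    """Sketch of the Article 6(1) test: both cumulative conditions must hold.

    All three flags are hypothetical reviewer determinations.
    """
    condition_1 = safety_component_or_product and covered_by_annex_i
    condition_2 = third_party_assessment_required
    return condition_1 and condition_2


# Example: a lane-keeping AI that is a safety component of a type-approved vehicle
print(is_high_risk_under_pathway_1(True, True, True))  # True
```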

Pathway 2: The Eight Annex III Categories

The second pathway captures AI systems deployed in eight specific use case categories where potential for harm to health, safety, or fundamental rights is deemed inherently elevated. These categories operate independently of whether the AI system is embedded in a regulated product. A short data-structure sketch of the categories follows the list.

  • Category 1 - Biometrics: Remote biometric identification, biometric categorization inferring sensitive attributes, emotion recognition
  • Category 2 - Critical Infrastructure: Safety components in digital infrastructure, road traffic, water, gas, heating, electricity supply
  • Category 3 - Education: Access/admission determination, learning outcome evaluation, exam proctoring
  • Category 4 - Employment: Recruitment, selection, promotion, termination, task allocation, performance monitoring
  • Category 5 - Essential Services: Public benefits eligibility, creditworthiness assessment, insurance risk pricing, emergency dispatch triage
  • Category 6 - Law Enforcement: Victim risk assessment, evidence reliability evaluation, recidivism risk assessment, criminal profiling
  • Category 7 - Migration: Risk assessment for border entry, asylum/visa application assessment, individual detection and identification
  • Category 8 - Justice and Democracy: Judicial research assistance, election/referendum influence tools
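For teams keeping an internal classification register, the eight categories map naturally onto a small enumeration. The labels below are our own informal shorthand, not official terminology from the Act.

```python
from enum import Enum

class AnnexIIICategory(Enum):
    """Informal shorthand for the eight Annex III use-case categories."""
    BIOMETRICS = 1
    CRITICAL_INFRASTRUCTURE = 2
    EDUCATION = 3
    EMPLOYMENT = 4
    ESSENTIAL_SERVICES = 5
    LAW_ENFORCEMENT = 6
    MIGRATION = 7
    JUSTICE_AND_DEMOCRACY = 8
```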

The Exception Framework: When Annex III Systems Escape High-Risk

Article 6(3) creates a narrow escape valve for systems that nominally fall within Annex III categories but demonstrably pose no significant risk of harm. This derogation applies only when the AI system does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making, and at least one of the following conditions is met (a minimal decision sketch follows the list):

  • Condition 1: AI system performs a narrow procedural task - limited, well-defined administrative functions without discretionary judgment
  • Condition 2: AI system improves the result of a previously completed human activity - e.g., a tool that refines or polishes work a human has already finished
  • Condition 3: AI system detects decision-making patterns or deviations from prior patterns, without replacing or influencing a previously completed human assessment absent proper human review
  • Condition 4: AI system performs a preparatory task to a relevant assessment - preliminary data gathering feeding into subsequent human judgment
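As a minimal sketch, assuming each flag captures a documented human conclusion about the system, the derogation test is an any-of-four check. The profiling override covered in the next section is applied on top of it and can still force high-risk status.

```python
def article_6_3_condition_met(narrow_procedural_task: bool,
                              improves_completed_human_activity: bool,
                              detects_patterns_without_influence: bool,
                              preparatory_task_only: bool) -> bool:
    """Sketch of the Article 6(3) derogation: at least one condition must apply.

    Flags are hypothetical reviewer conclusions; profiling is checked separately.
    """
    return any([
        narrow_procedural_task,
        improves_completed_human_activity,
        detects_patterns_without_influence,
        preparatory_task_only,
    ])
```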

The Profiling Override: The Exception That Eliminates All Exceptions

Here is where many organizations make catastrophic classification errors. Article 6(3) contains an absolute override: regardless of whether any exception conditions are met, an AI system referred to in Annex III shall always be considered high-risk where the AI system performs profiling of natural persons.

Profiling - a term the AI Act borrows from Article 4(4) GDPR - means any automated processing of personal data to evaluate certain personal aspects relating to a natural person, particularly analyzing or predicting performance at work, economic situation, health, preferences, interests, reliability, behavior, location, or movements.

This override has sweeping implications. A recruitment tool might perform only a narrow procedural task of parsing resumes - but if it evaluates candidates to predict job performance, it profiles natural persons and is automatically high-risk. A credit pre-qualification tool might merely perform preparatory assessments - but if it analyzes personal data to predict creditworthiness, it profiles and is automatically high-risk.
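Putting the override and the derogation together, the decision order matters: profiling is checked first, and only then do the exception conditions count. A hedged sketch, with both flags standing in for documented human determinations:

```python
def classify_annex_iii_system(performs_profiling: bool,
                              exception_condition_met: bool) -> str:
    """Sketch of the Article 6(3) profiling override.

    Both flags are hypothetical human determinations about the system.
    """
    if performs_profiling:
        return "high-risk"       # override: exception conditions are irrelevant
    if exception_condition_met:
        return "not high-risk"   # derogation applies (document the assessment)
    return "high-risk"           # Annex III default


# The resume-parsing example above: narrow task, but it predicts job performance
print(classify_annex_iii_system(performs_profiling=True,
                                exception_condition_met=True))  # high-risk
```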

Step-by-Step Classification Methodology

Given the complexity of the classification framework, organizations need a systematic approach that documents each analytical step for audit-ready classification. A code sketch tying the seven steps together follows the list.

  • Step 1 - Scope Determination: Confirm the system meets the Article 3(1) AI system definition
  • Step 2 - Prohibited Practices Screen: Verify system is not prohibited under Article 5
  • Step 3 - Pathway 1 Analysis: Assess safety component status under Annex I legislation
  • Step 4 - Pathway 2 Analysis: Match intended purpose against eight Annex III categories
  • Step 5 - Exception Analysis: If Annex III applies, evaluate Article 6(3) conditions
  • Step 6 - Profiling Override Check: Determine if system profiles natural persons (eliminates all exceptions)
  • Step 7 - Documentation: Record complete classification analysis with supporting reasoning
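The sketch below pulls the seven steps into a single audit record. The class and field names are hypothetical, each boolean stands in for a documented human judgment, and the reasoning list is the Step 7 audit trail.

```python
from dataclasses import dataclass, field

@dataclass
class ClassificationRecord:
    """Hypothetical audit record covering Steps 1-7 of the methodology."""
    system_name: str
    is_ai_system: bool               # Step 1: meets the Article 3(1) definition
    is_prohibited_practice: bool     # Step 2: caught by Article 5
    pathway_1_high_risk: bool        # Step 3: Annex I safety component test
    annex_iii_match: bool            # Step 4: intended purpose matches Annex III
    exception_condition_met: bool    # Step 5: an Article 6(3) condition applies
    performs_profiling: bool         # Step 6: profiles natural persons
    reasoning: list[str] = field(default_factory=list)  # Step 7: documentation

    def classify(self) -> str:
        if not self.is_ai_system:
            self.reasoning.append("Outside the Article 3(1) definition; the AI Act does not apply.")
            return "out-of-scope"
        if self.is_prohibited_practice:
            self.reasoning.append("Article 5 prohibited practice; cannot be placed on the market.")
            return "prohibited"
        if self.pathway_1_high_risk:
            self.reasoning.append("Pathway 1: safety component under Annex I legislation.")
            return "high-risk"
        if self.annex_iii_match:
            if self.performs_profiling:
                self.reasoning.append("Annex III match and profiling: override applies.")
                return "high-risk"
            if self.exception_condition_met:
                self.reasoning.append("Annex III match, but an Article 6(3) condition applies; document the assessment.")
                return "not high-risk"
            self.reasoning.append("Annex III match and no derogation condition met.")
            return "high-risk"
        self.reasoning.append("No pathway matched; assess remaining transparency or minimal-risk duties.")
        return "not high-risk"
```

Filling the record during review and archiving both the returned label and the reasoning list preserves the step-by-step analysis that auditors and market surveillance authorities will expect to see.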

Common Classification Errors to Avoid

Research and advisory experience reveals consistent patterns of classification mistakes that organizations must actively guard against.

  • Technology-centric analysis: Classification depends on intended purpose, not underlying technology capabilities
  • Underestimating profiling scope: Any system processing personal data to make predictions about individuals likely triggers the override
  • Assuming human-in-the-loop eliminates high-risk: If AI outputs strongly influence human decisions, the system remains high-risk
  • Ignoring use case evolution: Feature additions and expanded deployment can cross classification thresholds
  • Misapplying sector carve-outs: Narrow exceptions (e.g., fraud detection) may not apply when the profiling override is triggered
  • Assuming B2B reduces risk: Business deployment does not affect classification - affected individuals remain protected

What High-Risk Classification Triggers

High-risk classification activates the full Chapter III compliance framework: continuous risk management systems, data governance requirements for training datasets, Annex IV technical documentation, quality management systems, human oversight obligations, conformity assessment procedures, EU database registration, post-market monitoring, and serious incident reporting.

The compliance burden is substantial but defined. Organizations that accurately classify their systems can build targeted compliance programs. Organizations that misclassify face either wasted resources on unnecessary compliance or enforcement exposure for inadequate compliance.

Frequently Asked Questions

What if my system partially fits a high-risk category?

For borderline cases where classification remains uncertain, the conservative approach is treating the system as high-risk. The consequences of over-classification - unnecessary but manageable compliance costs - are far less severe than under-classification: enforcement penalties reaching EUR 15 million or 3% of global annual turnover, whichever is higher.
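To make the "whichever is higher" mechanic concrete, a quick illustrative calculation (the turnover figure is made up):

```python
def fine_ceiling_eur(global_annual_turnover_eur: float) -> float:
    """Illustrative ceiling for most high-risk violations:
    EUR 15 million or 3% of worldwide annual turnover, whichever is higher."""
    return max(15_000_000, 0.03 * global_annual_turnover_eur)

# Above EUR 500 million in turnover, the 3% prong governs.
print(fine_ceiling_eur(2_000_000_000))  # 60000000.0
```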

Does adding human review remove high-risk classification?

No. Human oversight does not automatically reduce classification. If AI outputs strongly influence human decisions - if humans typically follow AI recommendations without independent analysis - the system materially influences outcomes and remains high-risk.

Can a system change from non-high-risk to high-risk?

Yes. Systems initially deployed for limited-risk applications may evolve toward high-risk use cases. Feature additions, expanded deployment contexts, and integration with other systems can cross classification thresholds. Organizations need ongoing classification monitoring, not one-time assessments.

When will the Commission provide classification guidance?

The European Commission committed to providing guidelines with practical examples of high-risk and non-high-risk AI systems by February 2026. However, organizations cannot wait - classification analysis must begin now to meet the August 2026 deadline.

Key Takeaways

Classification is not a one-time exercise. As AI systems evolve, deployment contexts change, and regulatory guidance emerges, organizations must maintain ongoing classification monitoring. Building this capability now - before the August 2026 deadline - positions organizations to navigate the EU AI Act requirements with confidence. When uncertain, err on the side of caution: the controls required for high-risk systems are also good governance practices.
