Updated: Jan 13, 2026

High-Risk AI System

An AI system subject to strict requirements under the EU AI Act due to its potential impact on health, safety, or fundamental rights.

Definition

A high-risk AI system is an artificial intelligence application that the EU AI Act identifies as posing significant potential risks to health, safety, or fundamental rights, and therefore subjects to comprehensive regulatory requirements. This classification triggers extensive obligations for providers and deployers, including technical documentation, conformity assessment, human oversight measures, and post-market monitoring.

The high-risk designation is the central regulatory concept in the EU AI Act, determining which AI systems face the most stringent compliance obligations. Systems classified as high-risk must comply with Articles 8 through 15, covering risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. Failure to comply with these requirements can result in fines of up to 15 million euros or 3% of worldwide annual turnover; the higher tier of 35 million euros or 7% is reserved for violations of the Article 5 prohibitions.
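For orientation, the requirement areas line up with the articles of Chapter III, Section 2 roughly as follows. This is a plain-data reference sketch in Python, not a substitute for reading the provisions themselves:

    # Requirement areas for high-risk AI systems, keyed by article number
    # (EU AI Act, Chapter III, Section 2). Reference sketch only.
    HIGH_RISK_REQUIREMENTS = {
        8:  "Compliance with the requirements",
        9:  "Risk management system",
        10: "Data and data governance",
        11: "Technical documentation",
        12: "Record-keeping",
        13: "Transparency and provision of information to deployers",
        14: "Human oversight",
        15: "Accuracy, robustness and cybersecurity",
    }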

There are two pathways to high-risk classification. First, Annex III lists eight areas of AI applications deemed inherently high-risk: biometric identification and categorization, critical infrastructure management, education and vocational training, employment and worker management, access to essential services and benefits, law enforcement, migration and border control, and administration of justice and democratic processes. Second, AI systems that serve as safety components of products covered by EU product safety legislation (listed in Annex I) are also classified as high-risk. Notably, Article 6(3) provides an exception: even if an AI system falls within an Annex III category, it may not be considered high-risk if it performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing human assessment, or performs only preparatory tasks for assessments. This exception never applies, however, where the system performs profiling of natural persons; such systems remain high-risk.
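The two pathways and the Article 6(3) filter can be read as a decision procedure. The following is a minimal Python sketch under simplifying assumptions: the field names, area labels, and exception flags are hypothetical shorthand, and an actual determination requires case-by-case legal analysis of the system's intended purpose:

    from dataclasses import dataclass, field

    # Hypothetical labels for the eight Annex III areas.
    ANNEX_III_AREAS = {
        "biometrics", "critical_infrastructure", "education",
        "employment", "essential_services", "law_enforcement",
        "migration_border_control", "administration_of_justice",
    }

    # Hypothetical flags for the Article 6(3) exception conditions.
    ART_6_3_EXCEPTIONS = {
        "narrow_procedural_task",
        "improves_prior_human_activity",
        "detects_patterns_without_replacing_human_assessment",
        "preparatory_task_only",
    }

    @dataclass
    class AISystem:
        annex_iii_area: str | None = None        # e.g. "employment"
        safety_component_annex_i: bool = False   # safety component of an Annex I product
        performs_profiling: bool = False         # profiling of natural persons
        exception_grounds: set = field(default_factory=set)

    def is_high_risk(system: AISystem) -> bool:
        """Rough sketch of the two classification pathways."""
        # Pathway 2: safety component of a product under Annex I legislation.
        if system.safety_component_annex_i:
            return True
        # Pathway 1: the system falls within a listed Annex III area.
        if system.annex_iii_area in ANNEX_III_AREAS:
            # Profiling of natural persons is always high-risk (Art. 6(3)).
            if system.performs_profiling:
                return True
            # Otherwise an Article 6(3) exception ground may remove the label.
            if system.exception_grounds & ART_6_3_EXCEPTIONS:
                return False
            return True
        return False

    # e.g. a CV-screening tool used in hiring, with no exception grounds:
    screener = AISystem(annex_iii_area="employment")
    assert is_high_risk(screener)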

Organizations must first determine whether their AI systems qualify as high-risk. This requires careful analysis of the system's intended purpose, not just its technical capabilities. A credit scoring model that influences lending decisions is clearly high-risk under Annex III(5)(b), while a customer service chatbot providing general information typically is not, unless it affects access to essential services.

Once classified as high-risk, organizations face substantial compliance obligations. Providers must establish risk management systems, implement data governance practices, create extensive Annex IV technical documentation, design for human oversight, ensure accuracy and robustness, and conduct conformity assessments before market placement. Deployers have separate obligations, including implementing human oversight measures, monitoring system operation, and conducting Fundamental Rights Impact Assessments in certain contexts.

The compliance timeline is critical: Annex III high-risk systems must comply by 2 August 2026, while Annex I systems (product safety components) have until 2 August 2027.
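As a planning aid, the role-based obligations and the two deadlines above can be captured in data. Another hedged Python sketch; the keys and phrasing are illustrative shorthand for the obligations named in this section, and the dates reflect the Act's transition periods:

    from datetime import date

    # Illustrative compliance planner: obligations by role, deadlines by pathway.
    OBLIGATIONS = {
        "provider": [
            "risk management system",
            "data governance practices",
            "Annex IV technical documentation",
            "human oversight by design",
            "accuracy and robustness",
            "conformity assessment before market placement",
        ],
        "deployer": [
            "human oversight measures",
            "monitoring of system operation",
            "fundamental rights impact assessment (where required)",
        ],
    }

    COMPLIANCE_DEADLINES = {
        "annex_iii": date(2026, 8, 2),  # standalone high-risk systems
        "annex_i": date(2027, 8, 2),    # safety components of regulated products
    }

    def deadline_for(pathway: str) -> date:
        """Return the compliance deadline for a classification pathway."""
        return COMPLIANCE_DEADLINES[pathway]

Keeping the two deadlines separate matters in practice: a product-embedded system under Annex I gains an extra year, while standalone Annex III systems do not.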