AI Compliance Glossary
Definitions of key terms in AI governance, EU AI Act compliance, and regulatory frameworks for compliance officers and practitioners.
Compliance
Compliance processes and documentation requirements
Evidence Pack
A comprehensive bundle of documentation, logs, and artifacts that demonstrates an AI system's compliance to auditors and regulators.
Model Card
A standardized document describing an AI model's intended use, performance, limitations, and ethical considerations.
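Model cards are often maintained as structured data so their completeness can be checked automatically. A minimal sketch in Python, assuming a hypothetical field set (the field names and the example card below are illustrative; real model card templates vary):

```python
REQUIRED_FIELDS = {  # minimal field set, an assumption; real templates vary
    "model_name", "intended_use", "performance",
    "limitations", "ethical_considerations",
}

def validate_model_card(card):
    """Return the set of required fields missing from a model card dict."""
    return REQUIRED_FIELDS - card.keys()

# A hypothetical model card for illustration:
model_card = {
    "model_name": "credit-risk-scorer",
    "intended_use": "Pre-screening of consumer credit applications",
    "performance": {"auc": 0.81, "evaluated_on": "holdout sample"},
    "limitations": ["Not validated for applicants outside the EU"],
    "ethical_considerations": ["Monitored for disparate impact"],
}
```

A check like this can run in CI so that an incomplete card blocks a model release.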
Post-Market Monitoring
Ongoing surveillance of AI system performance and compliance after deployment to identify and address issues.
System Card
Documentation describing a complete AI system including its architecture, components, data flows, and operational context.
EU AI Act
Key terms and concepts from the EU Artificial Intelligence Act
Annex III
The EU AI Act annex listing categories of high-risk AI systems subject to strict compliance requirements.
Annex IV
The EU AI Act annex specifying technical documentation requirements for high-risk AI systems.
CE Marking
A certification mark indicating that an AI system complies with EU health, safety, and environmental protection requirements.
Conformity Assessment
The process of evaluating whether an AI system meets all applicable EU AI Act requirements before market placement.
Deployer
A natural or legal person, public authority, or other body that uses an AI system under its authority, except where the system is used in the course of a personal, non-professional activity.
FRIA
Fundamental Rights Impact Assessment: an evaluation, required of certain deployers of high-risk AI systems, of how the system may affect fundamental rights.
High-Risk AI System
An AI system subject to strict requirements under the EU AI Act due to its potential impact on health, safety, or fundamental rights.
Notified Body
An organization designated by an EU member state to assess conformity of high-risk AI systems.
Provider
An entity that develops an AI system, or has one developed, and places it on the market or puts it into service under its own name or trademark.
AI Governance
Core AI governance concepts and frameworks
Technical
Technical concepts for AI system monitoring and control
Audit Trail
A chronological record of AI system activities, decisions, and human interactions that enables traceability and accountability.
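One common way to make an audit trail tamper-evident is to hash-chain its entries, so that altering any past record invalidates every later one. A minimal sketch in Python (the record fields and helper names are illustrative, not from any standard):

```python
import hashlib
import json
from datetime import datetime, timezone

def append_event(trail, actor, action, details):
    """Append a tamper-evident entry: each record stores the hash of the
    previous one, so a later modification breaks the chain."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "details": details,
        "prev_hash": prev_hash,
    }
    # Hash the entry body (everything except the hash itself).
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    trail.append(entry)
    return entry

def verify_chain(trail):
    """Recompute each hash and check linkage to detect tampering."""
    prev_hash = "0" * 64
    for entry in trail:
        if entry["prev_hash"] != prev_hash:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True
```

Production systems typically add append-only storage and signing on top, but the chaining idea is the same.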
Bias Detection
The process of identifying and measuring unfair or discriminatory patterns in AI system outputs or training data.
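A simple bias screen is the disparate impact ratio: the rate of favorable outcomes for an unprivileged group divided by the rate for a privileged group. A minimal sketch in Python, assuming binary outcomes (1 = favorable) and a single grouping attribute; the "four-fifths rule" threshold of 0.8 is a common screening convention, not a legal bright line:

```python
def disparate_impact_ratio(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged vs. privileged group.
    Values well below 1.0 suggest the unprivileged group is favored less
    often; under the four-fifths rule, a ratio below 0.8 warrants review."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    priv_rate = sum(priv) / len(priv)
    unpriv_rate = sum(unpriv) / len(unpriv)
    return unpriv_rate / priv_rate
```

Real bias audits look at several metrics at once (equalized odds, calibration by group, and so on), since no single ratio captures fairness.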
Drift Detection
Monitoring an AI system's input data and outputs over time to identify degradation or deviation from expected behavior.
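One widely used drift statistic is the Population Stability Index (PSI), which compares the distribution of a feature or score between a baseline sample and live traffic. A dependency-free sketch in Python; the interpretation thresholds in the docstring are a common rule of thumb, not a standard:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and a live sample of a numeric value.
    Common rule of thumb (an assumption; teams tune their own cutoffs):
    < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1  # bin index
        # Small epsilon avoids log(0) for empty bins.
        return [(c + 1e-6) / (len(sample) + 1e-6 * bins) for c in counts]

    e_frac, a_frac = fractions(expected), fractions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_frac, a_frac))
```

In practice the PSI is computed per feature and per model score on a schedule, and a breach of the chosen threshold triggers review or retraining.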
Explainability
The ability to understand and communicate how an AI system reaches its outputs or decisions.
Guardrails
Technical constraints and policy controls that prevent AI systems from producing harmful or non-compliant outputs.
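At their simplest, output guardrails are a policy check applied to model output before it reaches the user. A minimal sketch in Python using regex rules (the patterns and function name are illustrative; production guardrails typically combine such filters with model-based classifiers):

```python
import re

# Illustrative policy rules (assumptions, not from any standard):
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # SSN-like identifier (PII)
    re.compile(r"(?i)\bguaranteed returns\b"),  # non-compliant financial claim
]

def apply_guardrails(text):
    """Return (allowed, reasons): block output matching any policy rule."""
    reasons = [p.pattern for p in BLOCKED_PATTERNS if p.search(text)]
    return (len(reasons) == 0, reasons)
```

Blocked outputs are usually logged to the audit trail rather than silently dropped, so the policy's behavior itself remains reviewable.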
Ready to implement AI governance?
See how KLA Digital helps regulated organizations operationalize these concepts with policy checkpoints, audit trails, and evidence exports.
