Updated: Jan 13, 2026

Drift Detection

Monitoring AI system performance over time to identify degradation or deviation from expected behavior.

Definition

Drift detection is the practice of continuously monitoring AI systems in production to identify when their behavior deviates from expected baselines. AI systems can degrade over time as the real-world data they encounter diverges from their training data, as user behavior evolves, or as the underlying relationships they model shift. Without drift detection, organizations may not realize their AI systems are producing unreliable or biased outputs until significant harm has occurred.

Article 72 of the EU AI Act requires providers of high-risk AI systems to establish post-market monitoring systems that actively collect, document, and analyze data on system performance throughout the system's lifetime. Drift detection is a core technical capability for meeting this requirement: you cannot know whether a system continues to perform as intended without mechanisms to detect when it does not. The regulation recognizes that AI systems are not static. Unlike traditional software, whose correctness does not depend on how input patterns shift, an AI system's real-world performance can degrade as production data diverges from the conditions under which it was validated. Conformity assessment at the point of market placement therefore does not guarantee ongoing compliance; organizations must demonstrate continuous monitoring throughout the AI system lifecycle.

Effective drift detection addresses multiple forms of deviation:

Data drift: the statistical distribution of input data changes from what the model was trained on.
Concept drift: the relationship between inputs and the correct outputs changes over time.
Model performance drift: accuracy, precision, recall, or other performance metrics degrade.
Prediction drift: the distribution of model outputs changes, even if accuracy metrics remain stable.
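
A minimal sketch of how two of these signals might be checked separately, using SciPy's two-sample Kolmogorov-Smirnov test. The baseline arrays stand in for samples saved during model validation, the "live" arrays for a recent production window, and the synthetic data and significance level are purely illustrative assumptions, not values drawn from the regulation or from any specific tool.

```python
import numpy as np
from scipy.stats import ks_2samp

# Baseline samples captured during model validation (hypothetical data).
baseline_feature = np.random.normal(loc=0.0, scale=1.0, size=5_000)
baseline_scores = np.random.beta(a=2, b=5, size=5_000)

# Samples collected from the production system over the most recent window.
live_feature = np.random.normal(loc=0.4, scale=1.1, size=5_000)  # inputs have shifted
live_scores = np.random.beta(a=2, b=5, size=5_000)               # outputs look stable

ALPHA = 0.01  # illustrative significance level for the two-sample KS test

data_drift = ks_2samp(baseline_feature, live_feature)
prediction_drift = ks_2samp(baseline_scores, live_scores)

print(f"data drift:       statistic={data_drift.statistic:.3f}, p={data_drift.pvalue:.4f}")
print(f"prediction drift: statistic={prediction_drift.statistic:.3f}, p={prediction_drift.pvalue:.4f}")

if data_drift.pvalue < ALPHA:
    print("Input distribution has shifted relative to the validation baseline.")
if prediction_drift.pvalue < ALPHA:
    print("Output distribution has shifted relative to the validation baseline.")
```

Checking output distributions in this way is useful because prediction drift can be observed immediately, whereas performance drift can only be confirmed once ground-truth labels become available.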

Drift detection requires establishing baselines during model validation, implementing statistical monitoring in production (using techniques such as population stability index, Kolmogorov-Smirnov tests, or distribution divergence measures), setting appropriate alert thresholds, and creating response procedures for when drift is detected.
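
As one concrete example of the statistical monitoring step, the sketch below computes a population stability index against a stored baseline and maps it to an alert. The 0.1 and 0.25 cut-offs are commonly cited rules of thumb rather than thresholds prescribed by the EU AI Act, and the function name, variable names, and sample data are hypothetical.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Compare how a feature (or model score) is distributed now versus at validation time.

    Bin edges come from baseline quantiles so both samples are bucketed consistently;
    a small epsilon keeps the logarithm finite when a bin is empty.
    """
    edges = np.quantile(baseline, np.linspace(0.0, 1.0, bins + 1))
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(np.clip(current, edges[0], edges[-1]), bins=edges)

    eps = 1e-6
    base_pct = np.clip(base_counts / base_counts.sum(), eps, None)
    curr_pct = np.clip(curr_counts / curr_counts.sum(), eps, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Hypothetical monitoring run: baseline saved at validation, current scores from production.
baseline_scores = np.random.beta(a=2, b=5, size=10_000)
current_scores = np.random.beta(a=2.5, b=4, size=10_000)  # mildly shifted outputs

psi = population_stability_index(baseline_scores, current_scores)
if psi >= 0.25:
    print(f"PSI={psi:.3f}: significant drift, trigger the documented response procedure")
elif psi >= 0.1:
    print(f"PSI={psi:.3f}: moderate drift, investigate before performance degrades")
else:
    print(f"PSI={psi:.3f}: distribution is stable relative to the validation baseline")
```

In practice the alert thresholds, monitoring windows, and escalation paths should be set per system and documented alongside the validation baselines, so that detected drift feeds directly into the post-market monitoring records the regulation expects.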