Post-Market Monitoring
Ongoing surveillance of AI system performance and compliance after deployment to identify and address issues.
Definition
Post-market monitoring is the systematic and continuous process of collecting, analyzing, and acting upon data about an AI system's performance, behavior, and impacts after it has been deployed into production. This ongoing surveillance enables organizations to detect degradation, identify emerging risks, respond to incidents, and maintain compliance throughout the system's operational lifecycle.
Article 72 of the EU AI Act requires providers of high-risk AI systems to establish a post-market monitoring system that is proportionate to the nature of the AI technology and the risks involved. This is not optional post-deployment observation but a mandatory compliance obligation with specific requirements for documentation and action.
The regulation recognizes that AI systems behave differently in production than in controlled testing environments. Real-world data distributions shift over time, user behaviors evolve, and edge cases emerge that were not anticipated during development. Post-market monitoring provides the feedback loop necessary to detect these changes and respond before they result in harm or non-compliance.
Crucially, post-market monitoring connects to the serious incident reporting obligations under Article 73. When monitoring reveals incidents or malfunctions that breach obligations intended to protect fundamental rights, or that pose serious risks to health, safety, or fundamental rights, providers must report to competent authorities within specified timeframes. Effective monitoring is therefore the foundation for timely incident detection and regulatory reporting.
Providers must document their post-market monitoring approach in a formal plan that specifies what data will be collected, how it will be analyzed, what thresholds trigger action, and who is responsible for review and response. This plan becomes part of the technical documentation required under Annex IV and must be updated as the system and its operational context evolve.
Key monitoring dimensions include: performance metrics tracking accuracy, reliability, and consistency over time; drift detection identifying changes in input data distributions or model behavior; bias monitoring assessing fairness metrics across protected groups; incident tracking capturing errors, failures, and user complaints; and usage monitoring comparing how the system is actually used against its intended use.
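To make the drift-detection dimension concrete, the following is a minimal sketch rather than a prescribed method: it compares a production feature sample against a training-time reference sample using a two-sample Kolmogorov-Smirnov test. The feature values, sample sizes, and significance level are illustrative assumptions, not values drawn from the Act.

```python
# Minimal drift-detection sketch: compares production feature values against a
# training-time reference sample using a two-sample Kolmogorov-Smirnov test.
# The sample data and the 0.05 significance level are illustrative
# assumptions, not values prescribed by the EU AI Act.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, production: np.ndarray,
                 alpha: float = 0.05) -> dict:
    """Flag drift when the KS test rejects the hypothesis that the two
    samples come from the same distribution."""
    result = ks_2samp(reference, production)
    return {
        "ks_statistic": result.statistic,
        "p_value": result.pvalue,
        "drift_detected": result.pvalue < alpha,
    }

# Simulated example: one input feature shifts between training and production.
rng = np.random.default_rng(seed=42)
reference_sample = rng.normal(loc=0.0, scale=1.0, size=5000)   # training-time data
production_sample = rng.normal(loc=0.4, scale=1.0, size=5000)  # shifted in production

print(detect_drift(reference_sample, production_sample))
# Expected: drift_detected is True for a shift of this size.
```

In practice, the monitoring plan would define which features are tested, at what cadence, and which statistic and threshold constitute actionable drift.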
Organizations should establish clear escalation procedures linking monitoring findings to remediation actions. Minor performance degradation might trigger model retraining, while significant bias or safety issues could require system suspension pending investigation. Monitoring evidence, including the data collected, analyses performed, and actions taken, must be retained and made available to regulators upon request.
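As an illustration of such escalation logic, here is a hedged sketch that maps monitoring findings to graduated actions. The metric names, thresholds, and actions are hypothetical assumptions for demonstration; real values belong in the provider's documented monitoring plan.

```python
# Illustrative escalation sketch: maps monitoring findings to graduated
# remediation actions. All thresholds, metric names, and actions are
# hypothetical; real values belong in the provider's documented
# post-market monitoring plan.
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    NO_ACTION = "no_action"
    SCHEDULE_RETRAINING = "schedule_retraining"
    SUSPEND_AND_INVESTIGATE = "suspend_and_investigate"

@dataclass
class MonitoringFinding:
    accuracy_drop: float    # absolute drop versus the documented baseline
    fairness_gap: float     # e.g., largest metric gap across protected groups
    serious_incident: bool  # candidate for Article 73 reporting

def escalate(finding: MonitoringFinding) -> Action:
    # Safety and fundamental-rights signals are checked before performance,
    # since they warrant suspension rather than retraining.
    if finding.serious_incident or finding.fairness_gap > 0.10:
        return Action.SUSPEND_AND_INVESTIGATE
    if finding.accuracy_drop > 0.05:
        return Action.SCHEDULE_RETRAINING
    return Action.NO_ACTION

# Example: a 7-point accuracy drop with no fairness or safety signal.
finding = MonitoringFinding(accuracy_drop=0.07, fairness_gap=0.02,
                            serious_incident=False)
print(escalate(finding))  # Action.SCHEDULE_RETRAINING
```

Evaluating safety and fundamental-rights signals before performance signals mirrors the escalation principle above: significant bias or safety issues warrant suspension pending investigation, while minor degradation triggers retraining.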
Related Terms
Drift Detection
Monitoring AI system performance over time to identify degradation or deviation from expected behavior.
Audit Trail
A chronological record of AI system activities, decisions, and human interactions that enables traceability and accountability.
High-Risk AI System
An AI system subject to strict requirements under the EU AI Act due to its potential impact on health, safety, or fundamental rights.
