Post-market Monitoring Plan Template (Article 72 compliant)
Download an Article 72 compliant post-market monitoring plan template covering system identification, monitoring objectives, data collection, performance metrics, alerting, review procedures, incident response, and documentation.
Generate a reviewable post-market monitoring plan in 30-60 minutes.
For compliance, risk, product, and ML ops teams shipping agentic workflows into regulated environments.
Last updated: Dec 16, 2025 · Version v1.0 · Fictional sample. Not legal advice.
Report an issue: /contact
What this artifact is (and when you need it)
Minimum viable explanation, written for audits, not for theory.
Article 72 of the EU AI Act requires providers of high-risk AI systems to establish and document a post-market monitoring system. This is not optional: it is a compliance requirement with specific mandates for how you monitor AI systems after deployment.
This template provides a structured 8-section approach covering system identification, monitoring objectives, data collection, performance metrics, alerting, review procedures, incident response, and documentation.
You need it when
- You are deploying an agent into a regulated workflow (credit, claims, KYC/AML, HR).
- You need to prove ongoing quality, safety, and policy compliance after go-live.
- You are preparing an Annex IV dossier or an audit readiness review.
Common failure mode
A plan that lists metrics but has no thresholds, no named owners, no alert escalation procedures, and no incident response workflow tied to exportable evidence.
What good looks like
Acceptance criteria reviewers actually check.
- System identification links to regulatory classification and Annex IV documentation.
- Monitoring objectives connect to identified risks with clear success criteria.
- Data collection covers all sources with quality requirements and privacy considerations.
- Performance metrics include technical, drift, and fairness thresholds with severity levels.
- Alerting defines severity levels, notification matrix, and escalation procedures.
- Review procedures include continuous monitoring, sampling, and periodic governance reviews.
- Incident response covers definition, immediate response, investigation, and Article 73 reporting.
- Documentation and evidence storage ensures tamper-proof retention with audit readiness.
Template preview
A real excerpt in HTML so it's indexable and reviewable.
## Section 4: Performance Metrics

### 4.1 Technical Performance Metrics

| Metric | Definition | Threshold | Alert Level |
|--------|------------|-----------|-------------|
| [Accuracy] | [% correct predictions] | [>95%] | [Critical if <90%] |

### 4.2 Drift Metrics

| Metric | Calculation Method | Threshold | Check Frequency |
|--------|--------------------|-----------|-----------------|
| [Feature drift] | [PSI or KL divergence] | [PSI <0.1] | [Daily] |

## Section 5: Alerting and Escalation

### 5.1 Alert Severity Levels

| Level | Criteria | Response Time |
|-------|----------|---------------|
| Critical | [Immediate risk, compliance violation] | [15 minutes] |
| High | [Significant degradation] | [4 hours] |
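The PSI drift check in Section 4.2 can be sketched in a few lines. This is a minimal illustration, not part of the template: the bin proportions and the 0.1 threshold are sample values.

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions
    (lists of bin proportions that each sum to 1)."""
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, eps)  # guard against log(0) on empty bins
        a = max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]  # bin proportions at training time
current = [0.30, 0.24, 0.24, 0.22]   # bin proportions in production

score = psi(baseline, current)
drift_alert = score >= 0.1  # sample threshold from Section 4.2
```

In practice the baseline comes from the training or validation set recorded in the plan, and the check runs at the frequency stated in the drift table (daily in the sample row).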
How to fill it in (fast)
Inputs you need, time to complete, and a miniature worked example.
Inputs you need
- System identification with regulatory classification and Annex IV references.
- Monitoring objectives connected to risk register with success criteria.
- Data sources, collection frequency, quality requirements, and privacy considerations.
- Performance thresholds (technical, drift, fairness) with severity levels.
- Alerting configuration with escalation matrix and after-hours coverage.
- Review procedures (continuous, sampling, periodic) with governance integration.
- Incident response procedures including Article 73 regulatory reporting.
Time to complete: 30-60 minutes for a defensible v1.
Mini example: alert severity
Alert Severity Levels:

| Level | Criteria | Response Time |
|-------|----------|---------------|
| Critical | System down, accuracy <90%, discriminatory outcome | 15 minutes |
| High | Accuracy <95%, drift threshold exceeded | 4 hours |
| Medium | Approaching threshold, unusual pattern | 24 hours |
| Low | Minor metric movement, informational | Next business day |
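The severity ladder above is easy to encode so that every metric reading routes to a level and a response deadline. A minimal sketch, using the sample accuracy thresholds; the warning margin for "approaching threshold" is an illustrative assumption.

```python
# Response times mirror the sample table above.
RESPONSE_TIME = {
    "critical": "15 minutes",
    "high": "4 hours",
    "medium": "24 hours",
    "low": "next business day",
}

def accuracy_severity(accuracy, warn_margin=0.01):
    """Classify an accuracy reading against the sample thresholds."""
    if accuracy < 0.90:
        return "critical"  # sample critical threshold: accuracy <90%
    if accuracy < 0.95:
        return "high"      # sample high threshold: accuracy <95%
    if accuracy < 0.95 + warn_margin:
        return "medium"    # approaching threshold (assumed margin)
    return "low"

level = accuracy_severity(0.93)
deadline = RESPONSE_TIME[level]
```

Keeping thresholds in one place like this makes the plan auditable: a reviewer can diff the code against the table in Section 5.1.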
How KLA generates it (Govern / Measure / Prove)
Tie the artifact to product primitives so it converts.
Govern
- Policy-as-code checkpoints that block or require review for high-risk actions.
- Versioned change control for model/prompt/policy/workflow updates.
Measure
- Risk-tiered sampling reviews (baseline + burst during incidents or after changes).
- Near-miss tracking (blocked / nearly blocked steps) as a measurable control signal.
Prove
- Hash-chained, append-only audit ledger with 7+ year retention language where required.
- Evidence Room export bundles (manifest + checksums) so auditors can verify independently.
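The hash-chained ledger above can be sketched as follows. This is an illustrative assumption of how such a chain works, not KLA's actual schema or API: field names and the all-zero genesis hash are made up for the example.

```python
import hashlib
import json

def append_entry(ledger, payload):
    """Append a payload, chaining it to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64  # genesis
    body = {"prev_hash": prev_hash, "payload": payload}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(body)
    return body

def verify(ledger):
    """Recompute every hash; any edit or deletion breaks the chain."""
    prev = "0" * 64
    for entry in ledger:
        body = {"prev_hash": entry["prev_hash"], "payload": entry["payload"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

ledger = []
append_entry(ledger, {"event": "alert_raised", "severity": "high"})
append_entry(ledger, {"event": "review_completed"})
```

Because each entry commits to its predecessor's hash, an auditor holding the export bundle can re-verify the whole chain independently with nothing but the checksums.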
FAQs
Written to win snippet-style answers.
