Last updated: Dec 15, 2025
Article 5 prohibited AI practices (operational checklist)
A fast filter for “stop-ship” risks, how to remove them, and what evidence to keep.
Orientation only. Not legal advice.
Who this matters for
Product, engineering, and compliance teams shipping AI features into the EU market.
What you’ll leave with
A practical checklist to find prohibited patterns and document remediation.
Fast “stop-ship” filter (orientation)
- Does the system use subliminal, purposefully manipulative, or deceptive techniques that materially distort behavior in a way likely to cause significant harm?
- Does it exploit vulnerabilities due to age, disability, or social or economic situation (e.g., children) in a way that materially distorts behavior and could cause significant harm?
- Does it perform biometric categorization that infers sensitive attributes, or emotion recognition in workplace or education settings?
- Does it enable real-time remote biometric identification in publicly accessible spaces without the narrow exceptions and safeguards the Act requires?
- If any answer is “maybe”: escalate and document the decision path (a minimal gate sketch follows this list).
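To make the filter fail closed instead of relying on reviewer memory, the answers can be captured as data and reduced to a single verdict. A minimal Python sketch, assuming a three-valued answer per question; the question keys and function names are illustrative, not taken from the Act:

```python
from enum import Enum

class Answer(Enum):
    NO = "no"
    MAYBE = "maybe"
    YES = "yes"

# Illustrative question keys; map them to your own review form.
STOP_SHIP_QUESTIONS = [
    "manipulative_techniques",
    "exploits_vulnerable_groups",
    "sensitive_biometric_or_emotion",
    "biometric_identification_without_constraints",
]

def stop_ship_verdict(answers: dict[str, Answer]) -> str:
    """Reduce per-question answers to a release verdict.

    Fail closed: an unanswered question counts as MAYBE.
    """
    worst = max(
        (answers.get(q, Answer.MAYBE) for q in STOP_SHIP_QUESTIONS),
        key=[Answer.NO, Answer.MAYBE, Answer.YES].index,
    )
    if worst is Answer.YES:
        return "block: remove or redesign before release"
    if worst is Answer.MAYBE:
        return "escalate: document the decision path"
    return "proceed: keep the review record"
```

Treating an unanswered question as “maybe” is the design choice that makes this fail closed: silence escalates instead of passing.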
Operational remediation steps
- Write a prohibited-use policy and encode it as a “fail-closed” gate in CI/runtime (see the CI gate sketch after this list).
- Add unit tests for prohibited conditions so they cannot regress silently (see the test sketch after this list).
- Document the feature removal/redesign with approvals and release notes.
- Retain evidence of the change: tickets, PRs, policy version bumps, deployment logs.
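For the CI gate in the first step, one pattern is a release check that blocks the build unless every feature in the inventory has a current prohibited-use review on record. A minimal sketch, assuming reviews are exported to a reviews.json file; the file name, schema, and policy version string are all illustrative:

```python
#!/usr/bin/env python3
"""Fail-closed release gate: block the build unless every shipped
feature carries an explicit, current prohibited-use review.

Illustrative schema -- adapt the file name and fields to your repo:
reviews.json: {"features": [{"name": ..., "verdict": ...,
                             "policy_version": ...}]}
"""
import json
import sys

REQUIRED_VERDICT = "proceed"
CURRENT_POLICY = "prohibited-use-policy v3"  # bump on every policy change

def main() -> int:
    try:
        with open("reviews.json") as f:
            features = json.load(f)["features"]
    except (OSError, KeyError, json.JSONDecodeError) as exc:
        # Fail closed: a missing or malformed record blocks the release.
        print(f"GATE FAIL: cannot read review records ({exc})")
        return 1

    failures = [
        feat.get("name", "<unnamed>")
        for feat in features
        if feat.get("verdict") != REQUIRED_VERDICT
        or feat.get("policy_version") != CURRENT_POLICY
    ]
    if failures:
        print("GATE FAIL: missing or stale review:", ", ".join(failures))
        return 1
    print("GATE PASS: all features reviewed against", CURRENT_POLICY)
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Wiring this into CI as a required check means a policy version bump automatically invalidates every prior review, which is exactly the regression behavior the checklist asks for.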
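For the regression tests in the second step, a few pytest cases pinned to the verdict function from the earlier sketch keep prohibited conditions from slipping back in. This assumes that sketch lives in a stop_ship.py module; all names are illustrative:

```python
# test_stop_ship.py -- run with: pytest test_stop_ship.py
from stop_ship import STOP_SHIP_QUESTIONS, Answer, stop_ship_verdict

def all_no() -> dict[str, Answer]:
    # A fully reviewed feature with no prohibited patterns found.
    return {q: Answer.NO for q in STOP_SHIP_QUESTIONS}

def test_clean_feature_proceeds():
    assert stop_ship_verdict(all_no()).startswith("proceed")

def test_any_yes_blocks():
    # Any single prohibited pattern must block the release outright.
    for q in STOP_SHIP_QUESTIONS:
        answers = all_no()
        answers[q] = Answer.YES
        assert stop_ship_verdict(answers).startswith("block")

def test_unanswered_question_escalates():
    # Gaps in the review must escalate, never pass silently.
    answers = all_no()
    del answers["manipulative_techniques"]
    assert stop_ship_verdict(answers).startswith("escalate")
```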
Evidence artifacts to keep
- Feature inventory and risk review notes
- Policy-as-code pack and change history
- Approval records and decision rationale
- Release artifacts proving the prohibited path is removed (a manifest sketch follows this list)
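One way to keep those artifacts forwardable is a machine-readable manifest per remediation, committed alongside the code so the evidence trail survives team turnover. A minimal sketch; every field name and value below is hypothetical:

```python
# Illustrative evidence manifest -- one record per remediation.
from dataclasses import asdict, dataclass, field
import json

@dataclass
class EvidenceRecord:
    feature: str
    risk_review: str                # link or ID of the review notes
    policy_version: str             # policy-as-code version in force
    approvals: list[str] = field(default_factory=list)
    tickets: list[str] = field(default_factory=list)
    pull_requests: list[str] = field(default_factory=list)
    release_artifact: str = ""      # build/deploy log proving removal

record = EvidenceRecord(
    feature="example-recommender",  # hypothetical feature name
    risk_review="RISK-1234",
    policy_version="prohibited-use-policy v3",
    approvals=["compliance-lead", "eng-director"],
    tickets=["PROJ-567"],
    pull_requests=["#891"],
    release_artifact="deploy/2025-12-01.log",
)
print(json.dumps(asdict(record), indent=2))
```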
Next step: artifacts
Compliance work gets funded when the output is forwardable. Use the starter templates to convert obligations into controls and evidence.
Govern · Measure · Prove
Need a defensible evidence path?
KLA Digital turns obligations into controls, controls into measurements, and measurements into exportable evidence.
