Provider
An entity that develops an AI system, or has one developed, and places it on the market or puts it into service under its own name or trademark.
Definition
A provider under the EU AI Act is any natural or legal person, public authority, agency, or other body that develops an AI system or has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge. Providers bear the most comprehensive compliance obligations under the regulation, as they are responsible for ensuring the AI system meets all applicable requirements from design through deployment.
The provider role carries the heaviest regulatory burden under the EU AI Act. Articles 8 through 25 establish extensive obligations for providers of high-risk AI systems, including implementing risk management systems, ensuring data governance, creating technical documentation, designing for human oversight, achieving appropriate levels of accuracy and robustness, conducting conformity assessments, and maintaining post-market monitoring.

Many organizations become providers without fully recognizing it. The definition encompasses not only companies that train AI models from scratch but also those that substantially modify existing AI systems, integrate AI components into products or services under their own branding, or commission custom AI development from third parties. Using a foundation model from OpenAI, Anthropic, or another provider does not exempt an organization from provider status if it builds an application layer on top and offers it to users.

Understanding the distinction between provider and deployer is critical for compliance planning. Providers create and are responsible for the AI system itself; deployers use AI systems created by others under their own authority. The same organization can be a provider for some AI systems and a deployer for others, and the obligations differ accordingly.
Organizations must first conduct an honest assessment of their provider status across all AI systems they develop, customize, or offer. Key questions include: Are you offering AI capabilities to others under your name or brand? Have you substantially modified a third-party AI system? Are you commissioning AI development that you will then deploy or distribute?
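One way to make this self-assessment repeatable is to capture the key questions as a structured checklist applied to every AI system in the portfolio. The sketch below is purely illustrative, not legal advice: the class name, field names, and decision rule are assumptions introduced here, not terms from the Act, and a "yes" answer should trigger legal review rather than a definitive classification.

```python
from dataclasses import dataclass

@dataclass
class ProviderStatusCheck:
    """Illustrative provider-status self-assessment for one AI system (not legal advice)."""
    system_name: str
    offered_under_own_brand: bool            # AI capabilities offered to others under your name or brand?
    substantially_modified_third_party: bool  # substantial modification of a third-party AI system?
    commissioned_custom_development: bool     # commissioned development you will deploy or distribute?

    def likely_provider(self) -> bool:
        # Any "yes" answer suggests provider status and warrants legal review.
        return (self.offered_under_own_brand
                or self.substantially_modified_third_party
                or self.commissioned_custom_development)

# Example: an application layer built on a third-party foundation model,
# offered to users under the organization's own brand.
check = ProviderStatusCheck(
    system_name="customer-support-assistant",
    offered_under_own_brand=True,
    substantially_modified_third_party=False,
    commissioned_custom_development=False,
)
print(check.system_name, "-> likely provider:", check.likely_provider())
```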
Provider obligations are extensive and resource-intensive. A risk management system must be established and maintained throughout the AI system lifecycle. Technical documentation per Annex IV requires detailed records of design decisions, training data, testing results, and operational parameters. A quality management system must ensure ongoing compliance. Human oversight mechanisms must be designed into the system. Conformity assessment, whether self-conducted or through a notified body, must be completed before market placement. Post-market monitoring and incident reporting processes must be operational.

For organizations that provide high-risk AI systems, compliance cannot be an afterthought. The requirements demand that governance practices be integrated into the AI development lifecycle from inception: documentation must be created contemporaneously with development, not reconstructed later; testing must be comprehensive and recorded; and human oversight must be architected into the system design from the start.
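Teams tracking these obligations across multiple systems sometimes carry them as a per-system checklist that can be reviewed before market placement. The sketch below is an assumed internal tracking structure: the obligation keys paraphrase the items listed above and are not official Annex headings or an exhaustive statement of the law.

```python
# Illustrative tracker for the high-risk provider obligations described above.
# Keys paraphrase this section; they are not official headings from the AI Act.
HIGH_RISK_PROVIDER_OBLIGATIONS = [
    "risk_management_system",
    "data_governance",
    "technical_documentation_annex_iv",
    "quality_management_system",
    "human_oversight_design",
    "accuracy_and_robustness_testing",
    "conformity_assessment",
    "post_market_monitoring",
    "incident_reporting",
]

def open_obligations(status: dict) -> list:
    """Return the obligations not yet marked complete for one high-risk system."""
    return [o for o in HIGH_RISK_PROVIDER_OBLIGATIONS if not status.get(o, False)]

# Example: conformity assessment and post-market monitoring still outstanding.
status = {o: True for o in HIGH_RISK_PROVIDER_OBLIGATIONS}
status["conformity_assessment"] = False
status["post_market_monitoring"] = False
print(open_obligations(status))
```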
Related Terms
Deployer
An organization that uses an AI system under its authority, except for personal non-professional use.
High-Risk AI System
An AI system subject to strict requirements under the EU AI Act due to its potential impact on health, safety, or fundamental rights.
Conformity Assessment
The process of evaluating whether an AI system meets all applicable EU AI Act requirements before market placement.
