Deployer
A person or organization that uses an AI system under its authority, except for purely personal, non-professional use.
Definition
A deployer under the EU AI Act is any natural or legal person, public authority, agency, or other body that uses an AI system under its authority, excluding use for purely personal, non-professional purposes. This definition is broader than many organizations realize: if you are using a third-party AI tool in your business operations, you are likely a deployer with specific legal obligations, even though you did not build the system.
The EU AI Act establishes distinct obligations for deployers separate from those imposed on providers (the entities that develop and market AI systems). While provider obligations are more extensive, deployer obligations are far from trivial. Article 26 sets out deployer responsibilities including implementing human oversight measures, monitoring system operation, keeping logs, conducting Fundamental Rights Impact Assessments for certain use cases, and informing affected individuals about AI system use. Many organizations focus exclusively on provider obligations without recognizing their deployer responsibilities. A bank using a third-party AI tool for credit scoring is a deployer. A hospital using an AI diagnostic assistant is a deployer. An HR department using an AI screening tool for job applicants is a deployer. Each has compliance obligations regardless of whether they built the underlying AI.
Article 26 requires deployers of high-risk AI systems to: use the system in accordance with its instructions for use; ensure human oversight by persons with appropriate competence and authority; monitor the AI system's operation and inform the provider of any serious incidents; keep logs automatically generated by the system for at least six months; conduct a Fundamental Rights Impact Assessment before deploying certain high-risk systems in public sector or essential services contexts; and inform natural persons that they are subject to decisions made by a high-risk AI system.
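The log-retention obligation above lends itself to a simple operational rule: automatically generated logs must not be purged before the minimum retention period has elapsed. The sketch below illustrates that rule only; the function name and the 183-day approximation of "at least six months" are assumptions for illustration, not anything prescribed by the Act.

```python
from datetime import date, timedelta

# Illustrative sketch: Article 26 requires deployers of high-risk AI
# systems to keep automatically generated logs for at least six months.
# "Six months" is approximated here as 183 days; real retention policy
# should be set with legal advice, since sectoral law may require longer.
MIN_RETENTION = timedelta(days=183)

def may_purge_log(log_created: date, today: date) -> bool:
    """A log entry may be deleted only after the minimum retention period."""
    return today - log_created >= MIN_RETENTION
```

For example, a log created on 1 January may be purged at year end, but a log created two months ago must still be retained.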
Critically, a deployer becomes a provider if they put their name or trademark on a high-risk AI system, make a substantial modification to it, or modify its intended purpose. Organizations that customize or fine-tune third-party AI systems should carefully assess whether these modifications cross the threshold into provider status.
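The role-reclassification rule above is disjunctive: any one of the three triggers is enough to shift a deployer into provider status. A minimal sketch of that decision logic, with hypothetical names (in practice, "substantial modification" is a legal judgment requiring case-by-case assessment, not a simple boolean):

```python
def becomes_provider(rebrands_system: bool,
                     substantial_modification: bool,
                     changes_intended_purpose: bool) -> bool:
    """Deployer is treated as a provider if ANY trigger applies.

    Triggers (per the rule described in the text):
      - puts its name or trademark on a high-risk AI system,
      - makes a substantial modification to it,
      - modifies its intended purpose.
    """
    return rebrands_system or substantial_modification or changes_intended_purpose
```

An organization fine-tuning a third-party model would evaluate the second trigger; merely using the system under its vendor's branding, without modification, leaves deployer status unchanged.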
Related Terms
Provider
An entity that develops or has an AI system developed and places it on the market or puts it into service.
High-Risk AI System
An AI system subject to strict requirements under the EU AI Act due to its potential impact on health, safety, or fundamental rights.
Human Oversight
Mechanisms ensuring humans can monitor, intervene in, and override AI system operations when necessary.
