Why the EU AI Act Applies to You
Even if your HQ is not in the EU
The most dangerous assumption in EU AI Act compliance is the one that organizations never examine closely enough to challenge: that the regulation only applies to companies headquartered in the EU.
This assumption is wrong. It is wrong in a way that could expose organizations to significant liability, and it is surprisingly easy to fall into, because the extraterritorial scope of the EU AI Act is not obvious from the regulation's opening articles. You have to read the Act carefully to understand it, and most organizations have not.
This is the second article in our ten-part series on AI governance. If you have not yet read The Fragmented World of AI Governance, that article provides the necessary context for understanding why the frameworks exist and how they relate to the EU AI Act. This article focuses specifically on who the EU AI Act applies to, and the answer is broader than many legal teams expect.
The Extraterritorial Architecture
The EU AI Act applies to three categories of organization, not two.
Providers are organizations that develop AI systems and place them on the market or put them into service in the EU, regardless of where those organizations are established. If you build an AI system anywhere in the world and you sell or provide that system to organizations or individuals in the EU, you are a provider under the EU AI Act. You bear the full set of provider obligations, including the requirements for high-risk AI systems (Articles 9–17) and, if you provide a general-purpose AI model, the GPAI model obligations (Articles 53–55, with the Article 55 obligations applying only above the systemic-risk threshold).
Deployers are organizations or individuals that use AI systems under their authority within the EU. If your company has operations in the EU (sales offices, distribution centers, subsidiaries, customer relationships) and you are using AI systems in those operations, you are likely a deployer. Deployer obligations are distinct from provider obligations, and they include requirements around human oversight, input data relevance, log retention, and incident reporting.
Importers and distributors complete the chain. Importers bring AI systems from outside the EU into the EU market. Distributors make AI systems available in the EU market. Both categories carry specific obligations.
The key point for organizations outside the EU is this: the place where your headquarters sits is not the determining factor. The determining factor is where you place AI systems on the market or where you deploy AI systems in the EU. An American company selling AI-powered software to European customers is a provider. A British company running an AI system in its German operations is a deployer. Neither can claim exemption based on the location of their registered office.
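The role test described above can be sketched as a simple decision rule. The class and function names below are illustrative, not terms from the Act itself; the point the sketch makes is that headquarters location never appears as an input.

```python
from dataclasses import dataclass

@dataclass
class AISystemActivity:
    """Illustrative record of how one organization touches one AI system."""
    develops_system: bool        # builds (or substantially modifies) the system
    places_on_eu_market: bool    # sells or provides it to EU customers
    uses_in_eu_operations: bool  # runs it in EU offices, subsidiaries, etc.
    imports_into_eu: bool        # brings it into the EU from outside
    distributes_in_eu: bool      # makes it available on the EU market

def roles_under_ai_act(a: AISystemActivity) -> set[str]:
    """Sketch of the role determination: note that HQ location is not a factor."""
    roles: set[str] = set()
    if a.develops_system and a.places_on_eu_market:
        roles.add("provider")
    if a.uses_in_eu_operations:
        roles.add("deployer")
    if a.imports_into_eu:
        roles.add("importer")
    if a.distributes_in_eu:
        roles.add("distributor")
    return roles

# A US company selling AI software to EU customers and also running it
# in its own German operations holds two roles at once:
print(roles_under_ai_act(AISystemActivity(True, True, True, False, False)))
```

The same system can generate several roles simultaneously, which is why the cataloguing exercise discussed below has to be done per system, not per company.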
The Provider/Deployer Distinction Matters Enormously
Most organizations that touch AI are simultaneously providers and deployers, and the distinction determines which obligations apply to which systems.
Provider obligations are the most demanding. If you are building or modifying an AI system, you are its provider. Substantial modification of a third-party AI system (changing its intended purpose, retraining it on new data, integrating it so fundamentally that it becomes a different product) can make you a provider of what was originally someone else’s system. The provider is responsible for conformity assessment, technical documentation, post-market monitoring, and (for high-risk systems) the full set of Articles 9–17 requirements.
Deployer obligations apply to organizations using AI systems under their authority. If you are using a vendor's AI system in your operations, you are a deployer, not a provider, unless you have substantially modified it. Deployer obligations include ensuring human oversight is assigned to competent persons, monitoring the system in line with instructions for use, retaining logs for at least six months, and (for high-risk systems affecting fundamental rights) conducting a Fundamental Rights Impact Assessment before first deployment.
The practical problem: many organizations have not catalogued which of their AI systems make them a provider versus a deployer, and they have not recognized that the same system can generate both roles simultaneously if they are both using it and reselling or modifying it.
The Enforcement Timeline Is Already Running
The enforcement timeline for the EU AI Act is not a future concern. Parts of it are already in force.
Since 2 February 2025, two obligations have been binding on all providers and deployers: not just high-risk system providers, and not just EU-established organizations, but every provider and every deployer of AI systems anywhere in the world who falls within the Act's extraterritorial scope.
Article 5 prohibited practices became enforceable on that date. Eight categories of AI system are categorically prohibited in the EU market: systems that deploy subliminal or manipulative techniques causing significant harm, exploit vulnerabilities related to age, disability, or social or economic situation, implement social scoring, assess criminal risk based solely on profiling outside the scope of a specific investigation, build facial recognition databases through untargeted scraping of facial images, use emotion recognition in workplaces and educational institutions, categorize individuals by sensitive biometric attributes, or conduct real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions).
Article 4 AI literacy also became enforceable on 2 February 2025. Every provider and every deployer must ensure that staff working with AI systems have sufficient AI literacy to understand what the system does, what its limitations are, how to oversee it, and what incidents to report. This obligation has no size threshold. It applies to every organization covered by the Act, regardless of headcount.
The high-risk system obligations become fully enforceable on 2 August 2026. The GPAI model obligations (Articles 53–55) became enforceable for newly placed models on 2 August 2025. The GPAI Code of Practice was published in July 2025 and endorsed by the Commission in August 2025: providers that sign it benefit from a presumption of compliance, which functions in practice as a mitigating factor in any enforcement action.
The Fine Structure
Non-compliance carries material consequences. Prohibited practice violations (Article 5) draw the highest tier: fines up to €35 million or 7% of global annual turnover, whichever is higher. Violations of most other obligations, including the high-risk requirements of Articles 9–17 and the deployer obligations, reach €15 million or 3% of global annual turnover, and GPAI model provider violations carry the same €15 million or 3% ceiling. Supplying incorrect or misleading information to authorities reaches €7.5 million or 1% of global annual turnover.
The global turnover measurement is deliberate: the fine is calculated on worldwide revenue, not EU revenue, which makes the financial exposure for large non-EU organizations substantial.
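The "whichever is higher" rule can be made concrete with a small calculation. The function below is an illustrative sketch, and the €2 billion turnover figure is an assumed example, not drawn from any real case.

```python
def max_fine_eur(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """EU AI Act fines are the higher of a fixed amount and a share of
    worldwide (not EU-only) annual turnover."""
    return max(cap_eur, pct * turnover_eur)

# Prohibited-practice tier: EUR 35M or 7% of global turnover, whichever is higher.
# For an assumed company with EUR 2bn worldwide revenue, the turnover prong wins:
fine = max_fine_eur(2_000_000_000, 35_000_000, 0.07)
print(f"EUR {fine:,.0f}")  # EUR 140,000,000
```

For smaller organizations the fixed cap dominates instead (7% of €100 million is €7 million, so the €35 million floor applies), which is why the tier structure bites hardest at the extremes of company size.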
What This Means Practically
The organizations that are most exposed to EU AI Act liability are not necessarily the ones headquartered in Brussels or Berlin. They are the ones building AI systems that are sold or deployed in the EU, and that includes most technology companies with international operations or customer bases.
The first step is determining whether you are in scope at all. That requires an honest examination of: whether you place AI systems on the EU market (as a provider or distributor), whether you deploy AI systems in EU operations (as a deployer), and whether any of those systems are classified as high-risk under Annex III or whether any of them involve profiling of individuals.
If you are in scope, the second step is determining which obligations apply to which systems, and that requires understanding the provider/deployer distinction at a level of detail that many organizations have not yet considered.
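As a rough aid to that second step, the obligation split described in this article can be tabulated per role. The lists below paraphrase only the obligations named in this article; they are illustrative and deliberately non-exhaustive.

```python
# Non-exhaustive paraphrase of obligations named in this article, keyed by role.
OBLIGATIONS: dict[str, list[str]] = {
    "provider": [
        "conformity assessment",
        "technical documentation",
        "post-market monitoring",
        "Articles 9-17 requirements for high-risk systems",
    ],
    "deployer": [
        "assign human oversight to competent persons",
        "monitor the system per instructions for use",
        "retain logs for at least six months",
        "fundamental rights impact assessment before first high-risk deployment",
    ],
}

def obligations_for(roles: set[str]) -> list[str]:
    """An organization holding both roles inherits both obligation sets."""
    return sorted(o for r in roles for o in OBLIGATIONS.get(r, []))

for item in obligations_for({"provider", "deployer"}):
    print("-", item)
```

The key design point is the union: an organization that both builds and uses a system does not get to pick the lighter obligation set, it carries both.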
The window to close these gaps before enforcement accelerates in August 2026 is narrow. Organizations that treat this as a one-off compliance exercise rather than an exercise in building governance capability will find themselves doing crisis remediation when they should be building sustainable programs.
Next: The Seven Mandatory Requirements for High-Risk AI Systems - A Technical Breakdown.
Download our free EU AI Act Applicability Assessment.


