EU AI Act 2026: Complete Guide for Businesses and Deployers

Last updated May 2026

The EU AI Act (Regulation (EU) 2024/1689) is now in force. This is not future regulation to plan for — parts of it are already being enforced, and the most consequential obligations for businesses deploying AI agents are arriving in 2026 and 2027. If your organization operates in the EU, sells to EU customers, or processes data about EU individuals with AI systems, this regulation applies to you.

This guide explains the regulation accurately, without the usual vendor spin. Timeline, risk tiers, GPAI obligations, deployer vs. provider distinctions, and a 24-question procurement checklist — all based on the regulatory text. The EUR-Lex source is cited throughout: Regulation (EU) 2024/1689.

We also explain how Knowlee bakes the relevant compliance structure into every job in its registry — not as a legal opinion, but as a description of a software architecture that makes the required audit trails tractable.

The timeline: what is enforced when

The EU AI Act entered into force on 1 August 2024. Application is phased:

2 February 2025 — Prohibited uses enforced. Chapter II of the regulation (Article 5) lists AI practices that are entirely prohibited. These include AI systems that use subliminal manipulation to impair autonomous decision-making in harmful ways, AI-based social scoring (by public or private actors), real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), and AI systems that exploit vulnerabilities of specific groups. Organizations using AI systems that fall into these categories were required to cease by 2 February 2025.

2 August 2025 — Governance bodies and GPAI obligations. The AI Office and national competent authorities were required to be operational, and the Chapter V obligations for general-purpose AI model providers began to apply to models placed on the market from this date. Providers of models above the 10^25 FLOPs training-compute threshold, which are presumed to present systemic risk, were required to notify the AI Office.

2 August 2026 — General application: high-risk systems and GPAI enforcement. This is the regulation's general application date. The full obligations for high-risk AI systems listed in Annex III apply: risk management, data governance, technical documentation, transparency, human oversight, accuracy and robustness, and registration in the EU database. The Commission also gains the power to enforce the GPAI requirements (technical documentation, copyright-compliance policy, summary of training data, and, for systemic-risk models, adversarial testing and incident reporting) that have bound providers since August 2025. Deployers using GPAI models in products and services must ensure their own documentation references the upstream provider's GPAI compliance.

2 August 2027 — Annex I high-risk systems and legacy GPAI models. Article 6(1) extends the high-risk regime to AI systems that are safety components of products covered by the Annex I Union harmonisation legislation (machinery, medical devices, vehicles, and similar). GPAI models placed on the market before 2 August 2025 must also be brought into full compliance by this date.

Ongoing — Market surveillance. National market surveillance authorities enforce across all categories. The AI Office handles GPAI models. Penalties reach 35 million EUR or 7% of global annual turnover, whichever is higher, for prohibited-practice violations; up to 15 million EUR or 3% for most other violations; and up to 7.5 million EUR or 1% for supplying incorrect, incomplete, or misleading information to authorities.

Sources: EUR-Lex Regulation 2024/1689; European Commission AI Act implementation page, accessed May 2026.

Risk tiers: what applies to your AI system

The regulation classifies AI systems into four tiers.

Unacceptable risk — prohibited (Article 5)

AI systems listed in Article 5 are banned outright. The categories above (subliminal manipulation, social scoring, real-time biometric ID in public spaces) apply here. If your system uses AI for any of these purposes, it must be discontinued. This is not a compliance challenge — it is a prohibition.

High risk — full obligations (Annex III)

High-risk AI systems are defined in Annex III of the regulation. The categories include:

  • Biometric systems for identification or categorization
  • AI in critical infrastructure (energy, water, transport, ICT)
  • AI for education and vocational training (assessment, admission)
  • AI in employment, worker management, and access to self-employment (recruitment, performance evaluation, task allocation)
  • AI for access to essential private and public services and benefits (credit scoring, insurance, emergency services dispatch)
  • AI in law enforcement (risk profiling, evidence evaluation, predicting recidivism)
  • AI in migration and asylum (assessment, document authentication)
  • AI in administration of justice and democratic processes

High-risk systems must have risk management throughout the lifecycle, data governance (training data quality, representativeness, bias controls), technical documentation, human oversight mechanisms that are implemented and used, accuracy and robustness standards, and registration in the EU database before being placed on the market. Most of these requirements bind the provider; as a deployer of high-risk AI, your own duties center on using the system as intended, keeping oversight and logs, and verifying that the provider-side documentation exists (see the deployer obligations list below).

The full obligations for Annex III systems apply from 2 August 2026, with Annex I product-embedded systems following on 2 August 2027. Organizations building systems that will be in production on those dates should be implementing the compliance architecture now, not as a deadline-eve project.

Limited risk — transparency obligations (Article 50)

AI systems that interact with humans (chatbots, emotion recognition systems, deepfakes) must disclose that users are interacting with an AI, and generative AI systems must label AI-generated content. These obligations apply from the general application date of 2 August 2026.

Minimal risk — voluntary codes of practice

The majority of AI systems fall here — spam filters, recommendation systems, most enterprise productivity tools. No mandatory obligations, but voluntary codes of practice are encouraged.

General-purpose AI (GPAI) — a cross-cutting category

The GPAI category (Chapter V) applies to providers of foundation models: large models trained on broad datasets for a wide range of tasks (GPT-4-class models, Gemini, Llama, Claude, Mistral, etc.). This is the category that affects virtually every AI application builder, because they deploy GPAI-based systems.

GPAI provider obligations (applicable since 2 August 2025, enforceable by the Commission from 2 August 2026):

  • Technical documentation of the model's capabilities and limitations
  • Copyright-compliance policy and summary of training data content
  • Transparency information for downstream deployers
  • For systemic-risk models (>10^25 FLOPs): adversarial testing, incident reporting to the AI Office, and cybersecurity measures (a back-of-envelope illustration of the compute threshold follows this list)
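To make the 10^25 FLOPs threshold concrete, a common community heuristic estimates dense-transformer training compute as roughly 6 × parameters × training tokens. The sketch below applies that heuristic; the heuristic, the numbers, and the code are illustrative assumptions, not a method prescribed by the regulation.

```typescript
// Back-of-envelope check against the 10^25 FLOPs systemic-risk presumption.
// The 6 * N * D estimate for dense transformers is a community heuristic.
const SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25;

function estimateTrainingFlops(parameters: number, trainingTokens: number): number {
  return 6 * parameters * trainingTokens; // ~6 FLOPs per parameter per token
}

// Example: a 500B-parameter model trained on 10T tokens.
const flops = estimateTrainingFlops(5e11, 1e13); // 3e25 FLOPs
console.log(flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS); // true: presumed systemic risk
```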

Deployer obligations when using GPAI: Deployers who build products on top of GPAI models are responsible for the downstream application. The GPAI provider's compliance upstream does not absolve the deployer of their own obligations for how the model is used. If you build a high-risk application on top of a GPAI model, you carry the high-risk obligations.

Provider vs. deployer: who is responsible for what

The regulation distinguishes providers (who develop or place AI systems on the market) from deployers (who use AI systems in their own processes or in the context of professional activities). For most enterprise AI deployments using off-the-shelf models and SaaS platforms, the enterprise is the deployer.

Key deployer obligations for high-risk systems:

  1. Use the system in accordance with the provider's instructions
  2. Designate a natural person to oversee the high-risk system
  3. Maintain logs for at least six months (Article 26)
  4. Implement human oversight as technically feasible and appropriate to the risk
  5. Inform and train staff on AI system capabilities and limitations
  6. Report serious incidents to the market surveillance authority

Deployer obligations do not go away because you use a vendor platform. If you use Salesforce Agentforce to make credit-adjacent decisions, you are the deployer and the obligations apply to you, not to Salesforce.
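As a purely illustrative sketch of obligation 3, the check below walks a hypothetical log directory and flags files still inside the six-month window that a cleanup job must not delete. The directory path, the use of file modification times, and the 183-day approximation are all assumptions, not a prescribed mechanism.

```typescript
import { readdirSync, statSync } from "node:fs";
import { join } from "node:path";

// Hypothetical log directory; substitute your system's actual location.
const LOG_DIR = "./ai-system-logs";
const SIX_MONTHS_MS = 183 * 24 * 60 * 60 * 1000; // ~6 months, approximated as 183 days

const cutoff = Date.now() - SIX_MONTHS_MS;
for (const file of readdirSync(LOG_DIR)) {
  const mtimeMs = statSync(join(LOG_DIR, file)).mtimeMs;
  if (mtimeMs >= cutoff) {
    // Still within the Article 26 minimum retention period: must be kept.
    console.log(`retain: ${file}`);
  }
}
```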

How Knowlee's job registry maps to deployer obligations

Knowlee's jobs registry is not marketed as a compliance tool. It is the operational registry for every agent job in the fleet. By design, its data model contains the fields that EU AI Act deployer obligations require organizations to be able to produce.

Every job in state/jobs.json carries the following fields (a hypothetical record is sketched after this list):

  • risk_level — maps to the risk tier classification obligation. Values include low, medium, high. High-risk jobs trigger human-oversight requirements.
  • data_categories — maps to the data governance obligation. Declares what categories of personal data (if any) the job processes, enabling the data governance documentation requirement.
  • human_oversight_required — a boolean flag that maps to Article 26's human oversight requirement. When true, the audit layer surfaces any run executed without recorded human approval.
  • approved_by and approved_at — the approval audit trail. Every run of a high-oversight job records who authorized it and when. This is the traceable approval record an auditor requests.
  • Execution logs — every run produces a structured log in state/jobs/logs/ with exit code, duration, and reasoning steps. The six-month log retention requirement can be satisfied at the filesystem level.

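As a sketch of what one such record could look like, the snippet below types the fields from the list above; the exact schema, field values, and job name are illustrative assumptions, not a normative Knowlee schema.

```typescript
// Illustrative shape of a single entry in state/jobs.json (hypothetical schema).
interface JobRecord {
  id: string;
  risk_level: "low" | "medium" | "high"; // risk tier classification
  data_categories: string[];             // personal data categories the job processes
  human_oversight_required: boolean;     // true => runs need recorded human approval
  approved_by?: string;                  // who authorized the run
  approved_at?: string;                  // ISO 8601 timestamp of the approval
}

const example: JobRecord = {
  id: "invoice-triage",
  risk_level: "high",
  data_categories: ["contact_details", "financial"],
  human_oversight_required: true,
  approved_by: "j.doe@example.com",
  approved_at: "2026-05-02T09:14:00Z",
};
```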
This architecture does not constitute legal compliance certification. It creates the operational infrastructure that makes compliance tractable — rather than requiring custom instrumentation after the fact, the fields are in the data model from job creation.

24-question procurement checklist for AI buyers

Use these questions when evaluating any AI platform under the EU AI Act framework. This checklist is suitable for IT procurement, legal review, and DPO assessment.

Risk tier identification

  1. Which Annex III categories does this system's use case fall under?
  2. Does the vendor provide a written risk classification for their system as deployed in our use case?
  3. Has the vendor's system been registered in the EU AI Act database (required for Annex III high-risk systems from August 2026)?
  4. Does the vendor maintain technical documentation for the system meeting Article 11 requirements?

Data governance

  5. What training data categories were used, and is this documented?
  6. Does the vendor provide training data bias mitigation documentation?
  7. What personal data categories does the system process in our deployment?
  8. Where is our data hosted, and does hosting meet our data-residency requirements?
  9. Who is the data controller for data processed by this system?

Human oversight

  10. Does the system support a designated human oversight role as required by Article 26?
  11. Can human operators halt, override, or intervene in the system at any point?
  12. What training does the vendor provide for staff responsible for human oversight?
  13. Is human oversight practically feasible for the volume of decisions this system makes?

Audit trail and logging

  14. What logs does the system produce, and at what granularity?
  15. Are logs retained for at least six months?
  16. Are logs tamper-evident and accessible to the designated oversight role?
  17. Can logs be exported in a format suitable for regulatory audit?

GPAI and upstream compliance

  18. Which foundation model(s) does this system use?
  19. Has the foundation model provider complied with GPAI obligations (applicable since 2 August 2025)?
  20. Does the vendor provide documentation of the GPAI model's capabilities and limitations?

Incident reporting

  21. Does the vendor have an incident reporting process for serious incidents?
  22. What is the vendor's SLA for notifying deployers of incidents affecting our deployment?
  23. How do we report serious incidents to our national market surveillance authority?

Contractual

  24. Do our contracts with this vendor assign EU AI Act compliance responsibilities correctly between provider and deployer?

Per-vendor-type analysis

Foundation model providers (OpenAI, Anthropic, Mistral, Aleph Alpha, etc.): GPAI obligations have applied since 2 August 2025, with Commission enforcement powers from 2 August 2026. Model providers must publish technical documentation, a copyright-compliance policy, and training data summaries. Systemic-risk providers must additionally conduct adversarial testing and report incidents to the AI Office. See /glossary/sovereign-ai for the data-residency dimension.

Agent platforms (Knowlee, Salesforce Agentforce, Microsoft Copilot Studio, etc.): As deployers of GPAI-based systems, platforms carry deployer obligations for how they use upstream models. As providers of systems placed on the market, they additionally carry provider obligations for the agent platforms themselves. The double layer is not optional — buyers should request documentation for both.

Enterprise deployers (any organization running AI agents in production): Prohibited-practice rules have applied since February 2025; full obligations for Annex III high-risk and GPAI-dependent systems apply from 2 August 2026; Annex I product-embedded high-risk systems follow in August 2027. The six-month log retention, human oversight designation, and incident reporting obligations are not future work: they apply on your deployment date.

EU AI Act and ISO 42001 alignment

ISO 42001 (AI Management System Standard) was published in December 2023. It provides a management system framework for responsible AI that aligns closely with the EU AI Act's process requirements: risk assessment, data governance, human oversight, incident management, and continual improvement. For organizations that have or are pursuing ISO 42001 certification, the EU AI Act compliance documentation often overlaps significantly.

Key alignment points: ISO 42001 Clause 6 (risk treatment) maps to the EU AI Act's Article 9 risk management system. ISO 42001 Clause 8 (operation) maps to data governance and human oversight requirements. ISO 42001 Clause 10 (improvement) maps to incident reporting and corrective action. See /glossary/iso-42001 for the full alignment guide.

Frequently asked questions

Does the EU AI Act apply to companies outside the EU? Yes, if the system is used in the EU or produces outputs that affect persons in the EU. The regulation has extraterritorial scope comparable to GDPR. A US company whose AI system is used by EU employees or customers must comply.

What is the penalty for prohibited-use violations? Up to 35 million EUR or 7% of total worldwide annual turnover, whichever is higher. This is the maximum; actual penalties are set by national market surveillance authorities based on the specific violation and context.

Are general chatbots affected by the GPAI obligations? GPAI obligations apply to model providers, not to businesses that use AI chatbots as off-the-shelf tools. If your organization builds a chatbot product and places it on the market for external users, you are likely a provider as well as a deployer. If you use an internal AI assistant, you are primarily a deployer.

What is "human oversight" in practice? Article 14 of the regulation defines human oversight as the ability for natural persons to understand, monitor, and intervene in AI system operation. In practice: a designated human who can stop or override the system, who is trained on its capabilities and limitations, and who performs periodic checks on its outputs. "Human in the loop" is not sufficient if the human has no ability to intervene — "human on the loop" (monitor and override) is the baseline.

Does Knowlee's architecture certify EU AI Act compliance? No. Knowlee's job registry carries the fields that the regulation requires deployers to be able to produce, and generates the audit logs required by Article 26. Whether a specific deployment is compliant depends on the use case, risk tier, human oversight implementation, and documentation completeness. Knowlee makes compliance tractable; it does not certify it.

When do high-risk AI obligations actually start? For Annex III systems (the list in this guide): 2 August 2026. For AI safety components of products under the Annex I harmonisation legislation (Article 6(1)): 2 August 2027. Systems already on the market before the relevant date are generally caught only when they undergo a significant design change, though systems used by public authorities must comply by 2 August 2030. In practice, regulated enterprises under DORA, NIS2, or sector-specific frameworks are building compliance architecture now rather than waiting.

Related reading