AI Act Buyers Checklist 2026: 24 Questions for Every Agentic AI Vendor
Last updated May 2026
The EU AI Act (Regulation 2024/1689) is not future compliance work. The prohibited-use provisions have applied since 2 February 2025, and general-purpose AI model obligations since 2 August 2025. High-risk system provisions (Articles 9–15) apply from 2 August 2026, within months for new deployments. The window for treating compliance as a bolt-on retrofit has closed.
For procurement teams, this changes the vendor evaluation process. Buying an agentic AI platform in 2026 without checking the AI Act posture is like buying an HR system without checking GDPR readiness in 2018 — it creates liability that will surface at the worst possible time.
This checklist gives procurement leads, legal teams, and CISOs 24 concrete questions to ask any agentic AI vendor. It is organized into four groups of six: governance fields, data and classification, audit trail and retention, and human oversight and portability. It closes with a comparison matrix across the most commonly shortlisted platforms.
Conflict of interest disclosure. Knowlee publishes this checklist. We have designed the questions to reflect genuine AI Act obligations — not to favor our product. Where other vendors answer better than Knowlee on specific questions, we say so. Buyers should verify every vendor's answers against publicly available documentation before contract signature.
Group 1: Governance fields per agent run (questions 1–6)
These six questions establish whether the platform's data model captures the fields auditors will request when reviewing compliance with AI Act Articles 9 (risk management), 13 (transparency), and 16 (provider obligations, including registration).
Q1. Does the platform record a risk classification for every agent run or every registered automation?
Vendors should be able to show a data field — not a marketing claim — that contains a risk level (e.g., minimal, limited, high, unacceptable) mapped to the AI Act's four-tier classification. If the answer is "you can tag it yourself in a free-text field," the governance is not structural.
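To make the distinction concrete, here is a minimal sketch of what a structural risk field looks like, using field names drawn from this checklist (risk_level, data_categories, human_oversight_required); the schema itself is hypothetical, not any vendor's actual data model:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four-tier classification."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

@dataclass(frozen=True)
class GovernanceRecord:
    """Governance metadata attached to a registered automation.

    Illustrative only; verify against the vendor's actual schema.
    """
    automation_id: str
    risk_level: RiskTier           # enforced enum, not a free-text tag
    data_categories: list[str]     # e.g. ["personal_data", "gdpr_art9_special"]
    human_oversight_required: bool
```

The point of the enum is that an invalid or missing risk tier fails at write time, whereas a free-text tag silently accepts anything.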
Q2. Does the platform record the data categories processed by each agent or automation?
Article 10 of the AI Act requires high-risk AI systems to use data that is subject to quality and provenance controls. The platform should track whether personal data or special categories of personal data (Article 9 GDPR) are processed in each run.
Q3. Is the human oversight requirement recorded per automation, not just per deployment?
Article 14 of the AI Act requires human oversight capability for high-risk systems. The platform should record, for each registered automation, whether human oversight is required before execution, at specific decision points, or post-execution. A blanket "human-in-the-loop" option is insufficient if it is not tied to specific agents.
Q4. Does the platform record who approved each automation and when?
Article 16(d) of the AI Act requires providers of high-risk systems to keep the documentation described in Article 18, which covers approvals and significant changes. The platform should store approved_by (identity) and approved_at (timestamp) for each registered automation, and surface any run of an unapproved automation as an incident.
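A sketch of the enforcement logic this implies, with a stand-in incident hook (both function names are hypothetical; the point is that an unapproved run becomes an incident, not a warning):

```python
from datetime import datetime, timezone

def raise_incident(**event) -> None:
    """Stand-in for the platform's incident pipeline (hypothetical)."""
    print("INCIDENT:", event)

def check_approval(record: dict) -> None:
    """Refuse to execute an automation that lacks a recorded approval."""
    if not record.get("approved_by") or not record.get("approved_at"):
        raise_incident(
            automation_id=record["automation_id"],
            kind="unapproved_run_attempt",
            at=datetime.now(timezone.utc).isoformat(),
        )
        raise PermissionError(
            f"Automation {record['automation_id']} has no recorded approval"
        )
```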
Q5. Are governance fields editable only by authorized roles, with change history?
Governance fields that can be changed by any user without audit trail are not governance fields. They are editable metadata. Verify that the platform enforces role-based access control on governance fields and that changes are timestamped.
Q6. Can the platform export governance metadata in a machine-readable format for external audit?
Auditors will want to ingest governance data into their own tooling. The platform should support JSON or CSV export of the governance registry — ideally via API.
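What "machine-readable" means in practice can be sketched with the standard library alone (illustrative; a real platform would expose this behind an authenticated API):

```python
import csv
import io
import json

def export_registry(records: list[dict], fmt: str = "json") -> str:
    """Serialize the governance registry for external audit tooling."""
    if fmt == "json":
        return json.dumps(records, indent=2, default=str)
    if fmt == "csv":
        buf = io.StringIO()
        fieldnames = sorted({key for r in records for key in r})
        writer = csv.DictWriter(buf, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(records)
        return buf.getvalue()
    raise ValueError(f"unsupported format: {fmt}")
```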
Group 2: Data residency, sub-processors, and AI Act classification (questions 7–12)
Q7. Where is data processed and stored? Is this contractually guaranteed?
"EU region available" is not a contractual guarantee. The contract should specify the data processing location. For buyers under DORA, NIS2, or sector-specific rules, the contract should also specify that the location cannot change without prior notification and consent.
Q8. Who are the sub-processors? Is the full list available?
Article 28 GDPR requires a list of sub-processors. For agentic platforms that call external APIs, ingest third-party data, or use managed model providers, the sub-processor chain matters. Ask for the full list, not a summary.
Q9. Has the vendor published a conformity assessment or self-assessment for AI Act risk tier?
For high-risk AI systems, Article 43 requires a conformity assessment. For limited- and minimal-risk systems, self-assessment documentation is best practice. Ask what the vendor has produced and request the document.
Q10. Does the vendor's GPAI model supplier provide a model card or transparency documentation as required by Article 53?
If the platform uses a third-party general-purpose AI model (GPT-4, Claude, Gemini, Command R), that model supplier has had transparency obligations under Article 53 of the AI Act since 2 August 2025. Ask whether the vendor can confirm their model supplier is compliant.
Q11. How does the platform handle data subjects' rights (access, erasure, portability) for data processed by agents?
If an agent processes personal data and a data subject exercises their Article 17 GDPR erasure right, how does the platform propagate that erasure through agent outputs, logs, and memory? This is more complex for platforms with persistent cross-agent memory.
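A simplified sketch of what erasure propagation has to touch (the store interfaces are hypothetical; real platforms also need to handle backups and derived artifacts):

```python
def erase_subject(subject_id: str, run_logs, agent_memory, outputs) -> dict:
    """Propagate an Article 17 GDPR erasure request across every store
    where an agent may have persisted the subject's personal data.

    Each argument is a hypothetical store exposing redact(subject_id),
    which returns the number of entries redacted. The tally feeds the
    erasure confirmation record.
    """
    return {
        "run_logs_redacted": run_logs.redact(subject_id),
        "memory_entries_redacted": agent_memory.redact(subject_id),
        "outputs_redacted": outputs.redact(subject_id),
    }
```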
Q12. Does the platform support data minimization — limiting data access by each agent to what is necessary for its specific task?
Article 10 of the AI Act and Article 5(1)(c) GDPR both require data minimization. An agentic platform should allow data access to be scoped per agent, rather than giving every agent access to the full data estate.
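In code, per-agent scoping amounts to an allow-list checked before every read; a minimal sketch (real platforms enforce this at the connector or credential layer, not in application code):

```python
def read_source(source: str, query: str) -> list:
    """Stand-in for a real data connector (hypothetical)."""
    return []

class ScopedDataAccess:
    """Per-agent data scoping: the agent reads only allow-listed sources."""

    def __init__(self, agent_id: str, allowed_sources: set[str]):
        self.agent_id = agent_id
        self.allowed_sources = allowed_sources

    def fetch(self, source: str, query: str) -> list:
        if source not in self.allowed_sources:
            raise PermissionError(
                f"Agent {self.agent_id} is not scoped to source {source!r}"
            )
        return read_source(source, query)
```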
Group 3: Audit trail format and retention (questions 13–18)
Q13. Is there a per-run audit log for every agent execution, with input, output, and intermediate steps?
Article 12 of the AI Act requires record-keeping for high-risk systems. The audit log should capture what data the agent received, what tools it called, what outputs it produced, and the reasoning steps (if applicable). A "job ran successfully" log is not an audit trail.
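As a yardstick, here is a sketch of the minimum fields a per-run audit record should carry (field names are illustrative; note that it also anticipates Q17's model and prompt versioning):

```python
from dataclasses import dataclass, field

@dataclass
class RunAuditRecord:
    """One audit entry per agent execution. Field names are illustrative."""
    run_id: str
    automation_id: str
    started_at: str                 # ISO 8601 timestamp
    finished_at: str
    inputs_digest: str              # hash of, or reference to, the input payload
    tool_calls: list[dict] = field(default_factory=list)   # each: tool, args, result ref
    intermediate_steps: list[str] = field(default_factory=list)
    outputs_ref: str = ""           # pointer to the stored output artifact
    model_version: str = ""         # Q17: which model produced this run
    prompt_template_hash: str = ""  # Q17: which prompt version was in effect
```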
Q14. How long are audit logs retained, and who can access them?
The AI Act does not set a single retention period for all system types, but logs for high-risk systems must be kept for at least six months, longer where other Union or national law requires (Article 19(1) for providers; Article 26(6) for deployers). Ask what the platform's default retention is and whether it is configurable.
Q15. Are audit logs stored separately from operational data and protected against modification?
Audit logs that can be modified by the same user who ran the agent are not audit logs. Verify that logs are append-only and that modification requires elevated privilege with its own audit event.
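One common way to make "append-only" verifiable is a hash chain, where each entry commits to its predecessor; a minimal sketch (not a substitute for WORM storage or proper access control):

```python
import hashlib
import json

class HashChainedLog:
    """Append-only log where tampering with any entry breaks the chain."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries: list[dict] = []
        self._last_hash = self.GENESIS

    def append(self, event: dict) -> str:
        payload = json.dumps(event, sort_keys=True)
        entry_hash = hashlib.sha256((self._last_hash + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": self._last_hash, "hash": entry_hash})
        self._last_hash = entry_hash
        return entry_hash

    def verify(self) -> bool:
        prev = self.GENESIS
        for e in self.entries:
            payload = json.dumps(e["event"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```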
Q16. Can audit logs be exported to SIEM or compliance tooling (Splunk, Elastic, Azure Sentinel)?
In-platform audit views are insufficient for enterprise compliance programs. The platform should support log export to your SIEM in a standard format (JSON, CEF, LEEF).
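For reference, a CEF event line is a fixed pipe-delimited header followed by key=value extensions; a minimal formatter, with placeholder vendor and product names (a sketch, not a full CEF implementation; extension-value escaping is omitted):

```python
def to_cef(event: dict) -> str:
    """Format an audit event as a CEF line for SIEM ingestion.

    Header: CEF:Version|Vendor|Product|DeviceVersion|SignatureID|Name|Severity
    Pipes and backslashes in header fields must be escaped.
    """
    def esc(value: str) -> str:
        return str(value).replace("\\", "\\\\").replace("|", "\\|")

    header = "|".join([
        "CEF:0",
        esc("ExampleVendor"),      # placeholder vendor name
        esc("AgentPlatform"),      # placeholder product name
        esc("1.0"),
        esc(event.get("kind", "agent_run")),
        esc(event.get("name", "Agent run completed")),
        str(event.get("severity", 3)),
    ])
    extension = " ".join(f"{k}={v}" for k, v in event.get("fields", {}).items())
    return f"{header}|{extension}"
```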
Q17. Does the platform log model version and prompt template for each agent run?
If the model or prompt changes mid-deployment, the audit trail should reflect which version produced which output. This is essential for incident investigation and for reproducibility in regulatory review.
Q18. Is there a documented incident-response procedure for cases where an agent run produces a harmful or non-compliant output?
Ask the vendor for their documented escalation path. Who gets notified? What is the containment procedure? What is the retrospective requirement? A platform without this procedure is not production-ready for regulated use.
Group 4: Human oversight workflow and portability (questions 19–24)
Q19. Does the platform provide a human approval gate that can be required before a flagged agent run executes?
Article 14 of the AI Act requires "effective oversight" for high-risk systems. This means the ability to stop or redirect an agent before it acts. Ask whether the platform supports a mandatory approval gate (not just a notification) for designated high-risk automations.
Q20. Can a human pause, redirect, or terminate an in-flight agent run without losing the audit record?
Stopping an agent should not erase the run record. The platform should support graceful interruption with a full log of what happened before the stop.
Q21. Is there a decision console or review queue where flagged agent outputs wait for human approval before downstream action?
This is distinct from a pre-execution gate (Q19). Some agent outputs — a drafted contract, a generated communication, a financial forecast — should require human sign-off before being acted upon downstream. Does the platform support this at the output level?
Q22. What is the exit procedure if the buyer terminates the contract? Is data portable?
DORA Article 28 requires financial entities to maintain documented exit strategies for ICT service providers, and the AI Act's transparency obligations point the same way: the contract must support exit without data loss. Ask for the documented exit procedure: format, timeline, and cost of data export. "Contact sales" is not an answer.
Q23. Can the buyer bring their own model? Can the platform run without a proprietary model provider?
Model provider lock-in is a portability risk. If the platform only works with one model provider, and that provider changes pricing, terms, or availability, the buyer's compliance posture changes. Verify whether the platform supports pluggable model backends.
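What "pluggable model backends" looks like at the interface level, sketched with a Python Protocol (all names are illustrative):

```python
from typing import Protocol

class ModelBackend(Protocol):
    """Any model provider the platform can swap in must satisfy this."""
    def complete(self, prompt: str, max_tokens: int = 1024) -> str: ...

class SelfHostedBackend:
    """Example: a locally hosted model behind an HTTP endpoint (hypothetical)."""
    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    def complete(self, prompt: str, max_tokens: int = 1024) -> str:
        raise NotImplementedError("call your local inference server here")

def run_agent(backend: ModelBackend, task: str) -> str:
    # The agent logic never names a specific provider, only the interface,
    # so swapping providers does not change the compliance-relevant code path.
    return backend.complete(f"Plan and execute: {task}")
```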
Q24. Does the platform provide a test or sandbox environment where compliance scenarios can be validated before production deployment?
Regulated buyers need to validate governance fields, audit trails, and oversight workflows before going live. A platform that does not offer a testable sandbox is harder to validate for compliance purposes.
Vendor comparison matrix
The matrix below scores eight platforms against the 24 questions, grouped by the four question blocks. Scoring: Y = Yes, documented and available; P = Partial or via configuration; N = No; ND = Not disclosed.
| Platform | Q1–6 (Governance) | Q7–12 (Data/Classification) | Q13–18 (Audit trail) | Q19–24 (Oversight/Portability) |
|---|---|---|---|---|
| Knowlee | Y: native risk_level, data_categories, human_oversight_required, approved_by, approved_at fields | Y: EU legal entity, self-hosted option, configurable sub-processors | Y: per-run structured logs, state/jobs/logs/ directory, configurable retention | Y: approval gate, decision console, model-agnostic, portable artifacts |
| Salesforce Agentforce | P: Salesforce trust layer; risk classification requires custom configuration | P: EU Hyperforce regions available; sub-processors: Salesforce list | P: Salesforce audit trail via Event Monitoring add-on | P: human oversight via flows; exit portability is Salesforce-standard |
| Microsoft Copilot Studio + Agent Framework | P: Purview compliance center integration; risk fields require Purview configuration | Y: EU Azure regions; Purview data classification | P: Purview audit via Microsoft Compliance Center | P: human approval via Power Automate; portability via Microsoft data export |
| Aleph Alpha PhariaAI | P: compliance tooling in progress; risk fields not standardized in platform | Y: German legal entity; EU-resident; not subject to the US CLOUD Act | P: per-run logging documented; SIEM export not publicly confirmed | P: human oversight capability; exit terms on request |
| Mistral | P: governance metadata not a first-class product feature | P: French legal entity; EU data processing; sub-processors on request | P: basic run logging; structured audit trail not confirmed | P: human oversight via customer-built wrapper; model portability yes |
| Dust | P: governance registry not native; workflow metadata available | P: French legal entity; EU hosting | P: workflow history available; structured audit trail not confirmed | P: human approval steps configurable; exit terms on request |
| n8n | N: governance fields require custom workflow implementation | P: EU legal entity (Germany); self-hosted option available | P: execution logs per workflow; SIEM export via webhook | P: pause/stop available; model-agnostic; data fully portable (self-hosted) |
| CrewAI Enterprise | P: compliance fields not native; available via custom configuration | P: US entity; self-hosted option available | P: observability tooling for run history; structured audit trail not confirmed | P: human-in-loop capability; model-agnostic; exit portability via self-host |
How to use this matrix. "Y" means the capability is documented and available as a first-class platform feature. "P" means it is achievable but requires configuration, add-ons, or implementation work. "N" means it is not available. Buyers should verify every "Y" and "P" against current vendor documentation before relying on this matrix in a procurement decision. Vendor capabilities evolve; this reflects May 2026.
The Knowlee advantage explained. Governance fields are first-class data model entries in Knowlee's jobs registry — not CRM tags, not dashboard widgets. When an auditor asks "show me every run last quarter where human oversight was required, who approved it, and when," the answer is a structured JSON export from the registry. Other platforms can produce equivalent data with custom configuration work; Knowlee ships it by default.
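Assuming a JSON export of per-run records along the lines sketched earlier, the auditor's question reduces to a filter (field names are the illustrative ones used throughout this checklist, not a documented export format):

```python
import json
from datetime import datetime

def oversight_runs_in_window(export_path: str, start_iso: str, end_iso: str) -> list[dict]:
    """Every run where human oversight was required, with approver and timestamp."""
    start = datetime.fromisoformat(start_iso)
    end = datetime.fromisoformat(end_iso)
    with open(export_path) as f:
        runs = json.load(f)
    return [
        {"run_id": r["run_id"], "approved_by": r["approved_by"], "approved_at": r["approved_at"]}
        for r in runs
        if r.get("human_oversight_required")
        and start <= datetime.fromisoformat(r["started_at"]) < end
    ]
```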
How to use this checklist in practice
- Send questions 1–24 to each shortlisted vendor as a written request for information. Require written answers — verbal commitments are not auditable.
- Weight the answers by your regulatory exposure. If you are under DORA, weight questions 7, 8, 14, and 22 heavily. If you are deploying a high-risk AI system under the AI Act, weight questions 1–6 and 13–18.
- Request evidence, not claims. For each "yes" answer, ask for the documentation that proves it: a screenshot of the governance field, an example audit log export, the DPIA or conformity assessment document.
- Test in a sandbox before committing. Use questions 20–21 to validate the oversight workflow end-to-end before any production deployment.
For the regulatory text underlying these questions, see AI Act compliance for agentic platforms 2026. For sovereign deployment considerations, see sovereign agentic AI platforms 2026. For self-hosting considerations, see self-hosted AI agent platforms 2026.
Frequently asked questions
Is this checklist legally sufficient for AI Act compliance? No. This checklist is a procurement evaluation tool. AI Act compliance requires a conformity assessment (for high-risk systems), ongoing risk management, documentation, and technical implementation — not a vendor questionnaire. Engage qualified legal and technical advisors for compliance work.
Do all agentic AI platforms need to comply with the AI Act? Platforms used in the EU for business purposes are in scope. The specific obligations depend on the risk tier of the use case. Most agentic sales, marketing, and operations automations fall into the minimal- or limited-risk tiers. Some legal, HR, and financial decision-support use cases may qualify as high-risk under Annex III of the AI Act. Buyers should assess each use case individually.
What is the difference between AI Act compliance and ISO 42001 certification? ISO 42001 is an AI management system standard providing a process framework for responsible AI governance. It complements the AI Act but does not substitute for it. AI Act compliance is a legal obligation; ISO 42001 certification is a voluntary standard. Some vendors use ISO 42001 certification as evidence of governance maturity.
When does the AI Act become fully enforceable? Prohibited-use provisions: 2 February 2025 (in force). General-purpose AI model obligations: 2 August 2025 (in force). High-risk system obligations under Annex III: 2 August 2026; high-risk systems embedded in products regulated under Annex I: 2 August 2027. See EUR-Lex Regulation 2024/1689 for the full timeline.
What happens if an agentic platform we procured turns out not to be AI Act compliant? Under Article 99 of the AI Act, non-compliance can result in fines of up to €35 million or 7% of global annual turnover (whichever is higher) for prohibited practices, and up to €15 million or 3% for most other infringements. Deployers who knowingly use non-compliant systems may share liability. Contracts should include AI Act compliance warranties from vendors.