EU AI Act Compliance Checker: Is Your AI System High-Risk?
The EU AI Act is now in force. If your business develops, deploys, or uses AI systems within the European Union — or offers AI systems to EU customers — you need to understand whether your systems are classified as prohibited, high-risk, limited-risk, or minimal-risk. The classification determines your compliance obligations, which range from a simple transparency notice to mandatory conformity assessments, technical documentation, human oversight requirements, and registration in the EU AI Act database.
This checker walks you through the decision tree used by the EU AI Act's Annex III classification framework. It is not a substitute for legal advice — but it will tell you with reasonable confidence where your system sits and what obligations follow.
Before You Begin: Key Definitions
AI system (as defined by the EU AI Act): A machine-based system designed to operate with varying levels of autonomy that, for explicit or implicit objectives, infers from the inputs it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence real or virtual environments.
Provider: Any company or individual that develops an AI system or general-purpose AI model and places it on the market or puts it into service under its own name or trademark.
Deployer: Any company or individual that uses an AI system under its own authority — including using AI tools provided by a third party in a professional context.
Both providers and deployers have obligations under the AI Act. The obligations differ by role.
Step 1: Is Your AI System Prohibited?
Work through each question. If you answer YES to any of them, your AI system is in a prohibited category and cannot be used in the EU.
1.1 Does the system deploy subliminal manipulation techniques below the threshold of conscious awareness that materially distort a person's behavior in a way that causes or is likely to cause harm?
1.2 Does the system exploit specific vulnerabilities of natural persons (age, disability, social or economic situation) to distort their behavior in a way that causes or is likely to cause harm?
1.3 Is the system used for social scoring of natural persons — evaluating or classifying them based on their social behavior or personal characteristics in a way that leads to detrimental or unfavorable treatment?
1.4 Is the system used for real-time remote biometric identification in publicly accessible spaces for law enforcement purposes? (Note: narrow exceptions apply for specific serious crime investigations.)
1.5 Does the system infer the emotions of natural persons in workplace or educational settings? (An exception applies for systems used for medical or safety reasons.)
1.6 Does the system create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage?
1.7 Does the system categorize natural persons based on their biometric data in order to deduce sensitive characteristics such as political opinions, religious beliefs, sexual orientation, or race?
If NO to all: proceed to Step 2.
If YES to any: the system falls under Article 5 prohibited AI practices. It cannot be placed on the market, put into service, or used in the EU. Legal counsel required.
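The Step 1 screen reduces to a simple any-of check: a single YES places the system under Article 5. A minimal sketch in Python (the question keys mirror the numbering above and are illustrative, not an official schema):

```python
# Illustrative sketch of the Step 1 screen; question keys mirror the
# numbering above and are not an official schema.
PROHIBITED_QUESTIONS = {
    "1.1": "subliminal manipulation causing harm",
    "1.2": "exploitation of vulnerabilities (age, disability, situation)",
    "1.3": "social scoring of natural persons",
    "1.4": "real-time remote biometric ID in public spaces (law enforcement)",
    "1.5": "emotion inference in workplace or education",
    "1.6": "untargeted scraping of facial images",
    "1.7": "biometric categorisation by sensitive characteristics",
}

def step1_prohibited(answers: dict[str, bool]) -> list[str]:
    """Return every question answered YES; any hit means Article 5 applies."""
    return [q for q, yes in answers.items() if yes]

answers = {q: False for q in PROHIBITED_QUESTIONS}
answers["1.5"] = True  # e.g. an emotion-inference feature used in HR
print(step1_prohibited(answers))  # prints: ['1.5']
```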
Step 2: Does Your System Fall Under Annex III High-Risk Categories?
The EU AI Act defines eight high-risk sectors in Annex III. Review each category carefully.
Category A: Biometric Identification and Categorization
Does your system perform:
- Remote biometric identification of natural persons (other than the prohibited real-time law-enforcement use covered in Step 1)?
- Biometric categorization of natural persons based on sensitive attributes?
- Emotion recognition (outside the workplace and educational contexts prohibited in Step 1)?
If YES: High-risk classification applies. Note that some biometric systems with narrow, low-risk scope may qualify for the limited exception in Article 6(3) — see Step 3.
Category B: Critical Infrastructure
Is your system a safety component in, or does it perform safety-relevant functions in:
- Management and operation of road traffic?
- Supply of water, gas, heating, electricity?
- Digital infrastructure (internet exchanges, DNS, cloud services of critical importance)?
If YES: High-risk classification applies.
Category C: Education and Vocational Training
Does your system:
- Determine access to, or assignment to, educational institutions or vocational training programs?
- Evaluate learning outcomes or assessments that determine students' progression?
- Monitor students to detect prohibited behavior during tests or examinations?
- Assess the appropriate level of education for individuals?
If YES: High-risk classification applies.
Practical impact for EdTech companies: AI-powered admissions tools, proctoring systems, and adaptive assessment platforms fall here. Simple content recommendation or study aid systems typically do not.
Category D: Employment, Workers Management, and Access to Self-Employment
Does your system:
- Screen, filter, or rank candidates for job applications?
- Make or support decisions about promotion, termination, or task allocation to persons in employment relationships?
- Monitor the performance of employees or contractors?
- Score or rank workers in ways that affect their working conditions?
If YES: High-risk classification applies.
This is the most commercially relevant category for B2B AI. AI recruiting tools, performance management systems, workforce analytics platforms, AI-driven task allocation, and productivity monitoring tools all fall here when they support or make decisions affecting employment conditions.
Partial exception: An AI tool that only provides analytics and reporting without influencing individual employment decisions may fall below the threshold. The key test is whether the AI output directly feeds into decisions about specific individuals.
Category E: Access to Essential Private Services and Public Services and Benefits
Does your system:
- Evaluate the creditworthiness of natural persons or establish their credit score?
- Determine access to health and life insurance and set premiums?
- Determine access to public benefits and services?
- Dispatch or prioritize emergency first response services?
If YES: High-risk classification applies.
Category F: Law Enforcement
Does your system support law enforcement activities including:
- Assessing the risk of a natural person becoming a victim of crime?
- Polygraph testing or reliability assessment of evidence?
- Predicting the occurrence or recurrence of actual or potential criminal offenses?
- Profiling in the context of detection, investigation, or prosecution of criminal offenses?
If YES: High-risk classification applies. Additional restrictions under law enforcement provisions.
Category G: Migration, Asylum, and Border Control
Does your system:
- Assist in the examination of applications for asylum, visa, or residence permits?
- Assess risks related to irregular immigration?
- Support document authenticity assessment in migration contexts?
If YES: High-risk classification applies.
Category H: Administration of Justice and Democratic Processes
Does your system:
- Assist judicial authorities in researching and interpreting facts and law?
- Support application of the law to a concrete set of facts?
- Influence the outcome of elections or referendums?
If YES: High-risk classification applies.
Step 3: The Article 6(3) Exception
Even if your system falls within an Annex III category, Article 6(3) provides a derogation: the system is not high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making, because it:
- Performs only a narrow procedural task
- Improves the result of a previously completed human activity
- Detects decision-making patterns or deviations from prior patterns, without replacing or influencing a completed human assessment absent proper human review
- Performs only a preparatory task to an assessment
At least one of these conditions must be fulfilled, and the exception is never available for systems that perform profiling of natural persons. If it applies, you may document the exception and proceed with limited-risk obligations only. The exception requires documented justification; verbal claims are insufficient.
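One reading of Article 6(3) can be sketched as a small predicate (argument names are illustrative): profiling always keeps the system high-risk, and otherwise at least one narrow-scope condition must hold.

```python
def article_6_3_exception(
    *,
    profiles_individuals: bool,
    narrow_procedural_task: bool = False,
    improves_completed_human_activity: bool = False,
    detects_patterns_with_human_review: bool = False,
    preparatory_task_only: bool = False,
) -> bool:
    """Sketch of the Article 6(3) derogation test (argument names illustrative).

    Profiling of natural persons always keeps the system high-risk;
    otherwise at least one narrow-scope condition must be fulfilled.
    A True result still requires documented written justification.
    """
    if profiles_individuals:
        return False
    return any([
        narrow_procedural_task,
        improves_completed_human_activity,
        detects_patterns_with_human_review,
        preparatory_task_only,
    ])

# A narrow pre-screening helper that never profiles anyone may qualify:
print(article_6_3_exception(profiles_individuals=False,
                            narrow_procedural_task=True))  # prints: True
```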
Risk Classification Summary
Based on your answers above, classify your system:
| Classification | Determination | Primary Obligation |
|---|---|---|
| Prohibited | YES to any Step 1 question | Cannot deploy in EU |
| High-Risk | YES to any Annex III category (Step 2) without valid exception | Full compliance regime (see below) |
| Limited-Risk | Touches limited-risk provisions (chatbots, deepfakes) but not high-risk | Transparency obligations only |
| Minimal-Risk | All Step 1 and Step 2 answers are NO | Voluntary codes of conduct |
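The whole decision tree above collapses into a single classification function. A hedged sketch (the tier names follow the summary table; this is illustrative logic, not a substitute for legal analysis):

```python
def classify(
    step1_yes: bool,
    annex_iii_yes: bool,
    exception_documented: bool,
    limited_risk_features: bool,
) -> str:
    """Collapse the checker into one of the four risk tiers (sketch only)."""
    if step1_yes:
        return "prohibited"
    if annex_iii_yes and not exception_documented:
        return "high-risk"
    if limited_risk_features or annex_iii_yes:
        # A documented Article 6(3) exception, or a chatbot/deepfake feature,
        # still carries transparency obligations.
        return "limited-risk"
    return "minimal-risk"

# An Annex III Category D recruiting tool with no documented exception:
print(classify(False, True, False, False))  # prints: high-risk
```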
What High-Risk Classification Means: Compliance Obligations
If your system is high-risk, these obligations apply under Chapter III of the EU AI Act:
For Providers (Developers)
| Obligation | Detail |
|---|---|
| Risk management system | Documented iterative process throughout lifecycle |
| Data governance | Training data quality, relevance, representativeness |
| Technical documentation | Must accompany every high-risk system — content defined in Annex IV |
| Record-keeping | Automatic logging of system operation where technically feasible |
| Transparency to deployers | Instructions for use; information on capabilities, limitations, foreseeable misuse |
| Human oversight | Design must allow human monitoring, intervention, override |
| Accuracy, robustness, cybersecurity | Documented performance metrics and test results |
| Conformity assessment | Self-assessment (internal control) for most Annex III categories; third-party assessment by a notified body for certain biometric systems |
| EU declaration of conformity | Signed document asserting compliance |
| CE marking | Required before placing on EU market |
| Registration | Register in EU AI Act public database before market placement |
| Post-market monitoring | Ongoing monitoring plan; incident reporting |
For Deployers (Users of High-Risk AI)
| Obligation | Detail |
|---|---|
| Use in accordance with instructions | Cannot use beyond the scope documented by the provider |
| Human oversight | Assign qualified persons to monitor system operation |
| Data input relevance | Ensure input data is appropriate for the system's purpose |
| Fundamental rights impact assessment | Required before deployment for public bodies, private entities providing public services, and deployers of credit-scoring and insurance systems |
| Record-keeping | Logs generated by the system must be kept for a minimum of six months |
| Incident reporting | Serious incidents must be reported to the provider and to market surveillance authorities |
Compliance Timelines
| Provision | Applies from |
|---|---|
| Prohibited AI (Article 5) | February 2, 2025 |
| General-Purpose AI rules (Chapter V) | August 2, 2025 |
| High-risk systems (Annex III, Chapter III) | August 2, 2026 |
| High-risk systems (Annex I, embedded in products) | August 2, 2027 |
If your system falls under Category D (Employment and HR) or similar Annex III categories, the August 2026 deadline is your primary compliance target.
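To keep the timeline actionable, the application dates above can be held in a small lookup. A sketch using Python's standard library (the provision keys are made up for illustration):

```python
from datetime import date

# Application dates from the compliance timeline above (keys illustrative).
DEADLINES = {
    "prohibited_practices": date(2025, 2, 2),
    "gpai_rules": date(2025, 8, 2),
    "high_risk_annex_iii": date(2026, 8, 2),
    "high_risk_annex_i": date(2027, 8, 2),
}

def days_until(provision: str, today: date) -> int:
    """Days until a provision applies; negative once it is already in force."""
    return (DEADLINES[provision] - today).days

print(days_until("high_risk_annex_iii", date(2026, 1, 1)))  # prints: 213
```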
Action Items by Classification
If Prohibited
Discontinue operation of the system in the EU immediately and seek legal counsel. Document the decision and the basis for classification.
If High-Risk
Immediate (now):
- Formally document the risk classification with supporting analysis
- Assign a named internal owner for AI Act compliance
- Begin a gap assessment against the provider or deployer obligations listed above
- Map all high-risk AI systems in use (as deployer) or under development (as provider)
Within 90 days:
- Complete technical documentation (Annex IV format) for each system
- Implement or document risk management processes
- Establish logging/record-keeping infrastructure
- Draft instructions for use (providers) or verify you have received them (deployers)
Before August 2026:
- Complete conformity assessment
- Register system in EU AI Act database
- Implement post-market monitoring plan
- Train all staff involved in oversight roles
If Limited-Risk
- Implement transparency notices where required (chatbot disclosure, deepfake labeling)
- Document the basis for limited-risk classification
If Minimal-Risk
Consider voluntary adherence to EU AI Act codes of conduct — signals trustworthiness to clients and partners operating in regulated sectors.
Industry-Specific Guidance
HR and Recruiting Technology
Nearly all AI tools that evaluate, score, or rank job candidates fall under Category D. This includes:
- CV screening and ranking tools
- AI-powered video interview analysis
- Skill assessment platforms with algorithmic scoring
- Workforce analytics that affect individual employment decisions
What does NOT typically fall under Category D:
- AI tools used by recruiters to draft job descriptions
- Calendar scheduling tools
- Anonymization tools for bias reduction (these support human decision-making rather than making decisions)
Sales and CRM Technology
AI tools in sales contexts are generally minimal or limited-risk unless they:
- Assess creditworthiness (Category E)
- Are used in a financial services context with individual credit decisions
- Score individuals in ways that affect access to financial products
Lead scoring, pipeline prediction, and outbound personalization tools typically fall in the minimal-risk category when used purely for sales prospecting.
Content Generation
General-purpose AI (GPAI) models like Claude, GPT-4, and Gemini have separate obligations under Chapter V. If you are using GPAI APIs to build products, you are a downstream provider and have specific documentation and transparency obligations distinct from the high-risk framework.
FAQ
Q: If I use a third-party AI tool (like a CRM with AI features), am I responsible for compliance?
Yes, as a deployer. You are responsible for ensuring you use the AI system only within the scope documented by the provider, implementing human oversight, maintaining required logs, and reporting incidents. You are not responsible for the provider's technical compliance, but you are responsible for your own use.
Q: We are a startup with a 5-person team. Do the same obligations apply?
Yes, with one exception: micro-enterprises (fewer than 10 employees and annual turnover below €2M) are partially exempted from certain technical documentation requirements and may use simplified conformity assessment procedures. However, the risk classification itself applies equally regardless of company size.
Q: Our AI system is trained outside the EU but used by EU customers. Does the EU AI Act apply?
Yes. The EU AI Act has an extraterritorial scope similar to the GDPR. If the output of your AI system is used in the EU or if you have EU-based deployers, the Act applies to you regardless of where your company is incorporated or where your system was developed.
Q: What are the penalties for non-compliance?
Administrative fines for violations of Article 5 (prohibited practices): up to €35 million or 7% of worldwide annual turnover, whichever is higher. Violations of most other obligations, including the high-risk requirements: up to €15 million or 3% of turnover. Supplying incorrect, incomplete, or misleading information to authorities: up to €7.5 million or 1% of turnover.
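Each fine is capped at the higher of a fixed amount and a share of worldwide annual turnover, so which cap bites depends on company size. A sketch of that arithmetic (category keys are illustrative; the percentages reflect Article 99 of the final text):

```python
# Caps per Article 99: (fixed amount in EUR, share of worldwide turnover).
# Category keys are illustrative, not official terminology.
FINE_CAPS = {
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(category: str, worldwide_turnover_eur: float) -> float:
    """Upper bound of the fine: the higher of the fixed cap and the
    turnover share (for SMEs the Act applies the lower of the two instead)."""
    fixed, share = FINE_CAPS[category]
    return float(max(fixed, share * worldwide_turnover_eur))

# With €1bn turnover, 7% (= €70m) exceeds the €35m fixed cap:
print(max_fine("prohibited_practices", 1_000_000_000))  # prints: 70000000.0
```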
Q: Where can we register our high-risk AI system?
The EU database for high-risk AI systems is set up and maintained by the European Commission, in collaboration with Member States. Registration is required before placing high-risk systems on the EU market. The database URL and registration procedure will be published ahead of the August 2026 deadline.
Related Resources
- EU AI Act Business Guide
- AI Act — Glossary
- AI Compliance Automation
- AI Governance Framework
- ISO 42001 Implementation Guide
- AI Compliance — Glossary
- Responsible AI — Glossary
Need a formal EU AI Act risk classification for your AI systems? Our team provides structured classification assessments with documented legal analysis and a prioritized compliance roadmap. Book a free consultation to discuss your AI system portfolio and the fastest path to compliance.