AI Act Compliance Checker (Free): Find Your Risk Classification in 5 Minutes

The EU AI Act (Regulation (EU) 2024/1689) is in force, and its staggered enforcement timeline has already begun: the Article 5 prohibitions have been binding since February 2025. From August 2026, the high-risk regime under Article 6 and Annex III applies to most categories. From August 2027, it covers the remaining safety-critical product categories listed in Annex I.

Most operators we talk to are not late on the substance — they are late on the classification step. Until you know whether your AI system is prohibited, high-risk, limited-risk, or minimal-risk, you cannot scope a compliance programme, write a budget, or assign an owner. And until you know which Articles of Chapter III actually bind you, you are guessing.

This is what the AI Act Compliance Checker is for: a 7-question wizard that returns a preliminary classification under Article 6 and Annex III, plus the article-by-article obligation checklist for the class you land in. Honest, deterministic, auditable. Not legal advice — a screening that gets you from "unknown" to "structured first draft" in under five minutes.

Try the tool now →


Why a 7-question checker (and not more)

A full conformity assessment under Article 43 takes weeks. A fundamental rights impact assessment under Article 27 takes a structured stakeholder workshop. A quality management system under Article 17 takes months to stand up. None of that fits in a free web tool, and we do not pretend otherwise.

What does fit in a 5-minute screening is the classification gate. The EU AI Act is layered, and once you know which layer you are in, the rest of the work has clear shape:

  • Article 5 — prohibited practices. Narrow list. If you are here, you stop.
  • Article 6 + Annex III — high-risk. The meatiest set of obligations, but a finite list of categories.
  • Article 50 — limited-risk transparency. Mostly applies to chatbots, voice agents, and synthetic content (deepfakes).
  • Residual — minimal-risk. No mandated obligations beyond voluntary codes of conduct under Article 95.

The 7 questions in our tool are the minimum signal needed to route into one of these four buckets with confidence. More questions would not make the screening more accurate — they would make it slower and discourage completion.
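To make the routing concrete, here is a minimal sketch of the four buckets and their priority order in TypeScript, the language the checker's own logic ships in. The type and constant names are illustrative, not the tool's actual identifiers.

    // The four risk classes, in the priority order the checker evaluates them.
    // Names are illustrative, not the tool's actual identifiers.
    type RiskClass = "prohibited" | "high-risk" | "limited-risk" | "minimal-risk";

    const EVALUATION_ORDER: RiskClass[] = [
      "prohibited",    // Article 5: checked first; a hit ends the routing
      "high-risk",     // Article 6 + Annex III
      "limited-risk",  // Article 50 transparency
      "minimal-risk",  // residual class
    ];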

How the 7 questions map to the regulation

Each question targets a specific Article. The result is auditable: you can see why you landed where you did.

Q1 — System category. This is the central Annex III question. The answer maps directly to one of the eight high-risk categories (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice) plus the Article 5 prohibited subset (real-time biometric identification in public spaces) and the GPAI overlay under Chapter V.

Q2 — Domain of use. The EU AI Act applies to providers and deployers placing AI systems on the EU market or affecting EU users — even if the operator is non-EU. Q2 captures whether the regulation has territorial reach over your deployment at all.

Q3 — Decision impact. Annex III high-risk classification is gated by consequential decisions affecting individuals. A system that surfaces information for a human decision-maker is treated differently from one that auto-decides hiring or credit. Q3 + Q1 together drive the high-risk vs limited-risk routing.

Q4 — Subject interaction. Article 50 transparency obligations attach to user-facing AI: chatbots, voice agents, virtual assistants, and synthetic content. Q4 captures that surface.

Q5 — Foundation model component. Training, fine-tuning, or merely calling a general-purpose AI (GPAI) model carries different obligations under Chapter V (Articles 51-55). Q5 layers GPAI duties on top of the base classification.

Q6 — Deployment status. The enforcement timeline matters. A live-EU deployment in 2026 has different urgency than a 2028 plan. Q6 informs the remediation roadmap that lands in your PDF.

Q7 — Operator size. SMEs receive proportionate penalties under Article 99. Public authorities have a mandatory FRIA under Article 27. Large enterprises receive neither accommodation. Q7 personalises the checklist.
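Taken together, the seven answers form a small structured record. A hypothetical TypeScript shape (field names and answer values are illustrative; two of them, "employment-hr" and "consequential", mirror the evidence example in the FAQ below):

    // Hypothetical shape of the seven answers. Field names and values are
    // illustrative, not the tool's actual schema.
    interface CheckerAnswers {
      q1SystemCategory:
        | "biometrics" | "critical-infrastructure" | "education" | "employment-hr"
        | "essential-services" | "law-enforcement" | "migration" | "justice"
        | "realtime-biometric-id" | "other";
      q2DomainOfUse: "eu-market" | "affects-eu-users" | "outside-eu";
      q3DecisionImpact: "consequential" | "advisory" | "informational";
      q4SubjectInteraction: "chatbot" | "voice-agent" | "synthetic-content" | "none";
      q5GpaiComponent: "trains" | "fine-tunes" | "api-consumer" | "none";
      q6DeploymentStatus: "live-eu" | "pre-launch" | "planned";
      q7OperatorSize: "sme" | "enterprise" | "public-authority";
    }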

Risk classes explained

Prohibited (Article 5)

Narrow list: real-time remote biometric identification in public spaces (with carve-outs for serious crime under Member State authorisation), social scoring, manipulative or exploitative systems, untargeted scraping of facial images from the internet, emotion recognition in workplaces and education (except for medical or safety reasons), biometric categorisation by sensitive characteristics, and predictive policing based solely on profiling.

If your system lands here, it cannot be placed on the EU market. There is no compliance pathway — only redesign or geo-fencing out of the EU.

High-risk (Article 6 + Annex III)

This is the obligation-heavy class. Annex III enumerates eight categories — biometrics, critical infrastructure, education, employment & HR, essential services, law enforcement, migration, justice & democracy — and within each, a list of specific system descriptions that the Commission can amend over time (Article 7).

If your system fits and makes consequential decisions about individuals, you inherit the full Section 2 / Section 3 obligation set: risk management (Art. 9), data governance (Art. 10), technical documentation (Art. 11), logging (Art. 12), transparency to deployers (Art. 13), human oversight (Art. 14), accuracy and cybersecurity (Art. 15), provider obligations (Art. 16), quality management (Art. 17), conformity assessment (Art. 43), declaration of conformity and CE marking (Arts. 47-48), and database registration (Art. 49). Public-sector deployers (and certain private deployers in essential services) additionally owe a fundamental rights impact assessment (Art. 27).

This is months of work, and it should be sequenced with legal counsel. Our checker tells you that you are here. The PDF report breaks down what each Article practically requires for an organisation of your size.
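As a sketch of how such a checklist could be represented as data (the descriptions paraphrase the Articles listed above; the structure itself is an assumption, not the tool's actual schema):

    // Illustrative article-to-obligation map a PDF checklist could be
    // generated from. Descriptions paraphrase the Articles cited above.
    const HIGH_RISK_OBLIGATIONS: Record<string, string> = {
      "Art. 9":     "Risk management system maintained across the lifecycle",
      "Art. 10":    "Data governance for training, validation, and test sets",
      "Art. 11":    "Technical documentation",
      "Art. 12":    "Automatic event logging",
      "Art. 13":    "Transparency and instructions for deployers",
      "Art. 14":    "Human oversight measures",
      "Art. 15":    "Accuracy, robustness, and cybersecurity",
      "Art. 16":    "Provider obligations",
      "Art. 17":    "Quality management system",
      "Art. 43":    "Conformity assessment",
      "Art. 47-48": "EU declaration of conformity and CE marking",
      "Art. 49":    "Registration in the EU database",
    };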

Limited-risk (Article 50)

Most operators with chatbots, voice agents, virtual assistants, or generative-content products land here. Article 50 obligations are real but narrowly scoped:

  • Inform users they are interacting with AI (unless obvious from context).
  • Mark AI-generated audio, image, video, and text content in a machine-readable way (Article 50(2)).
  • Disclose deepfake content as artificially generated or manipulated (Article 50(4)).

There is no conformity assessment, no CE marking, no database registration. The biggest risk here is drift into Annex III — for example, a customer-service chatbot that crosses into automated benefits-eligibility decisions becomes a high-risk system, and the obligation set changes.
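A minimal sketch of the first of these duties in code, assuming a hypothetical chatbot reply type (the names are invented for illustration; the behaviour paraphrases Article 50(1)):

    // Illustrative Article 50(1)-style disclosure wrapper for a chatbot reply.
    // Type and function names are invented for this example.
    interface BotReply {
      text: string;
      aiDisclosure?: string;
    }

    function withAiDisclosure(reply: BotReply, obviousFromContext: boolean): BotReply {
      // Inform users they are interacting with AI, unless that is already
      // obvious from the context of use.
      if (obviousFromContext) return reply;
      return { ...reply, aiDisclosure: "You are chatting with an AI assistant." };
    }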

Minimal-risk

Most internal-productivity AI lands here: spam filters, recommendation engines for non-consequential decisions, AI-assisted writing tools used internally, and the like. No mandated obligations. Article 95 encourages voluntary code-of-conduct adherence — useful for procurement signalling and AI literacy obligations under Article 4 — but nothing the regulation forces.

Who needs this checker

SMEs (under 250 staff, under €50M revenue) are the cohort most likely to land in high-risk by accident — typically through an off-the-shelf hiring tool or credit-decisioning component bolted into a wider application. The checker gives an SME founder or general counsel a defensible classification document in five minutes, which is often the trigger to schedule the next conversation with counsel.

Corporate enterprises with mature compliance functions usually have a classification view already, but the checker is useful as a sanity-check for individual systems inside a larger AI inventory. The "evidence" panel in the result surfaces why the classification was reached and is structured exactly as it would appear in an audit binder — the most common request from internal audit teams reviewing first-line classification work.

Public authorities and government bodies have the strictest regime. Article 27 makes the fundamental rights impact assessment mandatory before first use of any high-risk system, and several Member States have layered national requirements on top. Public-sector users typically run the checker as the first step of a broader procurement-time assessment.

Auditors, consultants, and law firms can embed the checker on their own sites via a one-line iframe (?embed=1 strips the Knowlee chrome). Several auditing partners use this to scope the first conversation with a new client — five minutes of self-service replaces the first 30 minutes of any qualifying call.
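A minimal embed looks like this (the src host is a placeholder; substitute the checker's real address):

    <!-- Placeholder URL: substitute the checker's real address. -->
    <iframe
      src="https://your-host.example/ai-act-checker?embed=1"
      width="100%" height="720" style="border:0"
      title="AI Act Compliance Checker"></iframe>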

Frequently asked questions

Is this legal advice?

No. The checker provides a first-pass classification under the EU AI Act 2024/1689 and is explicitly framed as a screening tool. The disclaimer banner appears at every step. For any actual deployment, you must consult counsel — ideally one familiar with both the EU AI Act and your Member State's national implementation.

What does the checker not do?

Three things, listed transparently in the result screen:

  1. It does not run a full Fundamental Rights Impact Assessment under Article 27 — that requires structured stakeholder analysis with affected parties.
  2. It does not analyse your specific dataset, model behaviour, or technical architecture — it works only from the seven structured inputs you provide.
  3. It does not replace formal conformity assessment under Article 43 — that is a notified-body or internal-control procedure depending on the Annex III category.

How is the classification logic encoded?

The classification function is a deterministic TypeScript function exposed in the source. Article 5 prohibitions (Q1-driven), Article 6 + Annex III high-risk (Q1 × Q3), Article 50 transparency (Q4-driven), and minimal-risk (residual) — in that priority order. The function is unit-testable; we ship inline test cases that any auditor can verify. There is no LLM involved in the classification step. The same answers always produce the same result.
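A simplified sketch of what such a function could look like, reusing the illustrative names from earlier (the shipped logic has more branches; this only shows the priority order):

    // Simplified deterministic classifier in the priority order described
    // above. Reuses the illustrative RiskClass and CheckerAnswers shapes.
    function classify(a: CheckerAnswers): RiskClass {
      // 1. Article 5: prohibited practices (Q1-driven).
      if (a.q1SystemCategory === "realtime-biometric-id") return "prohibited";

      // 2. Article 6 + Annex III: high-risk (Q1 x Q3).
      const annexIII: string[] = [
        "biometrics", "critical-infrastructure", "education", "employment-hr",
        "essential-services", "law-enforcement", "migration", "justice",
      ];
      if (annexIII.includes(a.q1SystemCategory) && a.q3DecisionImpact === "consequential") {
        return "high-risk";
      }

      // 3. Article 50: limited-risk transparency (Q4-driven).
      if (a.q4SubjectInteraction !== "none") return "limited-risk";

      // 4. Residual: minimal-risk.
      return "minimal-risk";
    }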

Can I see the answers behind the result?

Yes. The result screen shows an "evidence" panel that surfaces which question answer triggered which classification — for example, "Your Q1 (employment-hr) combined with Q3 (consequential) triggers high-risk classification under Annex III §4." This is intended to be lifted into your audit documentation as-is.
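An entry in that panel could be represented like this (the shape is an assumption for illustration, not the tool's actual schema):

    // Illustrative evidence entry matching the example above.
    interface EvidenceEntry {
      question: string;      // "Q1"
      answer: string;        // "employment-hr"
      rule: string;          // "Annex III §4"
      conclusion: RiskClass; // "high-risk"
    }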

How does this differ from compliance suites like Vanta, OneTrust, or Drata?

Those are continuous compliance management platforms — they track controls, evidence, and audit cycles across multiple frameworks (SOC 2, ISO 27001, GDPR, AI Act, etc.). This checker is a classification entry point: it tells you what you are, before you start tracking it. The two compose well — once you know your AI system is high-risk, you can populate the corresponding control set in any of those platforms with confidence. We are not competing.

Where does the data go?

The seven answers stay in your browser unless you submit the lead-capture form to receive the PDF report. If you do submit, the answers and the resulting classification are stored in our Supabase backend behind RLS, used to generate your PDF, and tied to your email for the (occasional) follow-up. Privacy policy and unsubscribe link in every email. The classification logic itself runs entirely client-side — no LLM call, no server round-trip required to see your result.
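For the post-submission flow, a hedged sketch using the supabase-js client (table and column names are assumptions; the real backend schema may differ):

    // Illustrative only: runs after the lead-capture form is submitted.
    // Table and column names are assumptions, not the real schema.
    import { createClient } from "@supabase/supabase-js";

    const supabase = createClient("https://YOUR-PROJECT.supabase.co", "PUBLIC_ANON_KEY");

    async function submitForPdf(email: string, answers: CheckerAnswers, result: RiskClass) {
      // Classification has already happened client-side; this call only
      // stores the answers and result so the PDF can be generated and emailed.
      const { error } = await supabase
        .from("checker_submissions")
        .insert({ email, answers, result });
      if (error) throw error;
    }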


Try the tool

Open the AI Act Compliance Checker →

Five minutes. Seven questions. Your risk classification under Article 6 and Annex III, plus the article-by-article obligation checklist, with the evidence behind every line.

If you want to go deeper after you see your classification, the EU AI Act business guide is the next read, and the AI Compliance Checklist 2026 is the implementation companion. For internal-controls maturity, the 25-question AI Act Readiness Assessment is the next step up.