AI Compliance 30-Minute Self-Paced Course (2026)

A practitioner-built crash course for product leads, compliance officers, and operators who need to understand AI compliance in one coffee. Six lessons, five minutes each, no fluff. Sources cited inline so you can verify every claim against the official text.

By the end, you will be able to classify an AI system under the EU AI Act, map its controls to ISO/IEC 42001 and SOC 2, draft a vendor due-diligence checklist, and prove you meet Article 4 AI literacy obligations with an audit trail.

Time budget: 30 minutes total. Each lesson is self-contained. Skip any you already know.


Lesson 1 — AI Act Risk Classification: Walking Through Annex III

The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024. It is structured around four risk tiers, and the entire compliance posture of your system collapses into one question: which tier does your use case fall into?

The four tiers:

  1. Unacceptable risk (Article 5) — banned outright. Social scoring by public authorities, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), emotion recognition in workplaces and schools, untargeted scraping of facial images. Prohibitions applied from 2 February 2025.
  2. High risk (Articles 6–7, Annex III) — permitted but heavily regulated.
  3. Limited risk (Article 50) — transparency obligations only. Users must be told they are interacting with an AI system, and deepfakes must be labelled.
  4. Minimal risk — no legal obligations beyond voluntary codes of conduct.

Annex III, the high-risk catalogue you must memorise:

  • Biometrics — remote biometric identification, biometric categorisation, emotion recognition (where not banned).
  • Critical infrastructure — safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity.
  • Education and vocational training — admissions, assessment of learning outcomes, monitoring student behaviour, evaluating appropriate education level.
  • Employment and worker management — recruitment (CV screening, interview scoring), promotion and termination decisions, task allocation, performance monitoring.
  • Access to essential private and public services — public benefits eligibility, credit scoring (excluding fraud detection), risk assessment in life and health insurance, emergency call dispatching.
  • Law enforcement — risk profiling, evidence reliability assessment, predictive policing.
  • Migration, asylum, border control — visa risk assessment, lie detection, document authenticity.
  • Administration of justice and democratic processes — judicial decision support, election influence systems.

The Article 6(3) exception (your most useful clause): an Annex III system is not high-risk if it performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns or deviations from them without replacing the human assessment, or performs preparatory tasks. One hard carve-out: a system that profiles natural persons is always high-risk. You must document the assessment and register the system in the EU database.
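
A minimal sketch of that test as code, useful as a coarse screen before the documented legal assessment (the flag names are ours, not the Act's):

```python
def article_6_3_exception_applies(
    narrow_procedural_task: bool,
    improves_completed_human_activity: bool,
    detects_patterns_without_replacing_review: bool,
    preparatory_task_only: bool,
    profiles_natural_persons: bool,
) -> bool:
    """Coarse screen for the Article 6(3) derogation.

    True only if at least one derogation condition holds and the system
    performs no profiling of natural persons. A True result still needs
    a documented assessment and EU database registration.
    """
    if profiles_natural_persons:
        return False  # profiling keeps an Annex III system high-risk
    return any([
        narrow_procedural_task,
        improves_completed_human_activity,
        detects_patterns_without_replacing_review,
        preparatory_task_only,
    ])
```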

General-Purpose AI models (Articles 51–55) are a parallel track. All GPAI providers owe Article 53 duties (technical documentation, a copyright policy, a training-data summary); models trained with more than 10^25 cumulative FLOPs are additionally presumed to carry systemic risk and must notify the AI Office, run model evaluations, track serious incidents, and ensure cybersecurity. Obligations applied from 2 August 2025.
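
To see where the 10^25 line sits, here is a back-of-envelope check using the common 6 × parameters × training-tokens approximation for cumulative FLOPs (the approximation is a community rule of thumb, not the Act's counting method):

```python
SYSTEMIC_RISK_FLOPS = 1e25  # Article 51(2) presumption threshold

def estimate_training_flops(params: float, tokens: float) -> float:
    # Rule of thumb: roughly 6 FLOPs per parameter per training token.
    return 6 * params * tokens

# Example: a 70B-parameter model trained on 15T tokens.
flops = estimate_training_flops(70e9, 15e12)        # ~6.3e24
print(f"{flops:.2e}", flops > SYSTEMIC_RISK_FLOPS)  # 6.30e+24 False
```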

High-risk timeline: providers of high-risk systems listed in Annex III must comply by 2 August 2026; systems regulated under existing product safety law (Annex I) get until 2 August 2027.

Practical exercise (60 seconds): for your current AI feature, write down (a) the Annex III category it touches, (b) whether Article 6(3) applies, (c) whether you are a provider or deployer, (d) the date the obligation kicks in. If you cannot answer in one sentence each, you do not yet have a defensible classification.
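
One way to capture the four answers as a single structured record (field names are illustrative, not regulatory terms):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ClassificationRecord:
    system_name: str
    annex_iii_category: str | None  # (a) None if no Annex III touchpoint
    article_6_3_applies: bool       # (b) backed by a documented assessment
    operator_role: str              # (c) "provider" or "deployer"
    obligation_date: date           # (d) when the obligation kicks in
    reasoning: str                  # one sentence each, per the exercise

record = ClassificationRecord(
    system_name="resume-screener",
    annex_iii_category="Employment and worker management",
    article_6_3_applies=False,
    operator_role="deployer",
    obligation_date=date(2026, 8, 2),
    reasoning="Scores CVs that influence hiring; no derogation condition holds.",
)
```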

Next read: AI Act Compliance Software Guide for the operational layer that turns this classification into shippable controls.


Lesson 2 — ISO/IEC 42001 Management System Overview

ISO/IEC 42001:2023 ("Information technology — Artificial intelligence — Management system") is the world's first certifiable AI management system standard, published in December 2023. Where the AI Act is the law, 42001 is the how — the operational backbone you build to demonstrate compliance.

The structure mirrors ISO 27001. If you have an ISMS, you already understand the shape: Plan-Do-Check-Act, clauses 4 through 10, plus a normative Annex A of controls.

Clauses 4–10 (the management system):

  • Clause 4 — Context. Identify internal and external issues, interested parties (regulators, customers, affected persons), and the scope of the AIMS.
  • Clause 5 — Leadership. Top-management commitment, an AI policy, defined roles and responsibilities including an accountable AI owner.
  • Clause 6 — Planning. AI risk assessment and treatment (with a Statement of Applicability against Annex A), AI system impact assessment on individuals and groups, AI objectives.
  • Clause 7 — Support. Resources, competence, awareness, documented information.
  • Clause 8 — Operation. Operational planning and control, AI system impact assessment, AI system lifecycle.
  • Clause 9 — Performance evaluation. Monitoring, measurement, internal audit, management review.
  • Clause 10 — Improvement. Nonconformity, corrective action, continual improvement.

Annex A — 38 controls across nine objectives: policies for AI (A.2), internal organisation (A.3), resources for AI systems (A.4), assessing impacts (A.5), AI system lifecycle (A.6), data for AI systems (A.7), information for interested parties (A.8), use of AI systems (A.9), third-party and customer relationships (A.10).

Annex B is the implementation guidance — the how-to for each Annex A control. It is the fastest path to a defensible control narrative.

Annex C lists organisational AI objectives and risk sources you can adopt verbatim: fairness, accountability, transparency, security, privacy, robustness, safety, environmental impact.

Annex D maps 42001 across application domains — useful when you serve healthcare and finance from the same platform.

Why certify? ISO 42001 certification is the cheapest credible signal in enterprise procurement. It compresses six months of vendor questionnaires into one badge. The AI Act explicitly anticipates harmonised standards (Article 40); 42001 is on the inevitable shortlist of presumed conformity routes.

Effort estimate: for an organisation with mature ISO 27001, expect 3–4 months to add 42001 on top. Greenfield, plan 6–9 months.

Next read: ISO 42001 Checklist for AI Management for the clause-by-clause evidence list.


Lesson 3 — SOC 2 Trust Services Criteria + AI-Specific Evidence

SOC 2 is the dominant North-American assurance standard, governed by the AICPA's Trust Services Criteria (TSC, 2017 with 2022 points-of-focus revision). It is not a certification — it is an attestation report by a licensed CPA firm covering one or more of five categories.

The five Trust Services Criteria:

  • Security (mandatory in every SOC 2). The Common Criteria (CC1–CC9) cover control environment, communication, risk assessment, monitoring, control activities, logical and physical access, system operations, change management, risk mitigation.
  • Availability — system uptime, capacity, recovery.
  • Processing Integrity — processing is complete, valid, accurate, timely, and authorised.
  • Confidentiality — designated confidential information is protected.
  • Privacy — personal information is collected, used, retained, disclosed, and disposed of in conformity with the entity's commitments.

Type 1 vs Type 2. Type 1 attests that controls are suitably designed at a point in time. Type 2 attests they are operating effectively over a period (typically 6–12 months). Enterprise buyers want Type 2.

Where AI shows up. SOC 2 does not have AI-specific criteria, but in 2026 every auditor will ask about AI controls under the existing ones. The AICPA's 2024 guidance on SOC 2 reporting for controls relevant to AI describes how to scope AI within the TSC.

The AI-specific evidence enterprise buyers expect inside a SOC 2 Type 2:

  • CC6 (Logical access) — how prompt injection and model-jailbreak risks are mitigated; segregation between training data, model artefacts, and inference paths.
  • CC7 (System operations) — model drift monitoring, hallucination rate tracking, incident response runbooks specific to AI failures.
  • CC8 (Change management) — model version control, prompt versioning, evaluation gates before deployment.
  • CC9 (Risk mitigation) — vendor risk for foundation-model providers, data-residency for training and inference.
  • Processing Integrity — output validation, citation grounding, refusal handling for out-of-scope queries.
  • Confidentiality — zero-retention contracts with model providers, customer data isolation in fine-tuning.

Cross-walk with ISO 42001. A single control narrative can satisfy both. Build the evidence once, present it twice. Practical mappings: 42001 A.7 (data management) ↔ SOC 2 CC6 + Confidentiality; 42001 A.6 (lifecycle) ↔ CC8; 42001 A.5 (impact assessment) ↔ CC3 risk assessment.
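
The same mapping as a reusable lookup, a sketch you would extend with your own control inventory:

```python
# ISO/IEC 42001 Annex A area -> SOC 2 criteria the same evidence satisfies
CROSSWALK: dict[str, list[str]] = {
    "A.7 Data for AI systems": ["CC6", "Confidentiality"],
    "A.6 AI system lifecycle": ["CC8"],
    "A.5 Assessing impacts":   ["CC3"],
}

def soc2_targets(iso_control: str) -> list[str]:
    # SOC 2 criteria a given 42001 evidence artefact also covers.
    return CROSSWALK.get(iso_control, [])

print(soc2_targets("A.7 Data for AI systems"))  # ['CC6', 'Confidentiality']
```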

Effort estimate: SOC 2 Type 1 in 8–12 weeks once controls are documented; Type 2 requires the observation period (6–12 months) on top.

Next read: SOC 2 Type 2 for AI Companies 2026 for the full control-by-control evidence map.


Lesson 4 — AI Literacy: Article 4 Operator Obligations

Article 4 of the EU AI Act applied from 2 February 2025 and is the most under-prepared obligation in the entire regulation. It is short, sharp, and applies to every provider and deployer regardless of risk tier.

The text (paraphrased for brevity, verify the official version): providers and deployers shall take measures to ensure, to their best extent, a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf, considering their technical knowledge, experience, education, training, and the context in which the AI systems are used, and the persons or groups on which the AI systems are to be used.

What "sufficient" means in practice. The European AI Office published guidance in early 2025 clarifying that literacy is role-proportional. A board member needs different content than a prompt engineer or a customer-facing support agent.

Four audiences, four curricula:

  1. Executive leadership — risk landscape, liability exposure, strategic implications, governance accountability. ~2 hours.
  2. Technical staff (developers, data scientists, MLOps) — model lifecycle, evaluation, red-teaming, secure deployment, ISO 42001 controls. ~8 hours.
  3. Operational users (sales, marketing, support, HR using AI tools) — capabilities and limits, hallucination awareness, when to escalate, data-handling rules. ~3 hours.
  4. Affected persons interface (customer service, HR partners) — explainability, complaint handling, Article 86 right-to-explanation for high-risk decisions. ~4 hours.

The audit trail Article 4 demands:

  • A documented AI literacy policy with role definitions and required competencies.
  • Training records — who completed what, when, and the assessment score (a minimal record shape follows this list).
  • Refresher cadence — typically annual, plus event-driven (new model, new use case, new regulation).
  • Effectiveness measurement — not attendance, but post-training competence checks.
  • Vendor literacy — if a third party operates AI on your behalf, you must assess their literacy too.
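
A minimal shape for those training-record entries (field names are our illustration; the Act prescribes no format):

```python
training_record = {
    "person": "a.ndiaye",
    "role": "operational user",                    # audience 3 above
    "course": "AI literacy for operational users", # who completed what
    "completed_at": "2026-03-14",                  # when
    "assessment_score": 0.92,                      # competence check, not attendance
    "next_refresh": "2027-03-14",                  # annual plus event-driven
}

REQUIRED = {"person", "role", "course", "completed_at", "assessment_score"}
assert REQUIRED <= training_record.keys()  # reject incomplete records
```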

Common pitfalls. A one-hour all-hands is not Article 4 compliance. A LinkedIn-Learning subscription is not Article 4 compliance. A signed acknowledgment that the employee "is aware AI exists" is, generously, theatre.

The defensible minimum: role-mapped curriculum, completion tracking in your LMS, post-training assessment, annual refresh, board-level metric reported quarterly.

Penalties. Article 4 carries no dedicated fine tier in Article 99, but literacy gaps surface in enforcement of the operator obligations that do, where fines reach €15 million or 3% of global annual turnover, whichever is higher.

Next read: AI Literacy Article 4 Enterprise Guide for the role-mapped curriculum and assessment templates.


Lesson 5 — AI Vendor Due Diligence Checklist

By 2026 most enterprises run between 30 and 200 AI features, and the majority of those are powered by third-party models, embeddings, or agent platforms. Vendor due diligence is where compliance is won or lost — your liability does not transfer when a sub-processor mishandles training data.

The seven-block checklist (use it verbatim in procurement workflows):

Block 1 — Regulatory posture.

  • AI Act self-classification of the vendor (provider, deployer, or both).
  • Confirmation of Article 6(3) assessments for any Annex III adjacency.
  • Conformity-assessment status and CE marking pathway for high-risk components.
  • For GPAI providers: copyright policy, training-data summary (Article 53), systemic-risk notification status.

Block 2 — Standards and attestations.

  • ISO/IEC 42001 certification (or roadmap with target date and auditor).
  • SOC 2 Type 2 with AI-relevant scope; ISO/IEC 27001; ISO/IEC 27701 for privacy.
  • ISO/IEC 23894 (AI risk management guidance) alignment evidence.
  • NIST AI Risk Management Framework (AI RMF 1.0) mapping for US-touching deployments.

Block 3 — Data governance.

  • Training-data provenance — opt-in, licensed, scraped, synthetic.
  • Customer-data retention policy at the inference layer (zero-retention should be the default).
  • Geographic processing — data residency, sub-processor list, EU-US Data Privacy Framework status.
  • Right to deletion and how it propagates to fine-tuned weights.

Block 4 — Security.

  • Prompt-injection and jailbreak testing cadence; red-team report excerpts.
  • Tenant isolation in multi-tenant inference.
  • Bring-your-own-key and customer-managed encryption support.
  • Vulnerability disclosure programme and recent CVE history.

Block 5 — Model behaviour.

  • Evaluation results on hallucination rate, bias benchmarks (e.g. BBQ, BOLD), and task-specific accuracy.
  • Watermarking and provenance signals (C2PA, SynthID) for generative outputs.
  • Refusal handling and content-policy disclosure.
  • Model card or system card per major release.

Block 6 — Incident response.

  • Article 73 serious-incident reporting capability (high-risk providers must notify within 15 days).
  • AI-specific incident runbooks (model corruption, prompt-injection breach, output liability).
  • Insurance coverage for AI-induced harm.

Block 7 — Contractual.

  • IP indemnity for outputs.
  • Audit rights (on-site or via SOC 2 + 42001 evidence packets).
  • Sub-processor change notification window.
  • Termination rights tied to compliance posture changes.

Scoring rubric. For each block, score 0 (absent), 1 (claimed but unverified), 2 (documented), 3 (independently attested). A vendor below 14/21 should not power a high-risk use case.
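
The rubric as code, a sketch using the block names from the checklist and the 14/21 gate from the text:

```python
BLOCKS = [
    "regulatory posture", "standards and attestations", "data governance",
    "security", "model behaviour", "incident response", "contractual",
]
# Per block: 0 absent, 1 claimed but unverified, 2 documented, 3 independently attested.

def fit_for_high_risk(scores: dict[str, int]) -> bool:
    assert set(scores) == set(BLOCKS), "score every block"
    assert all(0 <= s <= 3 for s in scores.values())
    return sum(scores.values()) >= 14  # below 14/21: no high-risk use case

documented_everywhere = dict.fromkeys(BLOCKS, 2)  # 14/21, just at the gate
print(fit_for_high_risk(documented_everywhere))   # True
```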

Reuse. Build the questionnaire once, store the responses in a vendor risk graph, refresh annually. The same evidence underpins your own SOC 2 and 42001 audits.

Next read: AI Compliance Checklist 2026 for the full operator checklist this vendor block plugs into.


Lesson 6 — Building the Audit Trail (the Knowlee 4Legals Approach)

The previous five lessons describe what to do. Lesson 6 is about how to prove you did it — at any moment, to any auditor, without a fire drill. The audit trail is the deliverable; everything else is preparation.

The seven layers of an AI audit trail:

  1. Inventory — every AI system, feature, and model in production with its owner, classification, vendor, and lifecycle stage. Single source of truth, not a spreadsheet.
  2. Classification record — for each system, the AI Act tier, Article 6(3) reasoning, GPAI status, NIST AI RMF profile, and the date of last review.
  3. Impact assessments — Article 27 fundamental-rights impact assessment for high-risk deployers, plus 42001 Clause 6 AI risk assessment, plus DPIA where personal data is in scope.
  4. Control evidence — for each control (42001 Annex A, SOC 2 TSC, internal policies), the artefact that proves it operated: log excerpts, screenshots, signed reviews, training completion records.
  5. Decision log — every model approval, every deployment gate, every Article 6(3) determination, every vendor onboarding. Timestamped, attributed, immutable (a minimal immutability sketch follows this list).
  6. Incident log — Article 73 serious incidents, near-misses, customer complaints, output corrections, with root-cause analysis.
  7. Literacy log — Article 4 training records, role mapping, refresh dates, effectiveness scores.
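
One minimal way to get the "timestamped, attributed, immutable" property of the decision log is a hash chain, where each entry commits to its predecessor so a silent edit breaks every later hash (a sketch of the property, not Knowlee's implementation):

```python
import hashlib
import json
from datetime import datetime, timezone

def append_decision(log: list[dict], actor: str, decision: str) -> list[dict]:
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),   # timestamped
        "actor": actor,                                 # attributed
        "decision": decision,
        "prev": log[-1]["hash"] if log else "genesis",  # chain link
    }
    # The hash covers the entry plus the previous hash: editing an
    # earlier record invalidates the chain from that point onward.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return log

log = append_decision([], "j.doe",
                      "Approved Article 6(3) derogation for resume-screener")
```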

Why most teams fail. They store evidence across Notion, Drive, Jira, Slack, and three SaaS GRC tools. When the auditor arrives, an analyst spends six weeks stitching it together. By then the controls have drifted.

The Knowlee 4Legals pattern. We treat the audit trail as a graph, not a folder. Every AI system, control, evidence artefact, person, and decision is a node; every relationship is typed. The graph is fed continuously by the operational systems that generate evidence (CI/CD, identity, LMS, ticketing, model registry) so the trail is current rather than reconstructed. When an auditor asks "show me the Article 6(3) reasoning for the recruitment screener and the literacy training of every person who touched it in 2026," the answer is a single Cypher query.
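
For illustration, here is what such a query could look like through the official Neo4j Python driver; the node labels, relationship types, and properties are hypothetical, not Knowlee's actual schema:

```python
from neo4j import GraphDatabase  # official Neo4j Python driver

# Hypothetical schema: (:AISystem)-[:HAS_DECISION]->(:Decision),
# (:Person)-[:TOUCHED]->(:AISystem), (:Person)-[:COMPLETED]->(:Training)
CYPHER = """
MATCH (s:AISystem {name: $system})-[:HAS_DECISION]->(d:Decision {type: 'article_6_3'})
OPTIONAL MATCH (p:Person)-[:TOUCHED]->(s), (p)-[:COMPLETED]->(t:Training)
WHERE t.completed_at >= date('2026-01-01')
RETURN d.reasoning AS article_6_3_reasoning,
       collect({person: p.name, course: t.course, score: t.score}) AS literacy
"""

def evidence_packet(uri: str, user: str, password: str, system: str) -> list[dict]:
    with GraphDatabase.driver(uri, auth=(user, password)) as driver:
        with driver.session() as session:
            return session.run(CYPHER, system=system).data()
```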

What this gives you:

  • Continuous compliance. Drift is detected the moment an evidence node goes stale.
  • Reusable narratives. The same control evidence answers AI Act, ISO 42001, SOC 2, NIST AI RMF, and customer security questionnaires.
  • Defensible automation. When AI agents themselves perform compliance work (e.g. drafting impact assessments), every action lands in the same trail with full reasoning capture.
  • Auditor-ready in minutes. The evidence packet is a query, not a project.

Where to start tomorrow morning:

  1. Inventory every AI system on a single page. If you cannot name them all, you do not yet have a programme.
  2. Pick one high-risk candidate and complete its full classification record this week.
  3. Run an Article 4 gap analysis on three roles: an executive, an engineer, an operational user.
  4. Score one critical vendor against the Lesson 5 rubric.
  5. Choose one control area (e.g. change management for prompts) and document the evidence flow end to end.

Five tasks, one week, and you have a credible programme nucleus. Everything else compounds from there.

Next read: AI Security Compliance Framework 2026 for the cross-walk between AI compliance and the security stack the audit trail rides on.


Course Completion

You have covered the four risk tiers and Annex III, the ISO/IEC 42001 management system, SOC 2 with AI-specific evidence, Article 4 literacy obligations, vendor due diligence, and the audit-trail architecture that ties them together.

Certificate of Completion (stub)

This certifies that [Your Name] completed the AI Compliance 30-Minute Self-Paced Course (2026) on [Date], covering EU AI Act risk classification, ISO/IEC 42001, SOC 2 Trust Services Criteria, Article 4 AI literacy, vendor due diligence, and audit-trail architecture.

Issued by Knowlee · v2026.04 · Verifiable via the lead-capture portal.

To download a personalised certificate plus the full materials pack — printable Annex III decision tree, ISO 42001 Annex A control workbook, SOC 2 AI-evidence cross-walk spreadsheet, Article 4 role-mapped curriculum, and the seven-block vendor questionnaire — request the lead-capture pack below.


Sources: Regulation (EU) 2024/1689 (AI Act, OJ L of 12 July 2024); ISO/IEC 42001:2023; AICPA Trust Services Criteria (2017, revised 2022) and 2024 SOC 2 AI reporting guidance; NIST AI Risk Management Framework 1.0 (January 2023); European AI Office Article 4 guidance (2025). Verify obligations against the official texts before relying on this summary for legal decisions.