AI Security Compliance Framework 2026: ISO 27001 × ISO 42001 × SOC 2 × AI Act Cross-Walk
Last updated: April 2026 · Category: AI Compliance · Author: Knowlee Team
AI security compliance in 2026 is no longer a single-framework conversation. The buyer who used to ask "are you SOC 2?" now sends a procurement questionnaire that references at least four standards: ISO/IEC 27001 for information security management, ISO/IEC 42001 for AI management systems, SOC 2 (AICPA Trust Services Criteria) for service-organization controls, and Regulation (EU) 2024/1689 — the EU AI Act — for any AI system that touches the European market.
The frameworks were not designed in isolation. ISO/IEC 42001:2023, published December 2023, was deliberately drafted as a sibling standard to ISO/IEC 27001 — it reuses the Annex SL high-level structure, and Annex D of 42001 maps controls to ISO 27001 and ISO 27701. The European Commission has stated that harmonised standards under the AI Act will lean heavily on ISO/IEC 42001 and ISO/IEC 23894 (AI risk management). The AICPA released supplemental SOC 2 guidance in 2024 that explicitly addresses AI system risks under the existing trust criteria. The frameworks are converging by design — but they are not the same framework, and treating them as if they were is the most expensive mistake a compliance program can make in 2026.
This guide is the cross-walk. It maps where the four overlap, where they don't, and the audit calendar that gets a vendor through all four without paying for the same evidence three times. Sources: ISO/IEC 27001:2022, ISO/IEC 42001:2023 (with Annex D mapping), AICPA Trust Services Criteria 2017 (revised 2022) plus 2024 AI considerations, and Regulation (EU) 2024/1689 official text. As of April 2026, ISO 42001 certification bodies are accredited under UKAS, ANAB, and ACCREDIA among others; AI Act high-risk obligations begin applying 2 August 2026 (Article 113).
The four frameworks at a glance
Before the cross-walk, the one-line scope of each. Conflating them — the "we already have SOC 2 so we don't need 42001" reflex — is the failure mode this section is meant to prevent.
ISO/IEC 27001:2022 — Information Security Management System (ISMS). Scope: confidentiality, integrity, availability of information assets across the entire organization. Target: any organization handling sensitive information. Certifying authority: accredited certification bodies under IAF MLA (UKAS, ANAB, ACCREDIA, DAkkS, etc.). Audit cycle: 3-year certification with annual surveillance audits. Who needs it: nearly every B2B vendor selling outside the United States; mandatory for many EU public-sector and financial-services tenders. The 2022 revision restructured Annex A from 114 controls into 93 controls grouped under four themes (organizational, people, physical, technological). It is the foundation every other framework on this list either references or assumes.
ISO/IEC 42001:2023 — AI Management System (AIMS). Scope: governance of AI systems across their lifecycle — from intent and design through deployment, monitoring, and retirement. Target: any organization developing, providing, or using AI systems. Certifying authority: same accredited certification bodies as 27001 (the schemes were released in 2024). Audit cycle: 3-year certification with annual surveillance, mirroring 27001. Who needs it: AI vendors who want a certifiable answer to "how do you govern your AI?" — increasingly demanded by enterprise procurement, and the most likely candidate for "presumption of conformity" with AI Act obligations once harmonised standards are published. Annex A of 42001 contains 38 AI-specific controls; Annex B is implementation guidance; Annex D is the explicit mapping to ISO 27001 and ISO 27701.
SOC 2 — Service Organization Controls 2 (AICPA). Scope: controls relevant to one or more of five trust criteria — Security (mandatory), Availability, Processing Integrity, Confidentiality, Privacy. Target: US-headquartered SaaS and service organizations, increasingly recognized globally as a procurement gate. Certifying authority: AICPA-licensed CPA firms; reports are attestations (Type 1 = point-in-time, Type 2 = observation period typically 6–12 months). No certification body in the ISO sense. Who needs it: any SaaS selling into US enterprise; the de facto procurement floor for North American buyers. The 2017 criteria (revised 2022) plus the AICPA's 2024 AI considerations document are the current authoritative texts.
Regulation (EU) 2024/1689 — the EU AI Act. Scope: AI systems placed on the EU market, classified by risk into prohibited (Article 5), high-risk (Article 6 + Annex III), limited-risk (transparency obligations under Article 50), and minimal-risk. Plus a separate regime for general-purpose AI models (Chapter V). Target: providers, deployers, importers, and distributors of AI systems in or into the EU. Authority: national market surveillance authorities under coordination of the AI Office (DG CNECT) and the European AI Board. Conformity route: self-assessment for most high-risk systems with internal control (Annex VI), notified-body assessment for biometric and certain other systems (Annex VII). Who needs it: any organization whose AI system reaches an EU user — extraterritorial scope under Article 2. Key dates: 2 August 2026 = high-risk obligations begin applying; 2 August 2027 for high-risk systems classified under Article 6(1), i.e. those covered by the Annex I Union harmonisation legislation.
The frameworks differ in one fundamental way that the cross-walk has to respect: ISO and SOC 2 are voluntary standards adopted because the market demands them. The AI Act is law. Non-compliance with the AI Act carries administrative fines up to EUR 35 million or 7% of total worldwide annual turnover (Article 99) — penalties that exceed GDPR. You can choose not to pursue ISO 42001. You cannot choose not to comply with the AI Act if your system reaches the EU.
The cross-walk matrix
The matrix below maps fifteen control areas across the four frameworks. "Prime" means the framework treats the area as a primary, named obligation. "Inherited" means it is covered through another framework's controls referenced in scope. "Implicit" means an auditor will expect to see it under a broader control family but it is not separately named. "Conditional" means it applies only under certain triggers (typically high-risk classification under the AI Act, or a specific trust criterion in SOC 2).
| Control area | ISO 27001:2022 | ISO 42001:2023 | SOC 2 (TSC 2017 rev. 2022) | EU AI Act |
|---|---|---|---|---|
| Information security policy | Prime (Cl. 5.2, A.5.1) | Inherited via 27001 | Prime (CC1, CC2) | Implicit (Art. 9 risk management) |
| Risk management process | Prime (Cl. 6.1, A.5.7) | Prime (Cl. 6.1, A.5–A.6) | Prime (CC3.1–CC3.4) | Prime, AI-specific (Art. 9 high-risk) |
| AI-specific impact assessment | Not addressed | Prime (Cl. 6.1.4, A.5.4) | Not addressed | Prime (Art. 27 FRIA for high-risk) |
| Access control & identity | Prime (A.5.15–A.5.18, A.8.2–A.8.5) | Inherited | Prime (CC6.1–CC6.8) | Conditional (Art. 15 high-risk) |
| Cryptography | Prime (A.8.24) | Inherited | Implicit (CC6.1, CC6.7) | Conditional (Art. 15 robustness) |
| Logging & monitoring | Prime (A.8.15, A.8.16) | Inherited + AI-specific (A.6.2.8) | Prime (CC7.2) | Prime (Art. 12 record-keeping) |
| Incident response | Prime (A.5.24–A.5.28) | Prime + AI-specific (A.10.4) | Prime (CC7.3, CC7.4) | Prime (Art. 73 serious incident reporting) |
| Change management | Prime (A.8.32) | Prime (A.6.2.5 lifecycle) | Prime (CC8.1) | Conditional (Art. 16, 43 substantial modification) |
| Model versioning & lineage | Implicit (A.8.32) | Prime (A.6.2.5, A.7.4) | Implicit (CC8.1) | Conditional (Art. 11, Annex IV technical documentation) |
| Training data governance | Implicit (A.5.12) | Prime (A.7.2–A.7.6) | Conditional (Confidentiality, Privacy TSC) | Prime (Art. 10 high-risk) |
| Bias / fairness testing | Not addressed | Prime (A.6.2.4, A.7.5) | Not addressed (unless under Processing Integrity) | Prime (Art. 10, 15 high-risk) |
| Human oversight | Not addressed | Prime (A.9.3) | Implicit (CC2 governance) | Prime (Art. 14 high-risk) |
| Transparency to users | Not addressed | Prime (A.8.2–A.8.5) | Conditional (Privacy TSC) | Prime (Art. 13 high-risk, Art. 50 limited-risk) |
| Conformity assessment / external audit | Required (3-year cycle) | Required (3-year cycle) | Required (annual Type 2) | Required for high-risk (Annex VI/VII) |
| Public registration | Not required | Not required | Not required | Required (Art. 49 EU database for high-risk) |
Three patterns to extract from the matrix:
First, the foundation is shared. Information security policy, risk management, access control, cryptography, logging, incident response, change management — every framework requires these. An organization with a mature ISO 27001 ISMS is not starting from zero in any of the other three; it is starting at roughly 60% complete for the security-control surface of 42001 and SOC 2, and already well along on the access-control and logging requirements of AI Act Articles 12 and 15.
Second, the AI-specific surface is genuinely additive. Bias testing, AI impact assessment, human oversight, training data governance, transparency obligations — these exist in ISO 42001 and the AI Act, and are mostly absent from ISO 27001 and SOC 2 (except where SOC 2 buyers insist on Processing Integrity or Privacy criteria). An organization with mature SOC 2 + 27001 controls but no AI management system has covered the security half and missed the AI-governance half entirely.
Third, the AI Act has obligations no voluntary framework imposes. Conformity assessment under Annex VI/VII, registration in the EU database under Article 49, serious incident reporting under Article 73 (within 15 days at the latest, with shorter deadlines for the most serious categories), and post-market monitoring under Article 72 are regulatory requirements. ISO 42001 helps you meet them — likely with presumption of conformity once harmonised standards land — but does not replace them.
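The three patterns become easier to act on when the matrix is machine-readable. A minimal sketch below encodes a handful of rows from the table and queries for uncovered control areas; the dict layout, coverage-level strings, and the `gaps` helper are illustrative assumptions, not part of any standard:

```python
# Toy machine-readable subset of the cross-walk matrix above.
# Coverage levels follow the table's vocabulary ("prime", "inherited",
# "implicit", "none"); the encoding itself is an illustrative assumption.
CROSSWALK = {
    "AI impact assessment":    {"27001": "none",  "42001": "prime", "soc2": "none",     "ai_act": "prime"},
    "Bias / fairness testing": {"27001": "none",  "42001": "prime", "soc2": "none",     "ai_act": "prime"},
    "Human oversight":         {"27001": "none",  "42001": "prime", "soc2": "implicit", "ai_act": "prime"},
    "Incident response":       {"27001": "prime", "42001": "prime", "soc2": "prime",    "ai_act": "prime"},
    "Public registration":     {"27001": "none",  "42001": "none",  "soc2": "none",     "ai_act": "prime"},
}

def gaps(held_frameworks):
    """Control areas where no held framework gives prime or inherited coverage."""
    covered = {"prime", "inherited"}
    return sorted(
        area for area, row in CROSSWALK.items()
        if not any(row[f] in covered for f in held_frameworks)
    )

# Pattern two in code: a 27001 + SOC 2 shop still misses the AI-governance half.
print(gaps(["27001", "soc2"]))
# → ['AI impact assessment', 'Bias / fairness testing', 'Human oversight', 'Public registration']
```

The same query with `["42001"]` leaves only the regulatory obligation (public registration) uncovered, which is pattern three in miniature.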
Where the frameworks don't overlap
The matrix shows convergence. The deltas show why all four exist. Understanding the deltas is what stops a compliance team from over-trusting one framework as substitute for another.
SOC 2 trust criteria not in ISO 42001. SOC 2's Privacy criterion (P1.0–P8.0) covers notice, choice, collection, use, retention, access, disclosure, and disposal of personal information at a granularity ISO 42001 does not match. SOC 2's Processing Integrity criterion (PI1.0–PI1.5) addresses completeness, accuracy, timeliness, and authorization of system processing — this is the criterion under which AICPA's 2024 AI guidance places hallucination, bias, and output-quality controls. Availability (A1.0–A1.3) covers performance, capacity, environmental protection, and recovery — ISO 42001 references availability through 27001 inheritance but does not impose SOC 2's specificity. An organization with ISO 42001 alone will fail a SOC 2 audit if Privacy or Processing Integrity is in scope.
AI Act obligations not in any other framework. Three are unique to the regulation. (1) Conformity assessment and CE marking under Articles 43–48: high-risk systems require a conformity assessment procedure (typically Annex VI internal control, Annex VII for biometric and listed AI under Article 6(1)) producing an EU declaration of conformity (Article 47) and CE marking (Article 48). No ISO standard or SOC 2 attestation produces a CE mark. (2) Public registration in the EU database under Article 49: providers and certain deployers of high-risk systems must register before placing on the market — public visibility, no equivalent elsewhere. (3) Serious incident reporting under Article 73: timeline-bound notification to market surveillance authorities (15 days standard; 2 days for widespread infringement or a serious and irreversible disruption of critical infrastructure; 10 days in the event of death, each alongside a duty to report immediately once a causal link is established). ISO 42001 incident response (A.10.4) is internal; Article 73 is regulatory.
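Because the Article 73 deadlines are date arithmetic, they are a natural candidate for a small calculator inside an incident-response runbook. A sketch, where the category keys and helper name are illustrative assumptions rather than official terminology:

```python
from datetime import date, timedelta

# Article 73 outer deadlines (days from establishing the causal link /
# awareness), as summarized in the text. Category labels are informal.
ARTICLE_73_DEADLINES_DAYS = {
    "serious_incident": 15,        # Art. 73(2) general case
    "widespread_infringement": 2,  # Art. 73(3), incl. critical-infrastructure disruption
    "death": 10,                   # Art. 73(4)
}

def reporting_deadline(awareness: date, category: str) -> date:
    """Latest date to notify the market surveillance authority.
    The regulation also requires reporting 'immediately'; this computes
    only the outer bound."""
    return awareness + timedelta(days=ARTICLE_73_DEADLINES_DAYS[category])

print(reporting_deadline(date(2026, 9, 1), "widespread_infringement"))  # 2026-09-03
print(reporting_deadline(date(2026, 9, 1), "serious_incident"))         # 2026-09-16
```

A real implementation would also track the "immediately" obligation and the clock-start event (causal-link establishment), which this sketch deliberately omits.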
ISO 42001 controls not in SOC 2. AI-specific impact assessment (Clause 6.1.4 + Annex A.5.4) is a standalone obligation that survives separately from generic risk assessment — it asks the organization to assess impacts on individuals, groups, and society from the AI system itself, including reasonably foreseeable misuse. SOC 2 has nothing equivalent. The lifecycle controls in Annex A.6 (objectives, resources, design, testing, release, operation, retirement) are an AI-system-development lifecycle requirement; SOC 2 covers change management generically. Bias and fairness testing under A.6.2.4 and A.7.5 has no SOC 2 analogue unless the auditor brings it under Processing Integrity.
ISO 27001 controls not assumed by SOC 2. Physical security (Annex A.7), supplier relationships including ICT supply chain (A.5.19–A.5.23), and threat intelligence (A.5.7) are present in 27001 with depth SOC 2 references at higher levels of abstraction. SOC 2 reports do not certify against a control catalog the way 27001 does.
The honest summary: there is no single framework that covers the full compliance surface a 2026 AI vendor faces. The frameworks were designed with different regulators, different audiences, and different theories of harm. They overlap because security is security and risk is risk; they diverge because privacy attestation, AI governance, and regulatory conformity are genuinely different problems.
The stacking strategy
The cross-walk produces a sequencing decision. Stack the frameworks in this order to minimize duplicate work and maximize evidence reuse.
Layer 1 — ISO 27001 first, as foundation. 27001 is the substrate. Its risk-management process (Clause 6.1), Statement of Applicability (Clause 6.1.3), control implementation across the 93 Annex A controls, internal audit (Clause 9.2), management review (Clause 9.3), and continual improvement (Clause 10.1) give every subsequent framework a documented, auditable base. ISO 42001 explicitly references this in Annex D. SOC 2 auditors accept 27001 evidence as sufficient for many Common Criteria controls. AI Act Article 9 risk management can leverage 27001's risk register.
Layer 2 — ISO 42001 as AI-specific extension of the ISMS. Once 27001 is operational, 42001 is incremental — not net new. The AIMS uses the same Plan-Do-Check-Act loop, the same Annex SL clauses 4–10, and the same management-review cadence. The work that is genuinely new is the 38 controls in Annex A: AI policy (A.2), internal organization (A.3), resources for AI (A.4), AI impact assessment (A.5), AI system lifecycle (A.6), data for AI (A.7), information for interested parties (A.8), use of AI systems (A.9), third-party and customer relationships (A.10). Estimate roughly 6 months of additional work on top of a mature ISO 27001 program for a small-to-mid AI vendor; longer for organizations with multiple AI products.
Layer 3 — SOC 2 in parallel, scoped to procurement. SOC 2 sits beside the ISO programs rather than on top. Many controls are shared (the AICPA's TSP Section 100 mapping shows substantial overlap with ISO 27001), but the deliverable is different: SOC 2 is an attestation report a CPA firm produces, not a certificate. Type 1 covers design at a point in time; Type 2 covers operating effectiveness over an observation period (typically 6 months for first reports, 12 months thereafter). Run the Type 2 observation window concurrently with the second year of the 27001 + 42001 cycles — most of the operating evidence (logs, ticket trails, access reviews, incident records) is the same.
Layer 4 — AI Act conformity, only for EU-market high-risk systems. This is the regulatory layer, not voluntary, and its trigger is system classification under Article 6 + Annex III. If the AI system is high-risk, the conformity-assessment procedure under Article 43 is mandatory before placing on the market: Annex VI internal-control assessment for most high-risk categories, Annex VII (notified body) for biometric and listed AI systems. Once harmonised standards are published in the EU Official Journal — ISO 42001 and ISO 23894 are the leading candidates — Article 40 grants presumption of conformity for systems that follow them. Until then, the conformity assessment must demonstrate compliance to the AI Act articles directly.
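The Layer 4 trigger logic reduces to a routing decision: classification first, then the applicable conformity route. The sketch below compresses Articles 6 and 43 into two booleans; the inputs and returned labels are an illustrative simplification, not legal advice:

```python
# Illustrative routing of the Layer-4 decision described above.
# Real classification under Article 6 + Annex III requires legal analysis;
# this only models the downstream branch once classification is known.
def conformity_route(is_high_risk: bool, needs_notified_body: bool) -> str:
    if not is_high_risk:
        # Still check Article 50 transparency duties for limited-risk systems.
        return "no conformity assessment required"
    if needs_notified_body:  # e.g. certain biometric systems, per Article 43
        return "Annex VII: notified-body assessment"
    return "Annex VI: internal control"

print(conformity_route(True, False))   # Annex VI: internal control
print(conformity_route(True, True))    # Annex VII: notified-body assessment
```

Once harmonised standards are published, following them changes what evidence the Annex VI route must produce (presumption of conformity under Article 40), but not the routing itself.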
Audit calendar — typical 24–36 months for the full stack. A realistic phased timeline for a small-to-mid AI vendor with no current certifications: months 1–6 ISO 27001 implementation and Stage 1 audit, months 6–9 Stage 2 audit and certification. Months 7–12 ISO 42001 gap analysis, control implementation, integration with the existing ISMS. Months 13–18 ISO 42001 Stage 1 + Stage 2 audits, in parallel with starting the SOC 2 Type 1 readiness assessment. Months 13–24 SOC 2 Type 1 then 12-month Type 2 observation window. Months 19–24 AI Act conformity assessment if the system is high-risk: technical documentation under Annex IV, quality management system under Article 17, post-market monitoring plan under Article 72. Faster timelines are possible for organizations with mature security programs already in place; rushing any single audit usually adds rework time downstream.
Audit playbook (months 1–24)
Phased operational checklist that maps onto the calendar above. This is a playbook, not a substitute for an accredited certification body or a CPA firm — engage both early.
Months 1–6 — ISO 27001 preparation and certification. Define ISMS scope (be deliberately narrow; scope creep doubles cost). Conduct gap analysis against ISO 27001:2022 Annex A. Build the risk register; produce the Statement of Applicability (Clause 6.1.3) with justification for every Annex A control included or excluded. Implement controls; produce documented policies covering all Annex A themes. Run internal audit (Clause 9.2) and management review (Clause 9.3). Engage an accredited certification body for Stage 1 (documentation review) and Stage 2 (implementation audit). Certificate issued at end of Stage 2, valid 3 years, surveillance audits annually.
Months 7–12 — ISO 42001 add-on. Inventory AI systems in scope and classify them. Conduct AI risk assessment (Clause 6.1) and AI system impact assessment (Clause 6.1.4) for each. Map existing 27001 controls to 42001 Annex A using Annex D; identify the gaps — typically AI policy (A.2), AI impact assessment (A.5.4), lifecycle controls (A.6), data governance (A.7), transparency to users (A.8), human oversight (A.9.3). Implement gap controls. Update the management-review cycle to cover AIMS objectives separately from ISMS objectives. Engage the certification body for 42001 Stage 1 and Stage 2; ideally same body as 27001 to consolidate audit days.
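The Annex D gap exercise is, at its core, a set difference: subtract the 42001 Annex A territory your 27001 ISMS already evidences from the full Annex A catalogue, and what remains is the new work. A sketch with a toy subset of control labels; a real exercise walks the full Annex D tables, and which controls count as "covered via 27001" is an assumption here:

```python
# Toy subset of ISO 42001 Annex A families, labelled per the text above.
ANNEX_A_42001 = {
    "A.2 AI policy",
    "A.5.4 AI impact assessment",
    "A.6 lifecycle",
    "A.7 data governance",
    "A.8 transparency",
    "A.9.3 human oversight",
    "A.10 third parties",
}

# Families where a mature 27001 program already supplies most evidence
# (e.g. supplier controls A.5.19–A.5.23) — an illustrative assumption.
COVERED_VIA_27001 = {"A.10 third parties"}

gap_controls = sorted(ANNEX_A_42001 - COVERED_VIA_27001)
print(gap_controls)
```

The output is exactly the gap list named in the playbook step above: AI policy, impact assessment, lifecycle, data governance, transparency, and human oversight.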
Months 13–18 — SOC 2 Type 1 then Type 2 observation begins. Determine trust criteria in scope (Security mandatory; Availability, Confidentiality, Privacy, Processing Integrity optional based on buyer demand). Map controls between SOC 2 TSC and the existing 27001 + 42001 control set; the AICPA mapping document plus the ISO 42001 Annex D mapping eliminate most duplicated work. Draft the system description (the narrative section auditors evaluate). Engage a licensed CPA firm; complete Type 1 attestation (point-in-time design assessment). Begin the Type 2 observation period — typically 6 months for the first report, 12 months thereafter.
Months 19–24 — AI Act conformity assessment, if high-risk. Confirm Article 6 + Annex III classification. Build technical documentation per Annex IV: general description, data and data governance, monitoring and control, change documentation, performance metrics, post-market monitoring plan. Establish quality management system per Article 17 (largely satisfied by the integrated ISO 27001 + 42001 ISMS/AIMS). Conduct the conformity assessment procedure: Annex VI internal control for most categories; Annex VII notified-body assessment for biometric and listed AI under Article 6(1). Issue the EU declaration of conformity (Article 47), apply CE marking (Article 48), register the system in the EU database (Article 49). Stand up post-market monitoring (Article 72) and serious-incident reporting (Article 73) processes.
After month 24, the cycle continues: 27001 + 42001 surveillance audits annually, recertification at year 3; SOC 2 Type 2 reports renewed annually; AI Act post-market monitoring continuous, with re-conformity-assessment on any substantial modification (Articles 16(g), 43(4)).
Tooling stack
Tools are an accelerant, not the compliance program: they speed up evidence collection and policy management, but they do not replace the human judgement an auditor evaluates.
For SOC 2 + ISO 27001 evidence automation. Vanta, Drata, and Secureframe are the dominant compliance-automation platforms in this segment as of April 2026. They connect to cloud providers, identity providers, ticketing systems, and HR systems, then continuously collect evidence against a control framework. Strengths: rapid time-to-readiness for SOC 2 and ISO 27001, especially for cloud-native organizations. Limitations: their AI-specific control coverage for ISO 42001 and the AI Act is still maturing as of April 2026. Most of their 42001 modules launched in late 2024 / 2025 and are catching up to the depth they offer for 27001.
For ISO 42001 and AI Act compliance. The market is split between general compliance platforms adding AI modules, AI-governance specialists (Credo AI, Holistic AI, Trustible, FairNow among others), and document-and-control platforms designed for AI Act conformity assessment specifically. Selection criteria worth applying: support for the Annex IV technical documentation structure, support for serious-incident reporting templates, integration with the AIMS management-review cycle, and a clear position on whether the platform itself is a high-risk AI system (which would put it under the same regulation it claims to help with).
For knowledge graph and audit trail. ISO 42001 A.6.2.5 (lifecycle of AI systems), A.6.2.8 (logging of AI system events), and AI Act Article 12 (record-keeping) all push toward a queryable, time-stamped record of every action taken against an AI system. A graph-shaped audit log — linking models, datasets, training runs, deployments, decisions, and incidents — is increasingly the de facto implementation. Platforms include Knowlee 4Legals (covered below), Collibra AI Governance, and IBM watsonx.governance.
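The graph-shaped audit log can be sketched in a few lines: time-stamped edges linking models, datasets, deployments, and incidents, queryable for lineage. The node and edge vocabulary below is an illustrative assumption; a production deployment would sit on a graph database or one of the governance platforms above:

```python
from datetime import datetime, timezone

class AuditGraph:
    """Minimal time-stamped edge store for an AI audit trail (sketch)."""

    def __init__(self):
        self.edges = []  # (timestamp, subject, predicate, obj)

    def record(self, subject, predicate, obj):
        # Append-only, UTC-stamped: the properties 42001 A.6.2.8 and
        # AI Act Article 12 push toward (this sketch omits tamper-evidence).
        self.edges.append((datetime.now(timezone.utc), subject, predicate, obj))

    def lineage(self, node):
        """Every edge touching a node, oldest first."""
        return [(t, s, p, o) for (t, s, p, o) in sorted(self.edges)
                if s == node or o == node]

g = AuditGraph()
g.record("model:risk-scorer-v3", "trained_on", "dataset:claims-2025q4")
g.record("model:risk-scorer-v3", "deployed_to", "env:prod-eu")
g.record("incident:INC-118", "involves", "model:risk-scorer-v3")

for _, s, p, o in g.lineage("model:risk-scorer-v3"):
    print(s, p, o)
```

The payoff is the query shape: "show me everything this model touched, in order" is one traversal, which is what an auditor asking for Article 12 records actually wants.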
For SOC 2 Type 2 evidence collection specifically. The CPA firm conducting the attestation will typically have a preferred toolchain or workflow; defer to them on observation-period evidence collection mechanics rather than picking a platform that conflicts with how the firm reads evidence.
Knowlee 4Legals positioning (conflict-of-interest disclosure)
This article is published by the Knowlee team. Knowlee operates 4Legals, a vertical built on the Knowlee OS platform that addresses ISO 42001 and AI Act compliance specifically. We are an interested party. The cross-walk above is framework-neutral; this section is the disclosure of where our product fits.
Knowlee 4Legals is positioned as the AI-governance management layer for organizations stacking ISO 42001 and AI Act conformity on top of an existing ISO 27001 + SOC 2 program. The product covers: AI system inventory and risk classification (Article 6 / Annex III screening); AI impact assessment workflow aligned to ISO 42001 Clause 6.1.4 and AI Act Article 27 fundamental rights impact assessment; technical documentation generation along the Annex IV structure; serious incident reporting templates and timeline tracking against Article 73 deadlines; post-market monitoring plan templates per Article 72; and an integrated audit trail backed by the Knowlee Brain (a graph-shaped memory layer the OS uses across all verticals).
What 4Legals does not do: replace an accredited certification body (you still need one for ISO 42001 certification), replace a notified body for Annex VII conformity assessment, replace a CPA firm for SOC 2 attestation, or constitute legal advice on AI Act applicability. We are the operating layer between the legal and audit experts and the day-to-day evidence those experts need.
The product is built on the same Knowlee OS substrate that runs our 4Sales and d360 verticals — meaning the audit trail it produces is itself ISO 27001-aligned, with documented risk classification, data categories, and human-oversight requirements on every automated job. Pricing, deployment options (managed and self-hosted), and integration scope are documented separately; this article is a framework guide, not a sales page.
FAQ
Do I need ISO 42001 if I already have ISO 27001 and SOC 2? Not legally — neither is mandatory. Commercially, increasingly yes for AI vendors. ISO 42001 covers AI-specific obligations (impact assessment, lifecycle, transparency, human oversight) that 27001 and SOC 2 do not address. Enterprise procurement teams in 2026 are starting to ask for it explicitly.
Does ISO 42001 certification mean I'm AI Act compliant? Not yet, but probably soon. Article 40 of the AI Act provides that AI systems conforming to harmonised standards published in the EU Official Journal are presumed compliant with the relevant requirements. ISO 42001 (and ISO 23894) are leading candidates for harmonisation but, as of April 2026, the formal harmonisation process is still in progress. Even after harmonisation, the AI Act still requires conformity assessment, EU declaration of conformity, CE marking, and registration — these obligations exist regardless of which standard is followed.
Can I do SOC 2 Type 2 and ISO 27001 simultaneously? Yes, and most mature compliance programs do. The control overlap is substantial, the evidence overlaps even more, and most CPA firms and certification bodies are familiar with the combined approach. Start the ISO 27001 audit first (it is more prescriptive), then run SOC 2 Type 2 over a 6–12 month observation window after the ISMS is operational.
What is the cost difference between SOC 2 and ISO 27001? It depends on organization size, scope, and chosen auditor. As of April 2026, typical first-year ranges quoted in the market: SOC 2 Type 2 attestation EUR 25–60k for SMB SaaS, ISO 27001 certification EUR 30–70k for similar scope. Adding ISO 42001 to an existing 27001 program is usually 30–50% incremental on the 27001 cost. AI Act conformity assessment cost varies widely with notified-body involvement and Annex IV documentation depth.
Is the AI Act extraterritorial? Yes. Article 2 applies to providers placing AI systems on the EU market regardless of where they are established, and to providers and deployers outside the EU when the output produced by the AI system is used in the EU. Non-EU vendors with EU users have full obligations.
Do ISO 42001 and AI Act apply to general-purpose AI models? ISO 42001 applies to any organization using or providing AI systems, including foundation models. The AI Act has a separate regime in Chapter V for general-purpose AI models, with additional obligations for models with systemic risk (Article 51); these obligations began applying 2 August 2025, ahead of the high-risk regime.
Conclusion
AI security compliance in 2026 is not a single decision; it is a sequencing decision. ISO 27001 is the foundation every other framework either references or assumes. ISO 42001 extends the ISMS into AI-specific governance with controls 27001 and SOC 2 do not cover. SOC 2 is the procurement-floor attestation for North American buyers and adds privacy and processing integrity coverage that ISO 42001 leaves to the buyer's interpretation. The EU AI Act is law, with public registration, conformity assessment, and incident reporting obligations that no voluntary framework replicates.
The cross-walk above is the planning artifact. The audit calendar is how the work fits together. The tooling stack is how evidence is collected without armies of compliance staff. None of it removes the underlying obligation: AI systems that affect people require governance, and governance is a daily practice, not an annual report.
If your organization is building toward this stack and looking for an operating layer that connects ISO 42001 control evidence, AI Act technical documentation, and the audit trail your 27001 ISMS already produces, Knowlee 4Legals is built for that gap — disclosure noted above. Frameworks first, tools second.
Related reading
- ISO 42001 Checklist for AI Management
- ISO 42001 vs SOC 2 vs ISO 27001 Comparison
- ISO 42001 Implementation Guide
- SOC 2 Type 2 for AI Companies (2026)
- AI Act Compliance Software Guide
- AI Compliance Checklist 2026
- NIST AI RMF Implementation Guide
- AI Governance Framework
Sources
- ISO/IEC 27001:2022, Information security management systems — Requirements.
- ISO/IEC 42001:2023, Artificial intelligence — Management system (including Annex A controls and Annex D mapping to ISO 27001 / 27701).
- ISO/IEC 23894:2023, Information technology — Artificial intelligence — Guidance on risk management.
- AICPA, Trust Services Criteria (2017, revised 2022); AICPA AAG-SOC 2 (2023 edition); AICPA AI considerations supplement (2024).
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (Artificial Intelligence Act), Official Journal of the European Union L series, 12 July 2024.
- European Commission communications and AI Office publications regarding harmonised standards under Article 40 of Regulation (EU) 2024/1689 (status as of April 2026).