AI Compliance for Banking: Where DORA Meets the AI Act (2026)

Not legal advice — consult qualified counsel. This article maps two complex EU regulations as understood from their published texts and supervisory guidance as of April 2026. National transpositions, supervisory expectations, and EBA guidelines continue to evolve. Use it to scope the conversation with your legal and risk teams, not to replace it.

For an EU bank deploying AI in 2026, the regulatory question is no longer "are we compliant with the AI Act?" The question is: how do we satisfy DORA and the AI Act, jointly, on the same AI system, with one evidence pack?

The Digital Operational Resilience Act — Regulation (EU) 2022/2554 — and the EU AI Act — Regulation (EU) 2024/1689 — were drafted in parallel, by different EU institutions, with different primary objectives. DORA hardens financial entities against ICT-related disruptions. The AI Act constrains how AI systems are designed, deployed, and operated regardless of sector. They overlap meaningfully in three places: incident reporting, third-party risk, and board-level oversight. They diverge in scope, in supervisor, and in the artifacts they expect a regulated entity to produce.

This guide is for banking compliance officers, CROs, ICT risk officers, and AI governance leads inside credit institutions, payment institutions, and investment firms within DORA scope. It maps the article-level overlap, walks through three worked AI deployments — loan-decisioning, fraud-detection, customer-service chatbot — and ends with a deployment timeline a bank can hand to its 2026 program plan.

Companion reading: /blog/eu-ai-act-business-guide, /blog/ai-act-financial-services-compliance, /blog/ai-compliance-checklist-2026, /blog/ai-act-compliance-software-guide.


Why DORA and the AI Act Land on the Same Desk

DORA applies to financial entities listed in Article 2 of Regulation (EU) 2022/2554 — credit institutions, payment institutions, electronic money institutions, investment firms, crypto-asset service providers, central counterparties, trading venues, trade repositories, insurance and reinsurance undertakings, crowdfunding service providers, and others. It became fully applicable on 17 January 2025.

The AI Act applies to providers and deployers of AI systems placed on the EU market or whose output is used in the EU, regardless of sector. Its core obligations on high-risk systems apply from 2 August 2026, with general-purpose AI obligations already in force since 2 August 2025 and prohibitions since 2 February 2025.

A bank deploying AI for credit decisioning, fraud detection, or customer interaction therefore sits inside both regulations simultaneously. The AI system is, almost by definition, an "ICT asset" under DORA Article 3(7) — a software or hardware asset in the network and information systems used by the financial entity — typically one supporting a critical or important business function. It is also, in most banking use cases, a "high-risk AI system" under AI Act Article 6 read with Annex III (notably Annex III.5(b) on creditworthiness evaluation).

This dual classification has three operational consequences:

  1. Two supervisors look at the same system. The bank's prudential supervisor (national competent authority, ECB for significant institutions) enforces DORA. The AI Act's market-surveillance authority — for credit institutions designated under EU financial law, this is the same prudential supervisor by virtue of AI Act Article 74(6) — enforces the AI Act. The AI Act's drafters foresaw this and routed AI-Act enforcement for banks through the existing prudential authority rather than creating a parallel one.
  2. The evidence packs overlap but are not identical. DORA expects an ICT risk-management framework (Article 6), incident reports (Article 19), TLPT results (Article 26), and a third-party register (Article 28). The AI Act expects a risk-management system (Article 9), automatic operation logs (Article 12), instructions for use (Article 13), human oversight evidence (Article 14), QMS records (Article 17), and post-market monitoring (Article 72). A mature bank produces both from the same underlying telemetry — once.
  3. The board cannot delegate the joint view. DORA Article 5 makes the management body ultimately accountable for ICT risk. AI Act Article 26(2) makes deployers responsible for human oversight. AI Act Article 27 imposes a fundamental rights impact assessment for certain high-risk uses. None of these accountabilities are delegable downward in a way that survives an audit.

The Three Articles Where the Regulations Most Overlap

The full article matrices for DORA and the AI Act are large. Three article pairs are where the practical compliance work converges.

Incident reporting — DORA Article 19 vs AI Act Article 73

DORA Article 19 requires financial entities to classify ICT-related incidents (per the criteria of Article 18 and the relevant RTS), report major ICT-related incidents to their competent authority within strict timelines (initial notification within hours of classification, intermediate report at 72 hours, final report within one month, per the joint ESA reporting RTS), and notify clients where appropriate.

AI Act Article 73 requires providers of high-risk AI systems to report serious incidents to the market-surveillance authority of the Member State where the incident occurred — generally within 15 days of awareness, with shorter timelines of no later than 2 days for a widespread infringement or a serious incident involving disruption of critical infrastructure, and no later than 10 days for incidents that result in death. An infringement of Union law protecting fundamental rights is itself a "serious incident" under Article 3(49)(c), reportable on the default 15-day clock.

For a banking AI system that fails — say, a credit-decision model produces a wave of unjustified rejections affecting protected categories — both clocks start running. DORA fires because the operational disruption is an ICT incident. The AI Act fires because the failure produced harm of a kind Article 3(49) defines as "serious incident" (infringement of fundamental rights).

The practical implication: the bank's incident-handling runbook must produce one source-of-truth incident timeline (timestamps, classification, root cause, customers affected, remediation) and split it into two notification packets with different recipients and different deadlines. The cheapest way to fail this is to have the AI team's incident log and the ICT-risk team's incident log diverge.
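
The two-clock split lends itself to a small data sketch. Below is a minimal TypeScript sketch of one incident record feeding both packets; field names are illustrative assumptions, not a regulatory schema, and the deadline arithmetic mirrors the timelines above (confirm exact deadlines against the applicable RTS with counsel).

```ts
// A minimal sketch of the one-record, two-packet split. Field names are
// illustrative assumptions, not a regulatory schema.

type IncidentRecord = {
  id: string;
  detectedAt: Date;                 // when the bank became aware
  classifiedAt: Date;               // classification starts the DORA clocks
  rootCause: string;
  customersAffected: number;
  fundamentalRightsImpact: boolean; // AI Act Art. 3(49)(c) trigger
  resultedInDeath: boolean;
  remediation: string;
};

const HOUR = 60 * 60 * 1000;
const DAY = 24 * HOUR;

// DORA Article 19 packet, addressed to the competent authority.
function doraPacket(i: IncidentRecord) {
  return {
    recipient: "competent authority (prudential supervisor)",
    initialDueBy: new Date(i.classifiedAt.getTime() + 4 * HOUR), // hours-level, per the reporting RTS
    intermediateDueBy: new Date(i.classifiedAt.getTime() + 72 * HOUR),
    body: { rootCause: i.rootCause, customersAffected: i.customersAffected, remediation: i.remediation },
  };
}

// AI Act Article 73 packet, addressed to the market-surveillance authority
// (the same supervisor for banks, per Article 74(6)).
function aiActPacket(i: IncidentRecord) {
  const days = i.resultedInDeath ? 10 : 15; // 2-day cases (widespread / critical infrastructure) omitted
  return {
    recipient: "market-surveillance authority",
    reportDueBy: new Date(i.detectedAt.getTime() + days * DAY),
    seriousIncidentGround: i.fundamentalRightsImpact ? "Art. 3(49)(c)" : "other",
    body: { rootCause: i.rootCause, customersAffected: i.customersAffected },
  };
}
```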

Third-party risk — DORA Articles 28–30 vs AI Act Article 25

DORA Articles 28–30 require financial entities to maintain a register of all contractual arrangements on the use of ICT services provided by ICT third-party service providers (Article 28(3)), conduct due diligence before entering arrangements (Article 28(4)), specify mandatory contractual provisions (Article 30) — including audit rights, exit strategies, sub-contracting controls — and notify supervisors about arrangements supporting critical or important functions. ICT third-party providers designated as critical under Articles 31–44 fall under the EU Oversight Framework run by the ESAs.

AI Act Article 25 governs allocation of obligations along the AI value chain. A bank that integrates a third-party AI model into a high-risk system can become the provider of that high-risk system if it puts the system on the market under its own name or trademark, makes a substantial modification, or modifies the intended purpose so that the system becomes high-risk. Article 25(4) requires the provider of a high-risk system and any third party supplying tools, services, or components integrated into it to specify, by written agreement, the information, capabilities, and technical assistance the provider needs to comply. AI Act Article 53 imposes specific transparency obligations on GPAI model providers, including technical documentation for downstream providers.

The overlap: when a bank buys an AI model from a vendor (an LLM, a credit-decisioning model, a fraud-screening service), the vendor is simultaneously an ICT third-party service provider under DORA and either an AI system provider or a GPAI model provider under the AI Act. The bank must:

  • Register the arrangement in the DORA Article 28 register.
  • Apply the Article 30 mandatory contractual provisions (audit rights, exit, sub-contracting, ICT incident notification).
  • Determine its own AI Act role — deployer, or provider-by-substantial-modification.
  • Obtain the vendor's AI Act technical documentation (Annex IV for high-risk; Annex XI for GPAI) and verify it.
  • Push the vendor's documentation through the bank's QMS — see AI Act Article 17 and ISO/IEC 42001:2023.

The trap: contracts negotiated under DORA templates often do not include AI Act-specific clauses (model-card delivery, post-deployment monitoring data sharing, retraining notifications). DORA contracts that pre-date the bank's AI Act readiness program need a contractual addendum, not a fresh negotiation.
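
What a joint register row can look like, sketched in TypeScript with illustrative field names (an assumption-laden sketch, not the ESA register-of-information template):

```ts
// One row of a joint vendor register: DORA Article 28 columns plus an AI Act
// overlay. Field names are illustrative assumptions.

interface VendorRegisterEntry {
  // DORA Articles 28–30
  vendor: string;
  ictService: string;
  supportsCriticalFunction: boolean;  // triggers Art. 30(3) clauses and supervisor notification
  auditRightsInContract: boolean;     // Art. 30 mandatory provisions
  exitStrategyInContract: boolean;
  subcontractingControls: boolean;

  // AI Act overlay (Articles 25 and 53)
  bankAiActRole: "deployer" | "provider_by_substantial_modification";
  isGpaiModel: boolean;               // GPAI flag, pulls in Art. 53 documentation
  technicalDocumentationRef?: string; // Annex IV (high-risk) or Annex XI (GPAI)
  aiActAddendumSigned: boolean;       // model-card delivery, retraining notifications, monitoring data
}
```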

Board-level oversight — DORA Article 5 vs AI Act Article 26(2) and Article 14

DORA Article 5(1) says the management body of the financial entity "shall define, approve, oversee and be responsible for the implementation" of the ICT risk-management framework. Article 5(2) lists specific responsibilities: setting risk appetite, approving policies, allocating budget, integrating ICT risk into the overall risk-management framework, oversight of incident reporting.

AI Act Article 26(2) says deployers "shall assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support." Article 14 requires that high-risk AI systems be designed and developed to enable effective human oversight by the natural persons assigned. Article 27 requires a fundamental rights impact assessment for deployers in scope.

The board cannot personally perform oversight. It can — and under both regulations, must — define who does, with what competence, with what authority to override or stop the system, and with what reporting line up to the board. Joint board reporting is the cheapest way to satisfy both. A separate "ICT risk committee" report and a separate "AI ethics committee" report often produce contradictory pictures. A unified AI-and-ICT-resilience committee report with both DORA and AI Act KPIs surfaces the joint risk picture in one slide.


Worked Example 1: Loan-Decisioning AI

A retail bank deploys an AI system that scores applicants for personal loans up to €30,000. The system ingests bureau data, internal transaction history, and current-account behavior, produces a decision recommendation (approve / decline / refer-to-human), and is integrated into the loan-origination workflow.

AI Act classification. High-risk under Article 6 read with Annex III.5(b) — "AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud."

DORA classification. The system supports a critical or important business function (consumer credit origination). It is therefore in scope of DORA's full ICT risk-management framework and subject to the third-party-register obligations of Article 28 if any component is provided by an ICT third party.

Articles that fire under both regulations:

  • AI Act Article 9 — risk management system. Identify and mitigate risks across the lifecycle: bias against protected categories, drift as bureau data composition changes, gaming by intermediaries, hallucinations from any LLM components in the explanation layer.
  • AI Act Article 10 — data governance. Training, validation, and testing data must meet the quality criteria of Article 10(2)–(4), including representativeness (Article 10(3)) and the examination of possible biases (Article 10(2)(f)).
  • AI Act Article 12 — automatic logs. Every scoring decision, model version, input fingerprint, and human override.
  • AI Act Article 14 — human oversight. The "refer-to-human" pathway is the human oversight gate; the human reviewer's identity, decision, and time must be logged (a minimal record sketch follows this list).
  • AI Act Article 15 — accuracy, robustness, and cybersecurity. Performance metrics declared in the technical documentation; cybersecurity controls aligned with state-of-the-art.
  • AI Act Article 27 — fundamental rights impact assessment. As a deployer of an Annex III.5(b) credit-scoring system, the bank is in Article 27(1) scope and performs the FRIA before first deployment.
  • AI Act Article 86 — right to explanation. The applicant whose loan is declined has a right to a clear and meaningful explanation of the role of the AI system in the decision.
  • DORA Article 6 — ICT risk-management framework. The credit-scoring system is an ICT asset; framework documentation must cover its lifecycle.
  • DORA Article 8 — identification of ICT-supported business functions. The loan origination function is critical or important; mapping is mandatory.
  • DORA Article 9 — protection and prevention. ICT security policies, identity and access management, encryption, vulnerability management apply.
  • DORA Article 17 — ICT-related incident management process.
  • DORA Article 19 — major incident reporting if a model failure produces an event meeting the major-incident threshold.
  • DORA Articles 24–26 — testing of digital operational resilience, including TLPT for significant institutions.
  • DORA Article 28 — ICT third-party register entries for any component (training data vendor, MLOps platform, foundation-model provider).
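
The Article 14 item above references a record sketch. Here is a minimal TypeScript version, with illustrative field names; the point is that one record per referred decision feeds Article 12/14 evidence and DORA forensics from the same stream.

```ts
// A minimal sketch of the record behind the refer-to-human gate, with
// illustrative field names.

interface ReferToHumanRecord {
  decisionId: string;
  modelVersion: string;                 // Art. 12: which model produced the recommendation
  inputFingerprint: string;             // hash of the feature vector, not raw applicant data
  modelRecommendation: "approve" | "decline" | "refer_to_human";
  reviewerId: string;                   // Art. 14: the natural person exercising oversight
  reviewerDecision: "approve" | "decline";
  overrodeModel: boolean;               // true when reviewer and model disagree
  reviewedAt: string;                   // ISO 8601 timestamp
  rationale: string;                    // free text, feeds Art. 86 explanations on request
}
```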

Who is accountable. Under DORA, the management body (Article 5) is ultimately responsible; the ICT risk function and the head of ICT risk operate the framework. Under the AI Act, the bank is a deployer (Article 26) — and may be a provider if it has materially adapted a third-party model. The compliance officer signs off the FRIA (Article 27) and ensures Article 14 oversight is enforced; a designated Chief AI Officer, Head of Model Risk, or Head of Compliance owns the AI Act register entry.

Deployment timeline. A bank designing this system in Q2 2026 with a target go-live of Q1 2027 should expect: 8–12 weeks for initial FRIA and Article 9 risk-management documentation; 4–8 weeks for Article 10 data-governance evidence; 6–10 weeks for Article 15 accuracy/robustness testing; ongoing for Article 12 audit-trail wiring (this is an architecture decision, not a phase). DORA's continuous obligations — incident management, third-party register, resilience testing — must be operational at go-live, not added later.


Worked Example 2: Fraud-Detection AI

The same bank deploys a real-time fraud-detection system on card transactions. The model scores every authorization request and either passes, holds for review, or declines. Latency budget: under 200 ms.

AI Act classification. Annex III.5(b) explicitly excludes AI systems used for the purpose of detecting financial fraud. The fraud-detection model is therefore not automatically high-risk under the credit-evaluation category.

This is a frequent misclassification. Compliance teams that read Annex III too quickly classify any banking AI as Annex III.5 and impose the full high-risk regime on a system that the regulator has explicitly carved out. The carve-out is narrow: the model must be genuinely for fraud detection, not creditworthiness. If the same model is also used to influence a credit limit decision, the carve-out evaporates.

The fraud-detection system might still be high-risk if it falls under another Annex III category (it generally does not for outbound card-payment fraud). Annex III point 5 is headed "access to and enjoyment of essential private services and essential public services and benefits", which invites the argument that blocking card payments denies access to an essential private service — which credit cards arguably are. Most legal opinions in 2026 treat real-time card-fraud declines as falling outside Annex III point 5 when the decision is reversible and a customer-service path exists, but this is the kind of question to escalate to counsel rather than resolve in-house.

Articles that fire regardless of high-risk classification:

  • AI Act Article 50 — transparency obligations may apply if the system interacts directly with natural persons (e.g., a customer-service touchpoint that explains the fraud decline).
  • AI Act Article 4 — AI literacy. The bank must ensure staff and contractors using the system have a sufficient level of AI literacy.
  • DORA Articles 6, 8, 9, 17, 19, 24, and 28 — all apply. The fraud-detection system is an ICT asset supporting a critical function (payments); DORA does not care whether the AI Act classifies it as high-risk.
  • GDPR Articles 22, 35 — automated decision-making with legal or similarly significant effects, plus DPIA. (Outside the AI Act, but always present in this discussion.)

If the system is classified high-risk (because the bank's legal opinion places it inside Annex III point 5 after all), all the Article 9 / 12 / 14 / 15 / 27 obligations from Example 1 apply.

Who is accountable. Same construction as Example 1. The fraud function (typically inside Risk or Operations) operates the system; ICT risk maintains the DORA framework view; the AI governance officer maintains the AI Act register entry; the management body retains overall accountability.

Deployment timeline. Faster than credit decisioning if classified outside high-risk — DORA work is the constraint, AI Act work is the smaller package (Article 4 literacy, Article 50 transparency where applicable, voluntary application of high-risk-style controls as an internal best practice). 4–8 weeks for DORA framework integration; 2–4 weeks for Article 4 / 50 documentation; longer if the bank chooses to apply Article 9-style risk management as a defensive measure.


Worked Example 3: Customer-Service Chatbot

The bank deploys a customer-facing chatbot for general account inquiries (balance, transaction history, branch hours, lost-card flow). The chatbot is built on a third-party general-purpose AI model with retrieval-augmented generation against the bank's knowledge base.

AI Act classification. The chatbot is not automatically high-risk under Annex III for general account inquiries. It is subject to:

  • Article 50(1) — transparency: users must be informed they are interacting with an AI system unless this is obvious from context.
  • Article 50(2) — providers of AI systems generating synthetic content must mark outputs as artificially generated in machine-readable form.
  • Article 53 — the underlying GPAI model provider has its own obligations (technical documentation, copyright policy, training-data summaries).
  • Article 55 — if the GPAI model is classified as posing systemic risk, additional obligations attach to the provider.

The chatbot becomes high-risk if it is repurposed for use cases listed in Annex III. The most likely path: if the chatbot is later used to triage credit-related inquiries in a way that influences eligibility decisions, it crosses into Annex III.5(b). The boundary is Article 25(1) — a substantial modification, or a change of intended purpose that makes the system high-risk — at which point the bank may itself become the provider of the modified system.

DORA classification. The chatbot supports customer service — typically classified as a critical or important function for retail banks. DORA Articles 6, 8, 9, 17, 19, 28 apply.

Specific articles that fire:

  • AI Act Article 50(1) — chatbot must disclose it is an AI to the customer at session start (a disclosure sketch follows this list).
  • AI Act Article 50(2) — synthetic-content marking where applicable.
  • AI Act Article 4 — AI literacy for the staff designing prompt templates and the customer-service supervisors who handle escalations.
  • AI Act Article 25 — value-chain allocation; the GPAI provider's technical documentation must be obtained and reviewed.
  • AI Act Article 53 — the bank verifies the GPAI provider's compliance with its own obligations (this is a due-diligence step, not a re-execution of the provider's work).
  • DORA Article 28 — the GPAI provider is an ICT third-party service provider; if it supports a critical function (it does, because the chatbot supports customer service), the contract must satisfy Article 30's mandatory provisions. Audit-right clauses for major foundation-model providers are an active area of negotiation in 2026.
  • DORA Article 28(8) and Article 30(3)(f) — exit strategy. Banks must be able to terminate the GPAI arrangement without disproportionate disruption — which is non-trivial when the GPAI provider's API is the runtime.
  • GDPR Articles 5, 6, 13, 22 — every chatbot deployment has GDPR obligations regardless of AI Act / DORA.
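
The Article 50(1) item above references a disclosure sketch. One defensible pattern is to inject the disclosure at the session layer rather than rely on the model to produce it; a minimal TypeScript sketch, with illustrative names:

```ts
// A sketch of making the Article 50(1) disclosure auditable by injecting it
// at the session layer instead of relying on the model. Names are
// illustrative assumptions.

type ChatMessage = { role: "assistant" | "user"; content: string };

const AI_DISCLOSURE_V1 =
  "You are chatting with an AI assistant. You can ask for a human agent at any time.";

function openChatSession(sessionId: string): ChatMessage[] {
  // The disclosure is a fixed, versioned artifact, not model output, so the
  // Art. 50(1) evidence (session id, disclosure version, timestamp) does not
  // depend on model behavior. Log it alongside the session record.
  console.log(JSON.stringify({ sessionId, disclosure: "v1", at: new Date().toISOString() }));
  return [{ role: "assistant", content: AI_DISCLOSURE_V1 }];
}
```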

Who is accountable. The customer-service business owner runs the chatbot day-to-day. ICT risk maintains the DORA register entry. AI governance maintains the Article 50 transparency posture and the Article 25 value-chain documentation. Procurement (with Legal) negotiates the Article 30 contract clauses with the GPAI provider. Board-level oversight is exercised through the same joint reporting line as Example 1 and 2.

Deployment timeline. Chatbot programs often go live faster than the regulatory work catches up. A defensible 2026 sequence: pre-deployment, complete the Article 25 value-chain review and DORA Article 28 third-party assessment (4–6 weeks); at deployment, ensure Article 50 disclosures are present and DORA contract clauses are in place; post-deployment, keep Article 4 AI-literacy training and Article 50(2) synthetic-content marking running; six months in, re-assess whether the use case has drifted into Annex III territory.


The Joint Evidence Pack: Build It Once, Use It Twice

A bank that runs two separate compliance programs — DORA inside ICT risk, AI Act inside legal/compliance — pays twice and proves compliance once. The pattern that scales is a joint evidence pack built from the AI system's runtime telemetry, with two views layered on top.

What goes into the joint evidence pack:

  1. System inventory. Every AI system used by the bank, classified under both AI Act Article 6 and DORA Article 8 (critical-function mapping). One row per system, two columns of regulatory metadata; a sketch of one row follows this list.
  2. Audit trail (per-inference). Model version, input fingerprint, output, operator identity, timestamp, business context. The same JSONL stream feeds AI Act Article 12 evidence and DORA's ICT-incident-investigation forensics.
  3. Incident timeline. A unified incident record that splits into a DORA Article 19 packet and an AI Act Article 73 packet at notification time.
  4. Third-party register. One register that satisfies DORA Article 28 with extra columns for AI Act Article 25 (provider/deployer designation, GPAI flag, technical-documentation URL).
  5. Risk register. A risk register satisfying both AI Act Article 9 and DORA Article 6, organized by AI system, with each entry linking to controls and to evidence.
  6. Oversight log. Approvals, overrides, escalations — one log feeding AI Act Article 14 evidence and DORA's "human-in-the-loop" controls.
  7. Resilience-testing results. TLPT, ICT scenario tests, and AI-Act-style robustness tests (Article 15) tracked side by side.
  8. FRIA. Fundamental Rights Impact Assessment per AI Act Article 27, linked to the system inventory.
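
Item 1 above references a row sketch. In TypeScript, with illustrative field names, one inventory record carrying both regulatory classifications can look like this; the point is that the AI Act view and the DORA view are columns of the same record and cannot diverge.

```ts
// A sketch of one system-inventory row. Field names are illustrative
// assumptions, not a supervisory template.

interface AiSystemInventoryRow {
  systemId: string;
  name: string;                        // e.g. "retail-loan-scoring-v3"
  // AI Act column
  aiActClassification: "high_risk" | "transparency_only" | "minimal";
  annexIiiCategory?: string;           // e.g. "III.5(b)" when high_risk
  bankRole: "deployer" | "provider";
  friaRef?: string;                    // Article 27 FRIA, where required
  // DORA column
  supportsCriticalFunction: boolean;   // DORA Art. 8 mapping
  businessFunction: string;            // e.g. "consumer credit origination"
  thirdPartyRegisterIds: string[];     // links into the Art. 28 register
  // Shared evidence
  auditTrailStream: string;            // pointer to the per-inference JSONL telemetry
}
```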

Compliance suite vendors (Vanta, OneTrust, Drata) provide the policy-and-control layer that holds this together. AI runtime platforms — Knowlee being one — produce the per-inference evidence that the policy layer cannot fabricate. The two layers compose. They do not substitute for each other. See /blog/ai-act-compliance-software-guide for the buyer framework that makes this concrete, and /compare/knowlee-vs-vanta-onetrust for how the layers fit together in a bank's compliance stack.


Where Banks Most Often Get This Wrong

Three failure modes recur in 2026 banking AI Act / DORA programs.

Failure mode 1: Treating the AI Act as a "GDPR plus." GDPR governs personal-data processing. The AI Act governs AI systems regardless of personal-data involvement. A model that scores transactions without processing customers' personal data (possible with anonymized or aggregated flows) is in DORA scope, can be in AI Act scope, and may be entirely outside GDPR scope. Treating AI-Act work as a sub-clause of the existing GDPR / DPIA program produces compliance gaps at the edges — particularly around Article 12 logs and Article 14 oversight, neither of which has a GDPR equivalent.

Failure mode 2: Splitting AI compliance and ICT risk into separate programs. When the AI team and the ICT-risk team meet only at quarterly reviews, the incident clock under DORA Article 19 and the incident clock under AI Act Article 73 drift out of sync. The first major incident reveals it. The fix is structural: an integrated AI-and-ICT-resilience function or, at minimum, a shared incident runbook with two-clock awareness baked in.

Failure mode 3: Buying a compliance suite and assuming it produces evidence. Compliance suites store policies and trigger questionnaires. They do not, on their own, produce the per-inference audit trail that DORA forensics and AI Act Article 12 demand. The evidence comes from the AI runtime; the suite catalogs it. Banks that procure a suite without procuring a runtime-telemetry strategy find the gap at audit, not at procurement.


Where Knowlee Fits in a Bank's Joint Compliance Stack

Knowlee positions itself as AI Act Ready by Design: a runtime where audit trail, risk classification, human oversight, and approval signatures are first-class primitives of the system, not a layer added afterward. For a bank, the relevant capabilities are:

  • Per-inference audit trail. Every AI tool call, model response, and human override streams as JSONL through claude-runner.js. The same stream feeds DORA forensics and AI Act Article 12 evidence.
  • Human-oversight enforcement. Jobs flagged human_oversight_required: true cannot run without a recorded approved_by signature — the cron scheduler refuses them. Article 14 enforcement is technical, not procedural (an illustrative gate sketch follows this list).
  • Risk classification at the job level. Every automated AI workload declares its risk level and data categories before it can run. The system inventory and the AI Act Article 6 mapping are the same artifact.
  • Per-vertical data isolation. Each Knowlee vertical (4Sales, 4Talents, customer service, etc.) runs against its own dedicated database. The blast radius of an incident is bounded by vertical, not by RBAC alone — relevant under DORA Articles 9 and 17.
  • Third-party documentation routing. GPAI provider documentation (Article 53) and ICT third-party contract evidence (DORA Article 28) live alongside the runtime they govern.
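
The oversight gate lends itself to a short sketch. The following TypeScript is an illustration of the pattern, not Knowlee's actual implementation; only the human_oversight_required and approved_by names come from the description above, everything else is assumed.

```ts
// An illustrative sketch of a scheduler gate for human oversight.

interface JobSpec {
  jobId: string;
  human_oversight_required: boolean;
  approved_by?: string;                // recorded approver signature
  riskLevel: "high" | "limited" | "minimal";
}

function canSchedule(job: JobSpec): { ok: true } | { ok: false; reason: string } {
  if (job.human_oversight_required && !job.approved_by) {
    // Technical enforcement of Article 14: the job simply does not run
    // without a named approver; there is no procedural override.
    return { ok: false, reason: `job ${job.jobId} lacks an approved_by signature` };
  }
  return { ok: true };
}
```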

Knowlee is one runtime layer in a broader stack. Compliance suites (Vanta, OneTrust, Drata) sit above it for policy and control. Core banking systems sit below it for transaction execution. The bank's prudential supervisor and the AI-Act market-surveillance function — for credit institutions, the same authority — read the joint evidence pack the stack produces.


FAQ

Are all banking AI systems automatically high-risk under the AI Act?

No. Annex III.5(b) covers AI systems used to evaluate creditworthiness or establish credit scores — this is the article that pulls retail and SME credit decisioning into high-risk. Annex III.5(b) explicitly excludes systems for detecting financial fraud. Other banking AI use cases — customer-service chatbots, internal productivity tools, marketing personalization — are typically not Annex III high-risk on their own, but Articles 4 (AI literacy), 50 (transparency), and 53 (GPAI) still apply, and the systems remain in DORA scope as ICT assets.

Does DORA cover AI specifically, or only ICT generally?

DORA does not contain AI-specific obligations. It treats AI systems as ICT assets and applies the same framework — risk management, incident reporting, third-party oversight, resilience testing — that it applies to any other ICT component supporting financial services. The AI-specific obligations come from the AI Act. The two regulations compose: an AI system in a bank is governed by both at once.

When a fraud-detection model and a credit-scoring model use the same training data, are they treated the same way?

No. Annex III classification is by intended purpose, not by underlying technology. A model used purely for fraud detection sits inside the Annex III.5(b) carve-out. The same model architecture used to score creditworthiness sits inside Annex III.5(b) as high-risk. If a single model is used for both purposes, the system is high-risk — the carve-out applies only to the genuine fraud-detection use case.

Who is the supervisor for AI Act enforcement at a bank?

For credit institutions designated under EU financial law, AI Act Article 74(6) routes market-surveillance functions through the existing prudential supervisor — the national competent authority, or the ECB for significant institutions under the Single Supervisory Mechanism. The bank's AI Act conversation is therefore with the same supervisor who already runs its prudential and DORA reviews. Member States may publish further specifics on how this is operationalized.

What happens when a third-party AI provider has an incident — who reports?

Under DORA, the financial entity reports major ICT-related incidents to its competent authority (Article 19), regardless of whether the incident originated at a third-party provider. The third-party provider's own reporting goes through the EU Oversight Framework if the provider is designated critical (DORA Articles 31–44). Under the AI Act, the provider of the high-risk AI system reports serious incidents under Article 73; the deployer assists the provider and reports to its market-surveillance authority where deployer-side action triggers reporting. In a typical banking arrangement — bank as deployer, vendor as provider — the bank reports to its supervisor (DORA), the vendor reports to its market-surveillance authority (AI Act), and the two reports must be reconcilable.

How does AI Act Article 27 (FRIA) relate to DORA's risk-assessment requirements?

DORA's risk-management framework (Article 6) and risk-assessment processes (Articles 8–9) are about ICT operational resilience — confidentiality, integrity, availability of ICT systems and the data they process. AI Act Article 27 FRIA is about the system's impact on fundamental rights of natural persons — non-discrimination, dignity, due process, data protection. They overlap on personal-data handling but address different risk categories. A mature program runs FRIA inside a unified risk workflow with both lenses, but the underlying questions are different and both must be answered.

What is the deployment timeline reality for a 2026 banking AI program?

For a high-risk banking AI system going live in 2026–2027, expect: 3–6 months for joint AI Act / DORA documentation (risk-management system, FRIA, technical documentation, third-party register entries, ICT framework integration); 1–3 months for testing and accuracy/robustness evidence; ongoing for audit-trail integration (an architecture decision, not a phase); continuous from go-live for incident management, post-market monitoring, and resilience testing. Programs that bolt the AI Act work onto a near-complete DORA program tend to slip 2–4 months because Article 12 logging is rarely retrofittable cheaply.

Do small banks have an exemption?

DORA's proportionality clauses (Article 4) allow simpler arrangements for smaller entities, but exemption is narrow — microenterprises and certain small entities have lighter regimes, not opt-outs. The AI Act's SME provisions (Article 62) provide regulatory-sandbox priority access and reduced fees for conformity assessment but do not exempt SMEs from substantive obligations. A small bank with a high-risk credit-decisioning AI faces the same Article 9/12/14 obligations as a global one.


Related Reading