Heartbeat Patterns: Building Proactive AI Agents That Trigger Themselves (2026)
Heartbeat patterns are the architectural shift that separates a chatbot from an agent that actually does work. A chatbot waits for a prompt. A heartbeat agent triggers itself — on a schedule, on a signal, on a detected change — and produces work the operator did not have to ask for. The shift from request-response to self-triggering is what turns a passive AI feature into an active member of the team.
This post defines the three heartbeat patterns that show up in production agent systems, gives concrete code shapes for each, walks through how they compose into a daily and weekly cadence for a real vertical, and names the discipline that keeps self-triggering agents from becoming a runaway fleet. The audience is engineers building agent infrastructure and operators trying to evaluate whether their stack is genuinely proactive or just dressed-up reactive.
TL;DR
- Heartbeat patterns are the architectural primitives that let AI agents trigger themselves rather than wait for a human prompt. There are three: cron-based (time-driven), signal-based (event-driven), and change-detection-based (state-diff-driven).
- Cron is the simplest and the most overused. It works when the cadence is fixed and the work is the same every cycle. It fails when the work is event-shaped — that is when you need signals, not schedules.
- Signal-based heartbeats are the dominant pattern for proactive sales, ops, and support agents. A producer detects an event (a webhook, an email, an inbound message, a system event) and triggers the agent run with that signal as input.
- Change-detection heartbeats are the most subtle. The agent runs on a schedule, computes a hash of the relevant state, and only does work if the state has changed since the last run. This is the pattern for monitoring agents, drift detectors, and anything that should be cheap when nothing is happening.
- Three disciplines keep self-triggering fleets healthy: every triggered run carries enough context to be reproducible, every run produces an artifact even when the answer is "nothing to do," and every run is governed by the same metadata as a human-triggered one. Without these three, a heartbeat fleet quietly drifts into noise.
Why Self-Triggering Is the Real Shift
For most of 2023 and into 2024, "AI agents" meant something the user could prompt. The agent was a wrapper around a chat interface — better tools, longer context, more capable, but still a synchronous request-response loop. Every action started with a human typing.
The agents that actually move work are the ones that trigger themselves. The status-assessment agent that wakes up every weekday morning and reviews yesterday's pipeline before the operator gets to their desk. The intelligence agent that watches a list of competitors and pings the operator when one of them does something the operator has flagged as relevant. The audit agent that recomputes risk scores every six hours and surfaces anomalies. The triage agent that listens to an inbound stream and routes each message into the right downstream workflow without waiting to be asked.
None of those agents are doing more than the chatbot version could in principle. The difference is that they decide when to run. The operator's attention is the rarest resource in a real fleet, and a heartbeat agent is one that respects that constraint by acting before being asked.
For the wider architectural picture of how heartbeats fit into a multi-agent runtime, see How to build a multi-agent AI system and AI workforce architecture 2026.
Pattern 1: Cron-Based Heartbeats
The simplest pattern. The agent runs on a fixed schedule, with no input other than the current time. Every Monday at 08:00. Every weekday at 17:30. Every hour on the hour. The schedule is declared once, the runtime dispatches the agent at each tick, and the agent does its work.
A typical job declaration in a TypeScript-shaped registry might look like this:
// AgentReference and GovernanceMetadata are runtime-level types; minimal
// versions are sketched here so the declaration stands alone.
type AgentReference = string; // registry key for a deployed agent

type GovernanceMetadata = {
  risk_level: "low" | "medium" | "high"; // levels as used in this post's examples
  data_categories: string[];
  human_oversight_required: boolean;
  approved_by: string;
  approved_at: string; // ISO date
};

type CronJob = {
  id: string;
  schedule: string; // cron expression: "0 8 * * 1" = Mondays at 08:00
  agent: AgentReference;
  promptTemplate: string;
  promptVars: Record<string, string>;
  governance: GovernanceMetadata;
};

const weeklyDigest: CronJob = {
  id: "pipeline-weekly-digest",
  schedule: "0 8 * * 1",
  agent: "summary-agent",
  promptTemplate: "weekly-pipeline-digest.md",
  promptVars: { lookback_days: "7" },
  governance: {
    risk_level: "low",
    data_categories: ["business_metrics"],
    human_oversight_required: false,
    approved_by: "operator@example.com",
    approved_at: "2026-04-01",
  },
};
The dispatcher reads the registry, evaluates each cron expression against the current time, and spawns the matching agents. Every dispatched run produces an artifact — a report, a digest, a structured update — that lands somewhere the operator can find.
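A minimal sketch of that dispatcher tick, assuming a matchesCron helper (in practice a wrapper around a cron-expression library) and a hypothetical Runtime.spawn call; neither name comes from a specific library:

// Assumed runtime surface: one call that starts an agent run.
interface Runtime {
  spawn(
    agent: AgentReference,
    input: {
      promptTemplate: string;
      promptVars: Record<string, string>;
      governance: GovernanceMetadata;
    }
  ): Promise<void>;
}

// Assumed helper: evaluates a cron expression against a timestamp.
declare function matchesCron(expression: string, at: Date): boolean;

async function dispatchTick(
  registry: CronJob[],
  runtime: Runtime,
  now: Date = new Date()
): Promise<void> {
  for (const job of registry) {
    if (!matchesCron(job.schedule, now)) continue;
    // Each spawned run carries the job's full declaration, so the
    // artifact it produces can be traced back to this tick.
    await runtime.spawn(job.agent, {
      promptTemplate: job.promptTemplate,
      promptVars: job.promptVars,
      governance: job.governance,
    });
  }
}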
When cron is the right answer
Cron earns its keep when three things are true: the cadence is fixed, the work is the same every cycle, and there is no benefit to running it more often. Daily morning briefings. Weekly health reports. Monthly compliance checks. Quarterly strategy reviews. Anything that maps onto a calendar boundary.
When cron is the wrong answer
Cron is the wrong answer for event-shaped work. If the agent should run when an inbound email arrives, when a deal moves to a new stage, when a competitor publishes a press release — cron is the wrong primitive. You will end up running the agent every minute "just in case," paying the cost of every run, and getting near-zero useful output 95% of the time. That is a signal you need a different pattern.
The other failure mode is using cron for work that is genuinely change-detection-shaped (see Pattern 3 below). A daily run that recomputes the same risk scores against unchanged data is wasted compute and wasted operator attention. The right shape is "run on schedule, but do work only if the state has changed."
Pattern 2: Signal-Based Heartbeats
Signal-based heartbeats are the dominant pattern for proactive agents in sales, operations, and support. The runtime listens for an event — a webhook, an inbound email, a database row insert, a message on a queue, a Slack message — and triggers the appropriate agent with that signal as input.
The architectural shape:
type SignalSource = "webhook" | "email" | "db_change" | "queue" | "slack";

// A minimal event shape; real payloads vary by source.
type SignalEvent = {
  source: SignalSource;
  endpoint?: string; // set for webhook events
  payload: unknown;
  receivedAt: string; // ISO timestamp, for the audit trail
};

type SignalJob = {
  id: string;
  source: SignalSource;
  filter: (event: SignalEvent) => boolean;
  agent: AgentReference;
  promptTemplate: string;
  governance: GovernanceMetadata;
};

const inboundLeadTriage: SignalJob = {
  id: "inbound-lead-triage",
  source: "webhook",
  filter: (event) => event.endpoint === "/hooks/contact-form",
  agent: "triage-agent",
  promptTemplate: "triage-inbound-lead.md",
  governance: {
    risk_level: "medium",
    data_categories: ["personal_data", "contact_info"],
    human_oversight_required: true,
    approved_by: "operator@example.com",
    approved_at: "2026-04-01",
  },
};
The runtime maintains the listeners. When an event matches a filter, the runtime spawns the agent with the event payload as part of the prompt context. The agent decides what to do — classify, enrich, draft a response, route to a human, escalate.
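A sketch of that dispatch path, reusing the SignalEvent shape from the declaration above and the hypothetical Runtime interface sketched under Pattern 1:

async function onSignal(
  jobs: SignalJob[],
  event: SignalEvent,
  runtime: Runtime
): Promise<void> {
  for (const job of jobs) {
    if (job.source !== event.source) continue;
    if (!job.filter(event)) continue;
    // The event payload becomes part of the prompt context, so the run
    // record states exactly why the agent ran.
    await runtime.spawn(job.agent, {
      promptTemplate: job.promptTemplate,
      promptVars: { event: JSON.stringify(event) },
      governance: job.governance,
    });
  }
}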
Why signal-based is usually right for "proactive" work
Most genuinely proactive AI work is event-shaped, not time-shaped. A new contact form submission, a deal moving to negotiation, a product issue being reported, a competitor mention surfacing in a monitoring feed — these are events the operator wants reacted to within minutes, not hours, and not on a daily cron tick. Signal-based heartbeats are the only pattern that scales to that latency.
The other reason to prefer signals over crons when both are technically possible: signals are honest about why the agent ran. The audit trail says "the agent ran because event X happened at time T." A cron run that processes whatever events happened to be in the queue at tick time has a less precise causal record. For governance-heavy use cases, signal-based runs are cleaner.
The discipline signals require
Signal handlers can fire at unbounded rates. If a system error produces 10,000 webhooks in a minute, a naive signal handler will spawn 10,000 agent runs and exhaust your model budget in the same minute. The discipline is in the runtime: rate limiting per source, deduplication of identical-payload events, batching of similar events into a single run, dead-letter handling for events that consistently fail. Without this, "proactive" turns into "runaway" within hours of the first incident.
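A sketch of the dedup and rate-limit layer, kept in memory here for brevity; a production version would back both maps with shared storage, evict old entries, and route rejected events to a dead-letter queue rather than dropping them:

const WINDOW_MS = 60_000; // one-minute window for dedup and rate limiting

const seenPayloads = new Map<string, number>();     // payload hash -> last seen (ms epoch)
const spawnsBySource = new Map<string, number[]>(); // source -> recent spawn timestamps

function shouldSpawn(
  source: string,
  payloadHash: string,
  maxPerMinute: number,
  now: number = Date.now()
): boolean {
  // Deduplicate: identical payloads within the window trigger one run.
  const lastSeen = seenPayloads.get(payloadHash);
  if (lastSeen !== undefined && now - lastSeen < WINDOW_MS) return false;
  seenPayloads.set(payloadHash, now);

  // Rate-limit per source: events beyond the cap should be batched or
  // dead-lettered by the caller, never spawned one-to-one.
  const recent = (spawnsBySource.get(source) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= maxPerMinute) return false;
  recent.push(now);
  spawnsBySource.set(source, recent);
  return true;
}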
For how this fits into the broader runtime, see the agentic operating system overview.
Pattern 3: Change-Detection Heartbeats
The most subtle pattern. The agent runs on a schedule (often a frequent one — every 5 minutes, every 15 minutes), but the first thing it does is check whether the relevant state has changed since the last run. If nothing has changed, it exits cheaply. If something has changed, it does the actual work.
The shape:
// AgentContext and Artifact are runtime-provided types; sha256 and
// stripVolatileMarkup are assumed helpers, not calls from a specific library.
type ChangeDetectionJob = {
  id: string;
  schedule: string; // typically frequent: "*/15 * * * *"
  computeStateHash: (ctx: AgentContext) => Promise<string>;
  doWork: (ctx: AgentContext, prevHash: string, currHash: string) => Promise<Artifact>;
  governance: GovernanceMetadata;
};

const competitorPageMonitor: ChangeDetectionJob = {
  id: "competitor-pricing-monitor",
  schedule: "*/30 * * * *",
  computeStateHash: async (ctx) => {
    const html = await ctx.tools.fetch("https://competitor.example.com/pricing");
    // Strip timestamps, nonces, and session markup before hashing, so the
    // hash changes only when content the operator cares about changes.
    return sha256(stripVolatileMarkup(html));
  },
  doWork: async (ctx, prevHash, currHash) => {
    return ctx.runAgent("change-summary-agent", {
      prevHash,
      currHash,
      page: "competitor-pricing",
    });
  },
  governance: { /* ... */ },
};
The runtime stores the previous hash. On each tick, it computes the current hash, compares, and only invokes doWork when they differ. The vast majority of ticks cost only the hash computation — fast, cheap, and silent. The occasional tick that detects a change pays the full cost of the work, which is exactly when paying it is justified.
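A sketch of that tick runner, under the same assumptions, with a small key-value interface (hypothetical) for the stored hash:

interface HashStore {
  get(jobId: string): Promise<string | undefined>;
  set(jobId: string, hash: string): Promise<void>;
}

async function changeDetectionTick(
  job: ChangeDetectionJob,
  ctx: AgentContext,
  store: HashStore
): Promise<Artifact | null> {
  const currHash = await job.computeStateHash(ctx);
  const prevHash = await store.get(job.id);
  await store.set(job.id, currHash);
  // Cheap exit: the common case, nothing changed since the last tick.
  if (prevHash === currHash) return null;
  // The first tick (no stored hash) falls through too, so the baseline
  // observation is reported once rather than silently swallowed.
  return job.doWork(ctx, prevHash ?? "", currHash);
}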
Why this pattern is undersupplied
Most teams default to cron for monitoring work and pay the cost of running the full agent every cycle. Change-detection is the discipline of separating "is there work to do?" from "do the work." When the answer to the first question is usually no, separating it pays for itself within days.
The pattern shows up cleanly in the following cases (a sketch of one such state hash follows the list):
- Drift detection. Recompute model performance metrics; only act if they crossed a threshold.
- External page monitoring. Hash a target page; only run the summarization agent if the page changed.
- Database invariant checking. Compute a digest of the constraint state; only escalate if it differs from the last good state.
- Document change tracking. Hash a shared document; only re-summarize if it has been edited.
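As an illustration of how cheap the "is there work to do?" question can be, here is a sketch of a state hash for the stalled-deals case; it assumes the agent context exposes a hypothetical ctx.db.query tool and the same assumed sha256 helper as above:

async function stalledDealsHash(ctx: AgentContext, stallDays: number): Promise<string> {
  // Select only the IDs of stalled deals, in a stable order.
  const rows: { id: string }[] = await ctx.db.query(
    "SELECT id FROM deals WHERE last_activity < now() - make_interval(days => $1) ORDER BY id",
    [stallDays]
  );
  // The digest changes exactly when the set of stalled deals changes,
  // not when unrelated columns churn.
  return sha256(rows.map((r) => r.id).join(","));
}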
When change-detection is the wrong frame
Change-detection assumes you have a meaningful, computable hash for "the state I care about." If the relevant state is conversational, fuzzy, or distributed across systems with no canonical digest, you will spend more effort defining the hash than you save by skipping work. In those cases, signal-based heartbeats (the system that owns the state notifies you when it changes) are usually the right alternative.
Composing the Three Into a Daily and Weekly Cadence
A real proactive agent fleet runs all three patterns at once, composed into a daily and weekly cadence that the operator can rely on. The 4Sales vertical is a useful illustration of the composition.
Cron-based heartbeats establish the rhythm.
- Every weekday at 07:30, the morning briefing agent runs. It pulls yesterday's pipeline activity, the overnight signals, the flashcards waiting for review, and produces a single document the operator reads with their first coffee.
- Every Monday at 08:00, the weekly digest agent runs. It looks back across the previous seven days and identifies trends, outliers, and themes the daily briefings did not surface.
- Every Friday at 16:00, the audit agent runs. It samples the week's outputs, scores them on quality and risk dimensions, and surfaces anything that needs human review before the weekend.
Signal-based heartbeats handle the events.
- An inbound contact form submission triggers the lead-triage agent within seconds. The agent enriches, classifies, and routes the submission to the appropriate downstream queue.
- A deal-stage change in the CRM triggers the playbook agent, which proposes the next outreach action.
- A competitor mention in the monitoring feed triggers the intelligence agent, which decides whether the operator needs to be notified or whether the mention is noise.
Change-detection heartbeats keep the silent monitoring cheap.
- Every 30 minutes, the competitor-pricing monitor checks the hash of three competitor pricing pages. Most of the time, nothing happened. The few times something did, the change-summary agent fires and the operator gets a flashcard.
- Every 15 minutes, the pipeline-anomaly monitor recomputes a digest of "deals that have not moved in N days." If the digest changed, the anomaly-summary agent runs and reports new stalls.
- Every hour, the document-drift monitor checks whether shared playbook documents have been edited; if so, the playbook-republish agent regenerates the relevant agent prompts.
The daily briefing the operator reads in the morning is the aggregated output of all three patterns. The cron-based agents produced the structural sections. The signal-based agents produced the per-event entries. The change-detection agents produced the "what is different from yesterday" summary. The operator reads one document and gets the consolidated view of a fleet that ran without their attention.
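In registry terms the composition is unremarkable, which is the point: the three job types share one runtime and one audit trail. A sketch, reusing the declarations from the three patterns above (the other jobs named in this section would slot into the same lists):

type FleetRegistry = {
  cron: CronJob[];
  signal: SignalJob[];
  changeDetection: ChangeDetectionJob[];
};

const salesFleet: FleetRegistry = {
  cron: [weeklyDigest],                     // plus the morning briefing and Friday audit
  signal: [inboundLeadTriage],              // plus the deal-stage and mention jobs
  changeDetection: [competitorPageMonitor], // plus the anomaly and drift monitors
};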
The Three Disciplines That Keep a Heartbeat Fleet From Drifting
Self-triggering agents are easy to start and hard to keep healthy. Three disciplines, applied consistently, separate a heartbeat fleet from a runaway one; a run-record shape that encodes all three follows the list.
1. Every triggered run carries enough context to be reproducible. When the audit agent finds something odd in last Tuesday's output, the operator should be able to look at the run record and see exactly what triggered it, what the input state was, what tools it called, and what it produced. Without this, you are debugging by intuition. With it, every weird output is reproducible — and reproducible weirdness becomes fixable weirdness.
2. Every run produces an artifact even when the answer is "nothing to do." A change-detection agent that only logs "nothing changed" 95% of the time still produces a record. The 5% of runs that produced real work are visible against the 95% that did not. Without this baseline, you cannot tell whether the agent is healthy, whether it is silently failing, or whether the world has stopped changing.
3. Every run is governed by the same metadata as a human-triggered one. Risk level, data categories, human-oversight requirement, approver. A self-triggered run is not exempt from the audit trail. A heartbeat fleet that runs governance-free is the fastest way to produce a compliance incident, because the volume of self-triggered runs dwarfs the volume of human-triggered ones. The governance discipline has to apply at the runtime layer, automatically, on every tick — not as something an agent author opts into.
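A run record that encodes all three disciplines might look like this; field names are illustrative, not a fixed schema, and SignalEvent, Artifact, and GovernanceMetadata are the types sketched in the patterns above:

type TriggerRecord =
  | { kind: "cron"; schedule: string; tickAt: string }
  | { kind: "signal"; event: SignalEvent }
  | { kind: "change"; prevHash: string; currHash: string };

type RunRecord = {
  runId: string;
  jobId: string;
  trigger: TriggerRecord;         // discipline 1: exactly what caused the run
  inputSnapshot: string;          // discipline 1: hash or blob reference for the input state
  toolCalls: { tool: string; args: unknown }[];
  artifact: Artifact | { kind: "no_op"; note: string }; // discipline 2: "nothing to do" still leaves a record
  governance: GovernanceMetadata; // discipline 3: same metadata as a human-triggered run
};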
For the wider architectural framing of governance metadata as a runtime primitive, see the agentic operating system glossary entry and the broader business piece on agentic operating systems. For the framework comparison that names which agent libraries support these patterns natively, see Top agentic AI frameworks compared 2026. For the underlying definition of what makes an agent agentic in the first place, see the agentic AI glossary entry.
Frequently Asked Questions
What is a heartbeat pattern in AI agent design?
A heartbeat pattern is an architectural primitive that lets an AI agent trigger itself rather than wait for a human prompt. The three primary patterns are cron-based (run on a fixed schedule), signal-based (run when an event arrives), and change-detection-based (run on a schedule but only do work when the state has changed). A proactive agent fleet typically uses all three.
How is signal-based heartbeating different from a webhook handler?
Architecturally, similar. Conceptually, broader. A webhook handler is one specific source of signals (HTTP events from external systems). Signal-based heartbeating includes webhooks but also covers email arrivals, database row changes, queue messages, Slack messages, and any other event source the runtime can listen on. The discipline is at the runtime layer — uniform filtering, rate limiting, deduplication, governance — rather than per-handler ad hoc code.
When should I prefer cron over change-detection?
When the work itself is the same every cycle and there is no meaningful "did anything change" check that would skip cycles. A weekly digest that summarizes the past seven days runs whether or not the data is exciting — that is cron. A monitor that watches a page and reports diffs is wasteful as cron and natural as change-detection. The test: can you write a cheap state hash that captures "is there anything to do?" If yes, use change-detection. If no, use cron.
What stops a signal-based agent from running away?
Rate limiting per source, deduplication of identical-payload events, batching of similar events into single runs, and dead-letter handling for events that fail repeatedly. These belong in the runtime, not in each agent's code. A heartbeat fleet without runtime-level rate limiting will take its first webhook storm and burn its monthly model budget in an afternoon.
How does heartbeat tooling fit with multi-agent orchestration?
Heartbeat patterns are the primitives that decide when an agent (or a multi-agent process) starts. Multi-agent orchestration is the pattern by which several agents coordinate inside one process once it has started. The two compose cleanly: a heartbeat triggers an orchestrated multi-agent flow, the flow does its work, the artifacts land in the system, the next heartbeat fires when the next condition is met.
How do I audit self-triggered agent runs for the EU AI Act?
The same way you audit human-triggered ones. Every run — regardless of trigger — should carry the governance metadata declared on the underlying job: risk level, data categories, human-oversight requirement, approver, approval timestamp. The audit trail records what triggered the run (cron tick, signal payload, change detection event), what the agent did, and what it produced. Self-triggered runs are not a special audit case — they are the same case at higher volume.
Heartbeat patterns are the architectural shift that turns "AI feature" into "AI worker." A fleet that runs only when prompted is a tool the operator picks up. A fleet that triggers itself, governed by the same audit trail and producing artifacts the operator can rely on, is a colleague — one that does the obvious work without being asked, and surfaces the non-obvious work the operator did not know to look for.