Field Guide · triggers
The Trigger Catalog
Six categories of real-world events do most of the work behind every Fidelic agent. A chatbot waits for a message; an agent listens for one of these.
The question comes up in every demo, usually about eight minutes in, usually phrased the same way: so what does the agent actually listen for? The honest answer is that we have a working taxonomy of six categories, the catalog grows every month, and the moment a buyer hears it they stop asking whether this is a chatbot in a trench coat. The categories below are the ones every Roster agent draws from. They are not exhaustive, and they are not theoretical. They are the events the agents in production right now are watching for, the events the next batch of agents will be configured against, and the layer of the stack the rest of the architecture hangs on.
Why it matters
A roster of agents without a trigger catalog is a list of chatbots with cleverer names. The whole argument for autonomous AI labor — the reason you would hire one of these things instead of opening a chat window — is that the agent is doing work in the absence of a prompt. Strip out the triggers and the work disappears. So when a buyer asks where the labor comes from, the answer has to be specific: here are the events, here are the agents that listen for them, here is what they produce when an event fires. Without that specificity, every claim about agent labor is vibes.
There are six categories in the working catalog. Each is defined by where the event originates and what kind of signal it carries.
1. Time-based
The clock fires the trigger. A Monday-morning brief is due before the 9 a.m. partner meeting. A board-deck data refresh runs the last Friday of the month. End-of-quarter close kicks off a checklist seventy-two hours before books are due. OREN-01 listens for the morning-brief schedule. KALA-01 listens for the editorial calendar's publish dates. Time-based triggers are the easiest to build and the easiest to underrate — most of what an experienced operator does is anchored to a recurring date.
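A time-based trigger is often nothing more than a date predicate evaluated on a schedule. As a minimal sketch, the "last Friday of the month" rule above can be written in plain Python (the function name is ours, not part of any Fidelic API):

```python
from datetime import date, timedelta

def is_last_friday(today: date) -> bool:
    # A date is the last Friday of its month when it is a Friday (weekday 4)
    # and the same weekday seven days later falls in the next month.
    return today.weekday() == 4 and (today + timedelta(days=7)).month != today.month
```

A scheduler calls the predicate once a day; when it returns true, the board-deck refresh kicks off.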
2. Threshold breaches
A metric crosses a value. NPS for a strategic account drops below 40. Renewal-at-risk dollars exceed a quarterly ceiling. P95 latency on the checkout endpoint spikes past 800ms during a deploy. KORA-01 watches account-health thresholds for renewals routing. VEXA-01 watches pipeline and funnel thresholds for B2B growth ops. The hard part is not the comparison; it is deciding which thresholds are worth a human's attention and which are noise.
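The comparison really is trivial, and the noise problem shows up immediately: a naive check fires on every sample below the floor. A minimal sketch (threshold value from the NPS example above; the function shape is our assumption) fires only on the crossing itself:

```python
def breach_fires(prev: float, curr: float, floor: float = 40.0) -> bool:
    # Fire only when the metric crosses the floor from above, so a
    # sustained dip produces one event instead of a stream of alerts.
    return prev >= floor and curr < floor
```

The same edge-not-level discipline applies to the latency and renewal-dollar examples; the threshold values themselves are where the human judgment lives.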
3. Document-arrival
A document lands somewhere the agent can see it. A 10-K is posted to EDGAR. A vendor's MSA arrives as an email attachment. A job description is pasted into a Slack channel. ETHA-01 listens for security questionnaires and vendor docs landing in the review queue. VELA-01 listens for contracts and policies routed for compliance review. Document-arrival triggers are deceptively simple — the work is in the parsing, not the detection.
4. Inbound message intents
Someone sends a message and the message carries a recognizable intent. A customer asks about renewal terms. A prospect requests pricing. An employee files an HR question in a ticketing system. ZADO-01 listens for inbound questions in Slack and answers from published docs. VYRA-01 listens for inbound discovery-call requests. The classifier matters less than the intent ontology — what counts as the same kind of question — which is where most of the engineering goes.
5. Reference-data updates
Something the agent has been watching changes. A competitor's pricing page shifts a number. A regulator publishes a new rule. A vendor ships a changelog with a breaking API note. KALA-01 watches competitor content surfaces for the editorial calendar. SYRA-01 watches search-intent shifts and SERP changes. ARNA-01 watches the brand style guide for revisions that should propagate across surfaces. Reference-data triggers are where domain expertise lives — knowing which sources to watch, and at what cadence, is most of the value.
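The detection side of a reference-data watcher can be as small as a content hash compared against the last snapshot. A minimal sketch, assuming fetched bytes and an in-memory snapshot store (a production version would persist the snapshot and diff the content, not just flag it):

```python
import hashlib

def source_changed(snapshot: dict, source_id: str, content: bytes) -> bool:
    # Hash the fetched content and compare against the stored digest;
    # a new digest means the watched source changed and the trigger fires.
    digest = hashlib.sha256(content).hexdigest()
    if snapshot.get(source_id) == digest:
        return False
    snapshot[source_id] = digest
    return True
```

The sketch is the easy part; choosing which `source_id`s to watch and how often to poll them is the domain expertise the section describes.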
6. Human handoff signals
A person on the team flags something. A manager marks a deal as at-risk in the CRM. An ops lead tags a ticket for escalation. A reviewer rejects a draft with a reason. KORA-01 listens for at-risk flags from account managers. VEXA-01 listens for handoffs out of automated sequences and into human review. Handoff signals are the category most often missed in early agent designs, because they require the agent to be co-located with a team's existing workflow rather than parallel to it.
Most production agents listen across more than one category. KORA-01 takes time-based, threshold, and handoff triggers. KALA-01 takes time-based and reference-data triggers. The category is a useful abstraction for talking about what an agent does; the agent itself is a bundle.
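That bundling can be written down as a subscription map. A hypothetical sketch with category names of our choosing, wiring the two examples above:

```python
# Hypothetical wiring: each agent subscribes to the trigger
# categories it listens across; the dispatcher fans events out
# to every subscribed agent.
SUBSCRIPTIONS = {
    "KORA-01": {"time", "threshold", "handoff"},
    "KALA-01": {"time", "reference-data"},
}

def agents_for(category: str) -> list[str]:
    # Which agents should receive an event of this category.
    return sorted(a for a, cats in SUBSCRIPTIONS.items() if category in cats)
```

The category abstraction lives in the map; the agent is whatever bundle of subscriptions it ends up holding.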
The edge
The most useful triggers are the ones that don't look like triggers. A competitor's pricing page swaps a number on a Tuesday afternoon. No press release, no announcement, no email blast. Just a quiet diff on a public URL. KALA-01 catches it within the hour and routes a draft response brief — what the change implies, which content surfaces of yours go stale, what the editorial calendar should reshuffle to address it — into Slack before the marketing team's standup the next morning. The trigger is mundane. The work it sets in motion is not.
Honest take
A trigger catalog is easy to write and hard to maintain. Triggers without good outcomes still produce noise, which trains the team to ignore the agent. Sources change shape — a vendor moves their changelog, a regulator restructures a portal — and the listener silently breaks. The catalog is not a moat by itself; the moat is the discipline of pruning it, watching what fires, and cutting what doesn't earn its place in someone's morning.
So the next time a buyer asks, eight minutes into the demo, what the agent actually listens for, the answer is specific: one of six categories, configured against your sources, pruned by what earns its place. Then you point at the Roster.