Field Guide · framework
Two kinds of AI teammate
"Software-as-a-worker" is becoming the category. But where the worker works — in your customer's WhatsApp or in your team's Slack — changes what kind of worker they are.
The word *teammate* is doing more work in AI marketing in 2026 than it did at any prior point in the field. Sierra, Decagon, Teammates.ai, Crescendo, Lindy, and every comparable vendor have settled on some variant of it. The Teammates.ai vision page reads: *"AI Teammates that work alongside humans."* Sierra's homepage opens with *"AI agents that talk and act like the people on your team."* The vocabulary is now category-shared. That's fine. It is, on its face, a reasonable name for what these systems are trying to be.
Naming a thing doesn't decide where it lives. Two AI systems that both call themselves *teammates* can be doing structurally different jobs in structurally different rooms — and the choice of room is the load-bearing fact, not the choice of word. There is a teammate that works in your customer's WhatsApp, owning customer-service tickets end to end. There is a teammate that works in your own team's Slack, drafting briefs and watching pipelines in front of the people who will use the work. These are two different categories of teammate, doing two different categories of work, with two different correct deployment models. The mistake the field is making in May 2026 is treating them as the same product.
This essay sorts them out, names the second kind precisely, and explains why where the agent works dictates how it should work.
Why it matters
The error is a buyer error before it is a vendor error. An SMB owner reads a marketing page, hears *AI teammate*, mentally maps it onto the SaaS pattern they know — software many people in the company can fire off in parallel — and signs up. Three months later the agent has overwritten a Notion doc twice because two different operators DM'd it incompatible instructions, the operations lead can't tell whether yesterday's draft survived the morning's revision, and the company is back to using the agent like a personal assistant for one specific person — which is the only deployment shape where the parallel-DM pattern doesn't break.
This is not theoretical. It is the most common cancellation story we see in the May 2026 buyer voice on Reddit's r/AI_Agents and r/smallbusiness: "Our AI agent kept doing the wrong thing because three of us were giving it different jobs at once." The agent isn't broken. The deployment model is wrong for the category. Specifying the model — which room, which traffic, which writes — is the work this essay does.
The first kind: the customer-facing teammate
Sierra, Decagon, Teammates.ai's Raya, and the AI receptionist category live here. The agent works in the *customer's* channel — WhatsApp, phone, email, web chat. The customer is the one in conversation with the agent. The team's job is to set the agent up, monitor escalations, and review edge cases.
This category benefits from real autonomy. The agent has to handle a long tail of customer questions without a human watching every keystroke; if every reply needed approval, the model wouldn't pencil out economically against a human support rep. The teammate-as-customer-service-operator is structurally autonomous because the alternative is a human reading every message in real time, which is exactly the cost being replaced.
For this category, parallel instances are not a defect, they're a feature. Twelve customers ask Raya twelve different questions on twelve different channels at the same time; Raya answers all twelve. The agent doesn't get confused because each conversation is a separate session with a separate customer. There is no shared state to clobber. The deployment model is rightly *one agent, many customer conversations, autonomous resolution.*
When this is the right call: high-volume customer-facing work where the cost of an unanswered message is high and the cost of a slightly-imperfect AI answer is acceptable. Call centers. Ticket queues. After-hours support. Outbound BDR (an adjacent shape — Adam at Teammates.ai, 11x's Alice). Fidelic does not compete in this category. Hire one of those vendors, not us, for that work. The /alternatives pages name the recommended competitor for each variant.
The second kind: the internal teammate
Fidelic's roster lives here. KORA-01, VYRA-01, VEXA-01, OREN-01, and every other agent on the public Roster works in the *team's own* Slack or Microsoft Teams channel. The customer experiences the work product (a brief that informed a renewal call, a follow-up email a rep sent, a competitor-watch alert that shaped a positioning meeting), but the customer never talks to the agent. The team does.
This category is structurally different from the first in three load-bearing ways, each of which forces a different deployment model.
**1. The traffic is internal, not external.** Internal traffic has a small number of people contributing instructions (the team) but high coordination friction between them (everyone's editing the same shared state — a draft, a brief, a database record, a CRM entry). External traffic has many independent customers contributing instructions but zero coordination friction between them (no two customers care about the same ticket). Internal coordination is the harder problem.
**2. The work writes shared state.** A customer-facing agent's "writes" are mostly the agent's own messages to the customer — they don't conflict with anyone else's writes. An internal agent's writes go into the team's shared systems: drafts in Notion, fields in Salesforce, records in Linear, posts in Slack itself. Three teammates editing the same Notion doc at the same time is a classic git-style merge problem. AI agents don't have a merge protocol. They overwrite.
**3. The team needs to see the work happening.** Trust in an internal teammate is built the way trust is built in a human teammate — by watching them work. A customer-service agent can be a black box because the customer is the audience; an internal agent can't be a black box because the team is the audience. If the team can't see what the agent is thinking, drafting, deciding, the team can't redirect, hand off, or interrupt. That's not a UX nicety. It's the substrate of teammate-style trust.
These three forces dictate a different deployment shape. The internal teammate has to be **in one channel, posting its work transparently, with the team able to watch and intervene.** The customer-facing teammate's deployment shape — autonomous, parallel, multi-channel — would be actively counterproductive for the internal one. The git-branch crossfire would happen on every meaningful operation.
The git-branch frame
The crossfire problem is best understood through the developer model. Two developers pushing commits to the same branch without coordinating produces merge conflicts; modern dev tooling refuses to resolve those conflicts silently and instead forces *pull request, merge, rebase* — a protocol that makes the conflict visible and resolvable by a human.
An AI agent has no such protocol. If three people on the team DM the same agent to update the same shared brief, the agent writes the latest instruction it received. The earlier instructions are silently lost. There is no merge UI, no rebase, no human reviewer in the loop. The agent doesn't have the architectural construct of "competing intentions"; it has one prompt at a time and one piece of state at a time.
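A minimal sketch makes the failure mode concrete. The class and method names below (`SharedDoc`, `NaiveAgent`, `handle_dm`) are hypothetical stand-ins, not any vendor's API; the point is only that with one prompt and one piece of state at a time, the last instruction wins and the earlier ones vanish without a trace:

```python
from dataclasses import dataclass

@dataclass
class SharedDoc:
    """Stands in for a Notion page or CRM record: one field, no history."""
    body: str = ""

class NaiveAgent:
    """Hypothetical agent with no merge protocol: each instruction
    produces a full rewrite of the shared document."""
    def __init__(self, doc: SharedDoc):
        self.doc = doc

    def handle_dm(self, instruction: str) -> None:
        # The agent acts on the latest prompt only; the earlier
        # intentions are not represented anywhere, so there is
        # nothing to merge and no conflict to surface.
        self.doc.body = f"Draft reflecting: {instruction}"

doc = SharedDoc()
agent = NaiveAgent(doc)

# Three teammates DM the same agent about the same brief.
agent.handle_dm("emphasize the Q2 renewal risk")
agent.handle_dm("cut the renewal section, focus on expansion")
agent.handle_dm("add a pricing comparison table")

# Only the last instruction survives; the first two are silently lost.
print(doc.body)  # -> "Draft reflecting: add a pricing comparison table"
```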
The fix isn't a more sophisticated merge protocol inside the agent. The fix is to move the coordination out of the agent and into the room where the team already coordinates — Slack or Microsoft Teams. If all the work is happening in one channel, three people can't independently ask for the same write because they can see each other asking. The channel is the merge protocol.
The Fidelic two-mode model
This is the operational practice that makes the internal-teammate philosophy real. It has two modes, each defined by which room the agent is in.
**Channel mode (work mode).** The agent lives in a channel the team invites it to. Every write to shared state happens here, in a thread the team can see. The agent posts its reasoning, posts the draft, posts the diff. A human can interrupt, redirect, hand off, take over. This is where briefs get drafted, accounts get watched, escalations get routed. The channel is the coordination layer; the thread is the unit of work.
**DM mode (think mode).** Any team member can DM the agent. In a DM, the agent answers questions, brainstorms, explains reasoning, surfaces its point of view, pulls up read-only context from Sanity or the company's published documents. *No tool calls that write state.* The agent doesn't draft a Notion doc in DM. It doesn't edit a CRM record. It doesn't send an email. The DM is a place for the agent's intellect and canon-text knowledge to show, not for its hands to act. If a team member asks for a task in DM, the agent points them to the channel.
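The two-mode split reduces to a small authorization rule: reads are allowed anywhere, writes only in the channel. A sketch, under the assumption of a tool registry keyed by whether a tool mutates shared state (the tool names below are illustrative, not a real integration list):

```python
from enum import Enum

class Room(Enum):
    CHANNEL = "channel"   # shared team channel: work mode
    DM = "dm"             # direct message: think mode

# Hypothetical registry: which tools mutate shared state.
WRITE_TOOLS = {"notion.update_page", "salesforce.update_record", "gmail.send"}
READ_TOOLS = {"sanity.fetch", "docs.search"}

def authorize_tool(room: Room, tool: str) -> bool:
    """Gate tool calls by room: reads anywhere, writes only in a
    channel thread the team can see."""
    if tool in READ_TOOLS:
        return True
    if tool in WRITE_TOOLS:
        return room is Room.CHANNEL
    return False  # unknown tools denied by default

assert authorize_tool(Room.DM, "sanity.fetch") is True          # think mode: read OK
assert authorize_tool(Room.DM, "notion.update_page") is False   # think mode: no writes
assert authorize_tool(Room.CHANNEL, "notion.update_page") is True  # work mode: write OK
```

Keeping the rule this coarse is deliberate: the boundary is the room, not the task, so there is no per-task judgment call for the agent to get wrong.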
This split is non-trivial. It costs us the ability to let any team member fan out asks in parallel; that is a real loss of optionality that some buyers will miss. We accept the loss because the alternative — letting the agent write to shared state from anywhere — produces the crossfire we just described.
The split also creates a useful side effect: DMs become the place where the agent's POV gets sharpest. With no tool-calling pressure, the agent's reasoning is what the team experiences. A KAEL-01 conversation in a DM about how to think about a Q2 plan reads more like a teammate's perspective than a SaaS tool's output, because that's structurally what it is.
Good parallelism and bad parallelism
The two-mode model can read as anti-parallel if you stop reading after the DM restriction. It is not. Parallelism is the source of most of the leverage in this category, and the architecture is designed to capture it — through a specific kind of parallelism.
Bad parallelism: many concurrent instances of the same agent, each accepting independent instructions from different people, all writing to the same shared state. This is the pattern Teammates.ai's marketing reaches for when it pitches autonomous agents handling unlimited concurrent customer conversations — which is fine for customer-facing work where the conversations don't share state, and exactly wrong for internal work where they do. Multiplying instances of the same agent into the same team's Notion or Salesforce produces exactly the crossfire described above.
Good parallelism: many different agents, each with their own role, each owning a different slice of the team's work, all operating in the same channel. KORA-01 watches account health and routes renewal risk. DRYN-01 runs the recurring analytics loop. NYRA-01 handles the recruiting coordination. KALA-01 produces the editorial calendar. Four agents, four roles, one #operations channel. They don't crossfire because they don't share write state — KORA writes to Salesforce, DRYN writes to a metrics doc, NYRA writes to Greenhouse, KALA writes to Notion. The channel is where the team sees all four working at once; the threads are where each agent's own work lives; the writes go to four different systems.
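The safety property here is checkable: the roster is conflict-free exactly when no two agents share a write target. A sketch, using the agent-to-system mapping from the paragraph above (the data structure itself is illustrative, not Fidelic's actual configuration format):

```python
# Each agent owns a disjoint slice of the team's write targets.
ROSTER = {
    "KORA-01": {"salesforce"},     # account health, renewal risk
    "DRYN-01": {"metrics_doc"},    # recurring analytics loop
    "NYRA-01": {"greenhouse"},     # recruiting coordination
    "KALA-01": {"notion"},         # editorial calendar
}

def writes_are_disjoint(roster: dict[str, set[str]]) -> bool:
    """True when no two agents can clobber the same system: the
    precondition for running them all in one channel at once."""
    seen: set[str] = set()
    for targets in roster.values():
        if seen & targets:
            return False  # two agents share a write target: crossfire risk
        seen |= targets
    return True

assert writes_are_disjoint(ROSTER)
# Adding a second agent that also writes to Notion breaks the property.
assert not writes_are_disjoint({**ROSTER, "VYRA-01": {"notion"}})
```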
This is the configuration the competitive matrix in our deck names "Many agents, one team" — the fifth column. Fidelic ships strong on it. Teammates.ai ships partial (one autonomous agent at a time per customer conversation, but no native multi-agent-in-one-channel pattern for an internal team). Lindy and 11x are weak — they're single-agent platforms with no multi-agent team model. The architectural commitment to many-agents-one-channel is what makes the Fidelic roster usable; the alternative (every agent in its own siloed deployment) is what the rest of the field actually ships.
The team-as-team pattern compounds in another direction worth naming. A team of agents in one channel can hand work off to each other — KORA flags an at-risk renewal, DRYN pulls the usage data, KALA drafts the save email, and the human approves the final move. Each agent does the slice they're formed for; the channel is the orchestration. This is closer to how a real team works than the autonomous-single-agent model is. The buyer doesn't manage one omniscient AI; they manage a roster of specialists, the same way they'd manage a department of humans.
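The handoff chain above can be sketched as a pipeline. Every function here is a hypothetical stub (in a real deployment each step would be an agent posting in its own channel thread); what the sketch preserves is the shape: specialist slices in sequence, human approval as the final gate:

```python
# Hypothetical stubs for the one-channel handoff chain.
def kora_flag_risk(account_id: str) -> str:
    # KORA-01: account health — flags the at-risk renewal.
    return f"{account_id}: renewal at risk, health score trending down"

def dryn_pull_usage(account_id: str) -> str:
    # DRYN-01: analytics — pulls the supporting usage data.
    return f"{account_id}: weekly active seats down from 40 to 22"

def kala_draft_email(risk: str, usage: str) -> str:
    # KALA-01: editorial — drafts the save email from both inputs.
    return f"Save email draft.\nContext: {risk}\nData: {usage}"

def human_review(draft: str) -> str:
    # Review-required default: a person approves before anything sends.
    return f"[approved by human]\n{draft}"

def renewal_save_pipeline(account_id: str) -> str:
    """KORA flags, DRYN pulls data, KALA drafts, the human approves."""
    risk = kora_flag_risk(account_id)
    usage = dryn_pull_usage(account_id)
    draft = kala_draft_email(risk, usage)
    return human_review(draft)

print(renewal_save_pipeline("acme-co"))
```

The design choice worth noticing is that orchestration lives in the channel, not in a hidden planner: each step is visible as it happens, so the human gate at the end is an informed one.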
Honest take
We are not claiming Fidelic teammates are autonomous in the Teammates.ai sense. They are explicitly *not.* The Roster's published authority model — autonomous, review-required, escalate, refuse — is granular per agent per task, and the default for any state-writing operation is review-required. An autonomous customer-facing teammate makes a different trade-off; that trade-off is right for that shape of work. For internal work that writes shared state, we trade autonomy for visibility, and we think that's the right call.
We are not claiming the channel-as-coordination-layer model solves every state-conflict problem. Two channels watching the same Notion page is still a crossfire risk; two teams using the same agent for adjacent work can still produce contradictory drafts. We *constrain* the crossfire by making the channel the canonical coordination point. We don't eliminate it. Every agent's published limit block names this honestly.
We are not claiming our model is right for customer-facing work. It isn't. If you need an agent to talk to customers in their WhatsApp, hire Sierra or Decagon or Teammates.ai. Their model is structurally correct for that. We don't compete there and we say so on the /alternatives pages.
Where the teammate works changes what kind of teammate they are. The customer-facing teammate works in the customer's channel and benefits from autonomy and parallelism. The internal teammate works in the team's channel and benefits from transparency and single-channel coordination. Calling both of them *AI teammates* is fine — the word is general enough to cover both — but the deployment model is not transferable. Buying an internal-teammate platform and deploying it on the customer-facing model produces a chatbot. Buying a customer-facing platform and deploying it on the internal model produces a confused, expensive personal assistant.
Fidelic is the second kind. The agent is in your team's Slack or Microsoft Teams. Channels are where the work gets done; DMs are where the thinking gets done. The git-branch crossfire problem is solved by moving the coordination out of the agent and into the room the team is already in. That is the load-bearing architectural commitment, and it is what *teammate* means when we use the word.