Field Guide · hiring
AI for Customer Service: When It Works and When It Embarrasses You
Customer service is the easiest place to get an AI agent right and the easiest place to get it wrong. The difference is whether the agent works in the team's Slack rather than in the customer's chat, and whether anyone read the limit list before the install.
Customer service is the first question in most AI buying decisions because it is the largest cost center most teams have, the easiest to measure, and the most public when it goes wrong. The AI vendors have noticed. There are now thousands of AI customer service products, from chatbots-on-the-website to autonomous agents that close tickets without a human reading them. The market is loud. The buying decision is not. Most teams arrive with two categories of fear: that the agent will say something embarrassing on a customer-facing surface, and that the agent will become a wall the customer cannot escape. Both fears are right. This piece is about the deployment pattern that handles both.
Why it matters
Customer service is the role where buyers most often confuse the chatbot pattern with the agent pattern. A chatbot answers customers directly, in the customer's chat, and the company watches metrics. An agent works the support team's stream — the tickets that arrived overnight, the patterns across them, the drafts the team can review and send — and posts in the team's Slack, where the team reads its work before the customer ever sees it. The chatbot pattern fails publicly. The agent pattern fails privately, where it can be caught.
There are three patterns of AI in customer service, and they are not interchangeable. Buyers who pick the wrong one for their team usually pick it for cost reasons and then regret it for trust reasons.
1. Chatbot in the customer chat.
This is the original pattern — a chat widget on the website that the customer talks to first, before reaching a human. Done well, it answers the easy questions and routes the hard ones. Done poorly, it traps the customer in a loop and damages the relationship the customer service team has spent years building. The pattern's failures are public: the screenshots end up on social media within hours.
This pattern is the right call when the volume of easy questions is so high that the team would otherwise be overwhelmed, and when the team has the budget to invest in scripting the cases the chatbot must escalate. It is the wrong call when the team is small and the customer expects a human voice on first contact.
2. Agent in the support team's Slack.
The Roster's customer-service entry — KORA-01 — runs this pattern. The agent reads the ticketing system. The agent watches for patterns across tickets. The agent drafts responses that the team reviews in Slack before they go out. The agent does not talk to customers directly. The customer sees a human reply. The team sees the agent's draft, the reasoning, and the limit the agent did not cross.
The pattern works because every failure is caught before it reaches the customer. The agent's mistake is a draft a teammate didn't send, not a screenshot a customer posted. The team's relationship with customers does not change. The team's daily workload changes. The CS lead reads ten drafts at the start of the shift instead of writing them; the team ships more replies, with the same voice, in less time.
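The mechanics of pattern two are simple enough to sketch. The snippet below is a minimal illustration, not a product: fetch_open_tickets, draft_reply, and the webhook URL are hypothetical placeholders for your ticketing system, your model, and your Slack workspace, none of them named in this piece. What matters is the shape of the loop: the agent's output terminates in the team's channel, and only a human can turn a draft into a reply the customer sees.

```python
# Minimal sketch of pattern two: drafts go to the team's Slack, never to the customer.
# fetch_open_tickets() and draft_reply() are hypothetical stand-ins for your
# ticketing system's API and whatever model produces the draft.

import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder


def fetch_open_tickets():
    """Hypothetical: return unanswered tickets from the ticketing system."""
    return [{"id": "T-1042", "subject": "Billing question", "body": "..."}]


def draft_reply(ticket):
    """Hypothetical: ask the model for a draft in the team's voice."""
    return f"Suggested reply for {ticket['id']}: ..."


def post_draft_to_slack(ticket, draft):
    """Post the draft to the support channel; a human reviews and sends the real reply."""
    message = (
        f"*Ticket {ticket['id']}*: {ticket['subject']}\n"
        f"Proposed draft (review before sending):\n{draft}"
    )
    requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=10)


if __name__ == "__main__":
    for ticket in fetch_open_tickets():
        post_draft_to_slack(ticket, draft_reply(ticket))
```

Nothing in that loop can reach the customer, which is the whole point: the worst possible outcome is a bad message in a private channel.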
3. Agent that closes tickets autonomously.
Some vendors will sell an agent that closes tickets without a human in the loop, on the theory that the model is good enough to handle the routine cases unsupervised. This pattern works for some narrow shapes of support — password resets, shipping queries with a clean answer — and breaks badly the moment the case is unusual. Buyers considering this pattern should ask the vendor for the limit list and the escalation path before signing. If the limit list is short, the pattern is not yet ready for production. If the limit list is long, the pattern is closer to pattern two with a cosmetic difference in the chat surface.
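It helps to make "limit list plus escalation path" concrete before the vendor call. The sketch below is illustrative only; the specific limits (refund thresholds, legal language, cancellation intent) are invented for the example, not taken from any vendor's list. The property to check is structural: every limit routes to a named human, and auto-close is the exception that survives all of them, not the default.

```python
# Illustrative limit list for pattern three. The conditions and destinations
# here are assumptions made up for the example; the point is that each limit
# names a human escalation target, so no ticket dead-ends with the agent.

LIMIT_LIST = [
    # (limit name, test, human destination)
    ("refund_over_threshold", lambda t: t.get("refund_amount", 0) > 100, "billing_lead"),
    ("legal_or_compliance", lambda t: "lawyer" in t["body"].lower()
                                      or "gdpr" in t["body"].lower(), "legal_inbox"),
    ("cancellation_intent", lambda t: "cancel" in t["body"].lower(), "account_manager"),
]


def route(ticket):
    """Return auto_close only when no limit applies; otherwise escalate to a human."""
    for name, test, destination in LIMIT_LIST:
        if test(ticket):
            return {"action": "escalate", "limit": name, "to": destination}
    return {"action": "auto_close"}


print(route({"body": "Please cancel my subscription", "refund_amount": 0}))
# -> {'action': 'escalate', 'limit': 'cancellation_intent', 'to': 'account_manager'}
```

If the vendor cannot show you something with this shape, the autonomy is doing more work in the pitch than in the product.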
Which pattern is right for your team.
If the team has high ticket volume and a public-facing chat the company is willing to bet on, pattern one with a careful escalation script. If the team is doing high-judgment customer support — startup, SaaS, B2B — and the customer expects a human voice, pattern two, almost always. Pattern three is the marketing pattern and the one most likely to embarrass you. The number of teams it actually serves well is smaller than the number of teams it gets sold to.
The edge
The reason pattern two is the right pattern for most teams is the same reason agents work in general: the failure mode is private. A bad draft sits in Slack, where the lead catches it. A bad chatbot reply sits on social media, where the customer posted it. The cost of a private failure is a fix written into the agent's constitution; the cost of a public failure is a damaged customer relationship. Ask a buyer which of those they would rather manage and the answer is unambiguous. The pattern that gets sold hardest points the other way. That gap is where most AI customer service deployments go wrong.
Honest take
There is a real argument for pattern one in companies whose customer base genuinely prefers chat-first interaction and whose top question category is the same dozen FAQs. Some e-commerce companies are exactly this. Some D2C brands are exactly this. For them, a well-built chatbot with a generous escalation path is a better experience than a slow human reply. The honest version of the AI customer service argument acknowledges that and moves on. What it does not acknowledge is that most B2B and SaaS teams pretending to fit this pattern do not, and that the buyer who picks the wrong pattern for the team's actual customer relationship spends the next year apologizing.
Get AI customer service right by putting the agent in your team's Slack, not in the customer's chat. The customer's relationship is with your team. The agent's relationship is with your team's draft folder. Keep those two clear and the rest sorts itself out.