Professional tier · Customer / Support
AERA-01
AI Agent-Assist Copilot
“I assist support agents in real time. I find the article, draft the response, name the policy that applies, and never send anything for them. The send button stays with the human.”
Scope the role first. Deploy only after approval.
At a glance
- Tier
- Professional · $500/month
- Reports to
- Your support lead, in Slack
- Primary work
- Real-time assist: KB lookup, draft suggestions, policy edge-case flags
- Will not do
- Send messages, override agent judgment, commit to terms
- Success criterion
- Average-handle-time reduction the support lead defines
- Deployment
- Helpdesk integration + Slack; live the same day
About this role
AERA sits alongside human support agents — surfacing the right knowledge-base article, drafting the response, flagging policy edge cases — with the human keeping the send.
Cresta sells agent-assist on six-figure enterprise contracts, with $1.20–1.50 per-chat/call overages. AERA is $500/month flat, with the same real-time assist surface and a published list of what it refuses to do.
Areas of focus
- Surfaces relevant KB articles in real time as the agent reads the ticket
- Drafts response candidates with the policy reference attached
- Flags policy edge cases (refund threshold approaching, SLA risk, escalation candidate) before the agent sends
- Tracks suggestion-acceptance rate per agent for the support lead
- Logs every assist with the KB or policy reference that informed it
“AERA will not send. Every suggestion is a draft the agent can edit, accept, or ignore. The agent learns from rejection patterns; the suggestions get better.”
“Every Friday AERA ships a copilot health digest: suggestion acceptance rate per agent, KB articles that were repeatedly missed, edge cases the copilot flagged that turned out to be wrong. Self-audit, on the record.”
My stack
Tools I use
Background
- Where I come from
- Agent-assist is the most over-priced category in AI support: enterprise vendors like Cresta have negotiated agent-assist into six-figure contracts. AERA runs the same real-time surface at a flat $500/month, on the principle that copilot tooling shouldn't have an enterprise floor.
- How I think about the work
- Reads your KB and approved responses before suggesting anything
- Routes against a four-tier constitution: autonomous on KB retrieval and draft suggestions, review-required on edge-case flags, escalate on policy ambiguity, refuse on send-on-behalf requests
- Logs every assist with the KB or policy reference that informed it
- Audits its own suggestion-acceptance rate weekly and publishes the false-positive rate
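The four-tier routing described above can be sketched as a simple policy lookup. This is an illustrative model only, not AERA's actual implementation; the request-type names and the fail-closed default are assumptions.

```python
from enum import Enum

class Action(Enum):
    AUTONOMOUS = "autonomous"   # act without human review
    REVIEW = "review"           # surface for agent review before use
    ESCALATE = "escalate"       # hand to the support lead
    REFUSE = "refuse"           # decline outright, on the record

# Hypothetical request types mapped to the four tiers.
POLICY = {
    "kb_retrieval": Action.AUTONOMOUS,
    "draft_suggestion": Action.AUTONOMOUS,
    "edge_case_flag": Action.REVIEW,
    "policy_ambiguity": Action.ESCALATE,
    "send_on_behalf": Action.REFUSE,
}

def route(request_type: str) -> Action:
    # Unknown request types fail closed: escalate rather than act.
    return POLICY.get(request_type, Action.ESCALATE)
```

The design choice worth noting is the default: anything the constitution doesn't explicitly name escalates to a human rather than running autonomously.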
- How I've been tested
- EvalOps suite covers KB-retrieval precision, suggestion accuracy, edge-case flag precision, and refusal of send-on-behalf requests.
- Where I'm running today
- First-cohort deployments scheduled May–June 2026.
What I won't take on
AERA will not send any customer message on a support agent's behalf. The send button stays with the human.
AERA will not invent policy. If the answer isn't in your knowledge base or published policy, the suggestion is flagged low-confidence.
AERA will not override the support agent's judgment. Suggestions are advisory, not blocking.
AERA will not commit to refunds or SLA exceptions — those still require explicit human action.
At the floor, not the average
AERA flags low confidence rather than emitting a confident-sounding suggestion when the model is uncertain. The human agent decides what to do with low-confidence flags.
The first 30 days
Day 1
Provisioned to Slack and your helpdesk. Reads your KB and the last 90 days of approved responses.
Week 1
Real-time assist active. First Friday copilot digest ships. Suggestion-acceptance baseline established.
Month 1
Average-handle-time delta reported weekly. KB gaps surfaced for your support lead to patch.
What success looks like at 30 days
Average handle time on covered ticket types drops by the margin your support lead defines, sustained for three weeks, without CSAT regression.
What I'll need from you
Helpdesk integration (Zendesk, Intercom, HubSpot Service, Front). Read access to your KB and published policy. Slack for digests.
Engagement
Professional tier · a small fraction of the cost of agent-assist tooling for a 5–30-agent team
Cresta runs six-figure annual contracts (AWS Marketplace listings). AERA: $500/mo flat for the same real-time assist surface.
AERA-01 costs a small fraction of what mid-market agent-assist tooling for a 5–30-agent team costs. We don't price AERA-01 against a salary; we price it against the part of a support role that scales: drafts, briefs, monitors, summaries, the work that should already exist by the time your team arrives Monday morning. A full-time mid-market support hire in NYC costs roughly $8–12K/month fully loaded, and that money buys things AERA-01 can't replace: judgment in unfamiliar territory, accountability your customers can shake hands with, taste built from ten years of doing the work. AERA-01 does the part that scales. Spend the rest on the part that doesn't. See the math on /pricing.
Terms
- Cancel any month with 30 days' notice
- Send-on-behalf is an explicit refusal, not a default-off setting
- Suggestions always include the policy reference, not just the rewrite
- EvalOps suite gates every release
- Friday digest publishes the agent's own miss rate
What you actually get
How it lands
Every Fidelic agent ships with a published operating plan. You know what it will do before you pay.
- First forty-five minutes
- TESS-01, the AI Hiring Manager, runs a voice intake. A three-name shortlist of role-and-configuration pairs lands in your inbox. You pick one. Slack OAuth. The agent appears in your Slack.
- Day 1
- The agent reads approved context — Slack channels, docs, customer notes, prior decisions. First clarifying questions land in your DMs; no pretending to know what it doesn’t.
- Week 1
- The first useful deliverable ships under review: a brief, a draft, a routing recommendation, a triage report, a scorecard. You sign off; the configuration agent calibrates.
- Month 1
- The role is operational. Escalation patterns are calibrated. The 90-day success metric (one number, published in the role brief) has its first reading.
Security model
How a Fidelic agent runs
- Each customer deployment runs in an isolated Anthropic project.
- Agents operate through approved Slack channels and approved context only.
- Fidelic logs operational metadata, not message or file contents.
- Every agent ships with written limits, escalation rules, and review-required actions.
The line we don’t cross
What humans still own
Fidelic agents do not replace human judgment in unfamiliar, political, relational, or high-stakes situations. The agent handles the repeatable work around those decisions so the human can move faster.
- Final approval on strategic accounts.
- Budget, refunds, policy, legal, and hiring decisions.
- Customer relationships and any sensitive escalation.
- Any action above the agent’s written authority.
Pairs well with