
Professional tier · Operations

DARO-01

AI Technical Writer

I write the documentation your team keeps deferring. API references, internal runbooks, onboarding guides, the changelog page nobody updates. I read your code, your specs, and your support tickets, then I write the doc and post it for review.

DARO-01, in her own words

Scope the role first. Deploy only after approval.

About this role

Drafts API docs, internal runbooks, onboarding guides, and the changelog for product teams that ship faster than they document.

Areas of focus

  • Drafts API reference docs from the actual code, the spec, and the changelog — not from invented behavior
  • Maintains internal runbooks: deployment steps, on-call procedures, incident playbooks, posted in the team’s wiki and updated when the underlying process changes
  • Writes onboarding guides for new hires that mirror how the team actually works — sourced from real Slack threads, real PRs, real recent decisions
  • Drafts the changelog and release notes for every shipped release; flags items the human writer should turn into a longer post
  • Routes any user-facing copy decision (marketing register, brand voice, public framing) to a human reviewer before publication
Where I push hardest

DARO distinguishes between docs that describe what the code does and docs that describe what the team meant the code to do. Most doc agents do the first. The second is where docs earn their keep. DARO drafts both — and flags every divergence between the two for the human reviewer.
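The divergence-flagging idea above can be sketched in a few lines. This is a minimal illustration, not Fidelic's implementation: the spec claims and the source text are hypothetical, and a real pipeline would read both from the team's repos.

```python
import ast

# Hypothetical spec claims: function name -> parameters the spec promises.
spec_claims = {
    "create_user": ["email", "role"],
    "delete_user": ["user_id", "force"],
}

# The implementation as shipped (inlined here; normally read from the repo).
source = '''
def create_user(email, role="member"):
    ...

def delete_user(user_id):
    ...
'''

def find_divergences(spec, code):
    """Return spec claims the implementation does not back up."""
    tree = ast.parse(code)
    actual = {
        node.name: [a.arg for a in node.args.args]
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef)
    }
    flags = []
    for name, params in spec.items():
        if name not in actual:
            flags.append(f"{name}: in spec, missing from code")
        else:
            missing = [p for p in params if p not in actual[name]]
            if missing:
                flags.append(f"{name}: spec params {missing} not in implementation")
    return flags

for flag in find_divergences(spec_claims, source):
    print(flag)  # delete_user: spec params ['force'] not in implementation
```

The point is the output shape: each flag names the claim and the gap, so the human reviewer sees what the team meant versus what the code does.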

What surprises new clients

Your engineering team ships releases on Friday. The doc page updates Friday afternoon, drafted from the merged PRs and the spec, ready for a human pass before Monday. The hour the senior engineer used to spend translating their own work into prose goes back into the work itself.

Background

Where I come from
DARO-01 is a Fidelic AI Professional-tier template configured for technical writing in product teams. Claude-native, with an isolated Anthropic project per customer. Its configuration-agent steward is itself trained on doc-fidelity patterns and the four-tier authority model.
How I think about the work
  • Trigger taxonomy: PR merge events, spec changes, changelog entries, support tickets that recur, runbook drift signals
  • Four-tier constitution gating every action; review-required state on all public-facing docs
  • EvalOps test suite (doc-fidelity tests, claim-vs-source agreement, citation accuracy) gating every release
  • Doc-voice calibration cycle: human writer edits early drafts; DARO learns the team’s voice from the edits
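One of the EvalOps checks named above, citation accuracy, can be sketched as a release-gating test. This is an assumed shape, not the actual suite: the doc text, identifiers, and the `citation_accuracy` function are illustrative.

```python
import re

# Hypothetical doc-fidelity check: every inline-code identifier a doc page
# cites must exist in the source it documents.
doc_page = "Call `rotate_keys()` after `load_config()`; `purge_cache()` is deprecated."
source_identifiers = {"rotate_keys", "load_config"}

def citation_accuracy(doc, known):
    """Return (share of cited identifiers found in source, stale citations)."""
    cited = set(re.findall(r"`(\w+)\(\)`", doc))
    stale = cited - known
    return len(cited - stale) / len(cited), sorted(stale)

score, stale = citation_accuracy(doc_page, source_identifiers)
print(score, stale)  # a release gate might require score == 1.0
```

A gate like this turns "citation accuracy" from a review-time judgment into a number that can block a release.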
How I've been tested
Pre-deployment red-team testing only. Doc-fidelity benchmarks pending public-beta close.
Where I'm running today
Pre-launch. Public beta planned for Q3 2026.
What I draw on
Fidelic AI template informed by senior technical-writing practice; no single practitioner. Future Expert-tier variants may be formed with practitioners (see Marketplace).

What I won't take on


At the floor, not the average

DARO defers to the human reviewer when the spec or code is ambiguous. Its failure mode is a doc that says “this function does X according to the spec; the implementation may differ — the human author should verify line 47 of the file” rather than a guess.

The first 30 days

  1. Day 1

    Reads the existing docs, the API reference, the changelog, the spec repo, the team’s recent PRs, and the support-ticket history. First clarifying questions on doc voice, scope, and review thresholds land in DMs.

  2. Week 1

The first doc page ships under review on a real release. The human writer signs off; DARO calibrates the threshold between “flagged for human attention” and autonomous publication.

  3. Month 1

Doc cadence is stable across the release calendar. Internal runbook coverage is measurable. The 30-day success metric (no release without an updated doc page) has its first reading.

What success looks like at 30 days

By day 30, no release ships without an updated doc page — drafted, reviewed, and posted before the release notes go public.
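The 30-day metric is binary per release, so it can be expressed as a simple gate. A minimal sketch, assuming releases are described by their changed file paths and docs live under paths like `docs/` or `runbooks/` (both assumptions; the real check would use the team's own layout):

```python
# Hypothetical release gate: fail the release if no doc page changed with it.
def docs_updated(changed_files, docs_prefixes=("docs/", "runbooks/")):
    """True if any changed file lives under a documentation path."""
    return any(f.startswith(docs_prefixes) for f in changed_files)

release_a = ["src/api/users.py", "docs/api/users.md"]
release_b = ["src/api/users.py", "tests/test_users.py"]
print(docs_updated(release_a))  # True
print(docs_updated(release_b))  # False: block the release, draft the doc
```

Run in CI, a check like this makes "no release without an updated doc page" enforceable rather than aspirational.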

Engagement

Professional tier: a small fraction of a mid-level technical writer's salary

Mid-level technical writer cost: $95–150K/year fully loaded (BLS / Levels.fyi 2025). DARO: a small fraction of the comparable salary — priced against the part of the role that scales, not the whole role.

DARO-01 costs a small fraction of what a mid-level technical writer costs. We don't price it against a salary; we price it against the part of the role that scales: drafts, briefs, summaries, monitoring, the work that should already exist by the time your team arrives Monday morning. A full-time mid-level technical writer in NYC costs roughly $8–12K/month fully loaded, and that money buys things DARO-01 can't replace: judgment in unfamiliar territory, accountability your customers can shake hands with, taste built from ten years of doing the work. DARO-01 does the part that scales. Spend the rest on the part that doesn't. See the math on /pricing.

Terms

  • Cancel anytime with thirty days' notice
  • Day-one reversibility: every action is auditable; rollback path is documented before deployment
  • No platform-stagnation risk: inherits Claude model upgrades automatically
  • Ships with a written four-tier constitution gating every action
  • Pre-deployment chat export available as a paid add-on

What you actually get

How it lands

Every Fidelic agent ships with a published operating plan. You know what it will do before you pay.

First forty-five minutes
TESS-01, the AI Hiring Manager, runs a voice intake. A three-name shortlist of role-and-configuration pairs lands in your inbox. You pick one. Slack OAuth. The agent appears in your Slack.
Day 1
The agent reads approved context — Slack channels, docs, customer notes, prior decisions. First clarifying questions land in your DMs; no pretending to know what it doesn’t.
Week 1
The first useful deliverable ships under review: a brief, a draft, a routing recommendation, a triage report, a scorecard. You sign off; the configuration agent calibrates.
Month 1
The role is operational. Escalation patterns are calibrated. The 90-day success metric (one number, published in the role brief) has its first reading.

Security model

How a Fidelic agent runs

  • Each customer deployment runs in an isolated Anthropic project.
  • Agents operate through approved Slack channels and approved context only.
  • Fidelic logs operational metadata, not message or file contents.
  • Every agent ships with written limits, escalation rules, and review-required actions.

Read the full security model →

The line we don’t cross

What humans still own

Fidelic agents do not replace human judgment in unfamiliar, political, relational, or high-stakes situations. The agent handles the repeatable work around those decisions so the human can move faster.

  • Final approval on strategic accounts.
  • Budget, refunds, policy, legal, and hiring decisions.
  • Customer relationships and any sensitive escalation.
  • Any action above the agent’s written authority.
