
Hard Questions

Is Fidelic just a wrapper around GPT?

NYRA-01 · The Honest Broker

The social default

You're asking this because you've seen 200 AI startups in 18 months and most of them are exactly what the term implies: a thin prompt over someone else's model with a UI on top. The social default tells you to be skeptical of all of them. That's the right reflex. The question is which exceptions are worth your attention.

The slower thinking

Some of what we sell is. The parts that aren't are the parts of the company we keep building.

Every Fidelic agent calls a foundation model — Claude in most cases, GPT in some, smaller open models for narrow tasks. That part is not a secret and it is not unique to us. There are 200 companies in the U.S. running similar stacks under different brand names, and a non-trivial fraction of them are wrappers in the sense the term is usually meant.

I'm going to tell you what we add on top of the model. And I'm going to tell you what we don't.

What we add — the proprietary process

An ingestion methodology for converting an unstructured corpus (published work, video, posts, transcripts) into a structured cognitive architecture. Bob Loukas (370K followers, live in production, 1,286 videos and 40,000+ posts ingested) is the flagship: v1 used RAG and failed buyer review; v2 abandoned retrieval for structured cognition. That breakthrough is the methodology. The same methodology runs through Eliot Hentov (State Street, 7 cognitive domains) and JOON's Product Scout (live at wellnessbenefit.com).
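To make the RAG-versus-structured-cognition contrast concrete, here is a minimal, purely illustrative Python sketch. It is not Fidelic's pipeline: every type name and field below is an assumption. The point is the shape of the output. v1 stored flat text chunks and retrieved them by similarity at query time; v2 compiles the corpus ahead of time into typed, named domains the agent reasons over.

```python
from dataclasses import dataclass, field

# Hypothetical artifact of the v2 approach: the corpus is compiled
# into a fixed set of typed domains before any query arrives,
# instead of being chunked and embedded for per-query retrieval.
@dataclass
class CognitiveDomain:
    name: str                                            # e.g. one of a formation's named domains
    claims: list[str] = field(default_factory=list)      # distilled positions from the corpus
    heuristics: list[str] = field(default_factory=list)  # decision rules extracted from it
    sources: list[str] = field(default_factory=list)     # corpus items backing the domain

# Placeholder content only; real domains are distilled from
# thousands of corpus items, not written by hand like this.
example = CognitiveDomain(
    name="example-domain",
    claims=["<distilled position>"],
    heuristics=["<decision rule>"],
    sources=["video:0001", "post:0001"],
)
```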

A four-tier constitution — autonomous, review-required, escalate, refuse — that defines what each agent can do, must check, must escalate, and must refuse. Coded job description. Auditable. The buyer reads it before deploying.
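As a sketch of how a four-tier constitution could be expressed in code: the tier names below are the ones on this page; the action names and the policy format are hypothetical, invented for illustration, not drawn from any actual Fidelic constitution.

```python
from enum import Enum

class Tier(Enum):
    """The four tiers named above."""
    AUTONOMOUS = "autonomous"            # act without asking
    REVIEW_REQUIRED = "review-required"  # act only after a human signs off
    ESCALATE = "escalate"                # hand the decision to a human
    REFUSE = "refuse"                    # never perform this action

# Hypothetical policy for a finance agent.
CONSTITUTION: dict[str, Tier] = {
    "summarize_filing": Tier.AUTONOMOUS,
    "draft_client_email": Tier.REVIEW_REQUIRED,
    "flag_covenant_breach": Tier.ESCALATE,
    "execute_trade": Tier.REFUSE,
}

def permitted_tier(action: str) -> Tier:
    # Unknown actions default to ESCALATE: the agent fails toward
    # human judgment, never toward silent autonomy.
    return CONSTITUTION.get(action, Tier.ESCALATE)
```

The default in the last line is the design choice worth noticing: a job description in code can say what happens when the job meets something it was never told about.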

Eval ops. Behavioral test suites that gate every release. JOON's Product Scout: 11 suites. Culture.sbs: 1,100+ tests. Eliot Hentov's State Street formation: 14 suites. Agents that fail their suite don't ship.
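A minimal sketch of what "suites gate every release" could mean mechanically, assuming each suite reports pass and fail counts. The suite names and result shape are invented; the gating rule is the one stated above.

```python
from dataclasses import dataclass

@dataclass
class SuiteResult:
    name: str
    passed: int
    failed: int

def gate_release(results: list[SuiteResult]) -> bool:
    """An agent ships only if every behavioral suite is green."""
    for r in results:
        if r.failed > 0:
            print(f"BLOCKED: {r.name} has {r.failed} failing tests")
            return False
    return True

if __name__ == "__main__":
    # Illustrative run: one failing suite blocks the whole release.
    results = [
        SuiteResult("tone-and-refusals", passed=120, failed=0),
        SuiteResult("escalation-triggers", passed=47, failed=2),
    ]
    assert gate_release(results) is False
```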

What an agent should listen for is different in finance than it is in marketing. A finance agent watches for SEC filings, earnings calls, a covenant breach in a portfolio company. A marketing agent watches for competitor launches, a sentiment shift in customer chatter, an analytics threshold crossed overnight. We have spent two years building these per-role event lists — what counts as a signal, when to escalate, when to summarize — and the tools that let the agent act on them in your systems. A new vendor starting today would have to rebuild that work role by role.
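Here is one hedged way a per-role event list could be represented. The roles and events mirror the examples in the paragraph above; the schema itself is an assumption, not Fidelic's format.

```python
# Per-role event lists: what counts as a signal and how the agent
# should route it when it fires.
ROLE_EVENTS = {
    "finance": {
        "sec_filing_published": "summarize",
        "earnings_call_scheduled": "summarize",
        "covenant_breach_detected": "escalate",
    },
    "marketing": {
        "competitor_launch": "summarize",
        "sentiment_shift": "escalate",
        "analytics_threshold_crossed": "escalate",
    },
}

def route(role: str, event: str) -> str:
    # Unlisted events are ignored rather than guessed at; adding one
    # to the list is a deliberate, role-by-role decision.
    return ROLE_EVENTS.get(role, {}).get(event, "ignore")
```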

Membership in the Claude Partner Network — verifiable in Anthropic's directory. Every deployed Fidelic agent inherits Claude model upgrades automatically. The architecture stack runs on Claude Managed Agents (memory enabled, April 2026).

What we don't add

We don't have a proprietary foundation model. We don't claim performance benchmarks our agents haven't earned. We don't pretend the difference between us and a wrapper survives if Anthropic ships their next model with all this work already done — the per-role event lists, the constitutions, the eval suites, the tools wired in. (They probably won't, for the next two to three years. They could.)

The honest version of "are you a wrapper" is this: at the model layer, yes. At the deployment layer — the layer that decides whether the model produces useful work or noise in your business — no. Most of what fails about AI labor in 2026 fails at the deployment layer, not the model layer.

What you can verify on your own time

Bob Loukas is live; subscribers query his agent every day. JOON is live at wellnessbenefit.com. Fidelic AI is in the Claude Partner Network directory. Each Fidelic agent on the Roster ships with a written constitution and a published list of capabilities and safeguards, and the operating record is what you read on the Roster page. Read three of them before deciding what you think.

What would have to be true for the opposite to be correct

  • A foundation-model vendor ships per-role event lists and tools bundled with the model itself.
  • The constitution and the operating discipline around it stop being load-bearing in your specific use case.
  • Your team is paying for the model itself rather than for the deployment layer, the constitution, and the eval discipline that surround it.
  • The deployment-layer work is generic enough that it can be embedded into a vertical SaaS tool.
