AI Providers

Sadie routes every LLM call through a tier-aware resolver. Chat, wiki compilation, brief generation, and policy judging each have a default tier, and each tier resolves to a concrete provider and model through the same precedence chain.

The chain is designed so that a user with their own API key always wins, a user preference without a key falls back to the deployment’s env keys, and the local stub only ever runs in development.

When Sadie needs a model for a task, getProviderForTask(taskClass, userPrefs) walks this list and stops at the first hit:

  1. User-supplied API key. If the user has set their preferred provider in Settings and saved their own key for it, that key is used. One call per user, cached by key suffix.
  2. User’s preferred provider with the deployment env key. If the user chose Anthropic or OpenAI in Settings but did not bring a key, Sadie uses the deployment’s ANTHROPIC_API_KEY or OPENAI_API_KEY for that provider.
  3. Tier env vars. If the task’s tier has AI_FRONTIER_*_PROVIDER and AI_FRONTIER_*_MODEL set, that provider is instantiated with the matching env key. This lets a deployment route tier 1 calls (wiki_lint, source_extraction, preference_normalization) to a cheap model while tier 2 calls (brief_generation, grounded_chat) use a workhorse.
  4. Raw ANTHROPIC_API_KEY or OPENAI_API_KEY. If no tier vars are set but a raw key exists, Sadie uses the Anthropic key first (canonical), then OpenAI, with a sensible default model per tier.
  5. Local stub (dev only). If no real credentials are configured and NODE_ENV !== "production", the local stub serves deterministic Sadie-voice templates. Chat explicitly refuses the stub and returns a MissingLlmProviderError so self-hosters cannot ship without real credentials.
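The five steps above can be sketched as follows. This is an illustrative reconstruction, not the real `model-router.ts`: the `getProviderForTask` name and precedence order come from the docs, but the types, the `Resolved` shape, the tier-name mapping, and the error message are assumptions, and tier model selection plus per-user caching are elided for brevity.

```typescript
type TaskClass = "wiki_lint" | "grounded_chat" | "brief_generation";

interface UserPrefs {
  preferredProvider?: "anthropic" | "openai";
  apiKey?: string; // user-supplied key, if any
}

interface Resolved {
  provider: string;
  apiKey: string;
  source: "user-key" | "user-pref-env" | "tier-env" | "raw-env" | "stub";
}

function envKeyFor(provider: string, env: Record<string, string | undefined>) {
  return env[provider === "anthropic" ? "ANTHROPIC_API_KEY" : "OPENAI_API_KEY"];
}

function getProviderForTask(
  taskClass: TaskClass,
  userPrefs: UserPrefs,
  env: Record<string, string | undefined>,
): Resolved {
  // 1. User-supplied API key always wins.
  if (userPrefs.preferredProvider && userPrefs.apiKey) {
    return { provider: userPrefs.preferredProvider, apiKey: userPrefs.apiKey, source: "user-key" };
  }
  // 2. User's preferred provider, backed by the deployment's env key.
  if (userPrefs.preferredProvider) {
    const key = envKeyFor(userPrefs.preferredProvider, env);
    if (key) return { provider: userPrefs.preferredProvider, apiKey: key, source: "user-pref-env" };
  }
  // 3. Tier env vars (AI_FRONTIER_<TIER>_PROVIDER; _MODEL elided here).
  //    Illustrative tier mapping: wiki_lint is tier 1 (SMALL), the rest tier 2.
  const tierName = taskClass === "wiki_lint" ? "SMALL" : "WORKHORSE";
  const tierProvider = env[`AI_FRONTIER_${tierName}_PROVIDER`];
  if (tierProvider) {
    const key = envKeyFor(tierProvider, env);
    if (key) return { provider: tierProvider, apiKey: key, source: "tier-env" };
  }
  // 4. Raw vendor keys: Anthropic first (canonical), then OpenAI.
  if (env.ANTHROPIC_API_KEY) return { provider: "anthropic", apiKey: env.ANTHROPIC_API_KEY, source: "raw-env" };
  if (env.OPENAI_API_KEY) return { provider: "openai", apiKey: env.OPENAI_API_KEY, source: "raw-env" };
  // 5. Local stub, dev only. (Chat refuses the stub and throws instead.)
  if (env.NODE_ENV !== "production") {
    return { provider: "local-stub", apiKey: "", source: "stub" };
  }
  throw new Error("MissingLlmProviderError: no LLM credentials configured");
}
```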

Each task class maps to a tier. The defaults live in packages/ai/src/model-router.ts:

  • Tier 1 (cheap frontier, high-volume maintenance): source_extraction, candidate_page_matching, wiki_patch_draft, contradiction_detection, today_card_features, preference_normalization, wiki_lint
  • Tier 2 (synthesis, normal interactive): wiki_page_create, today_card_copy, brief_generation, grounded_chat, studio_rewrite
  • Tier 3 (deliberate reasoning): contradiction_resolution, deep_synthesis_chat

For a self-hosted deployment, the minimum configuration is one raw vendor key:

```sh
ANTHROPIC_API_KEY=sk-ant-...
# or
OPENAI_API_KEY=sk-...
```

For tier routing, set both provider and model per tier:

```sh
AI_FRONTIER_SMALL_PROVIDER=anthropic
AI_FRONTIER_SMALL_MODEL=claude-haiku-4-5-20251001
AI_FRONTIER_WORKHORSE_PROVIDER=anthropic
AI_FRONTIER_WORKHORSE_MODEL=claude-sonnet-4-6
AI_FRONTIER_REASONING_PROVIDER=anthropic
AI_FRONTIER_REASONING_MODEL=claude-opus-4-7
```

To forbid the local stub even in development, set SADIE_ALLOW_LOCAL_AI_STUB=0. This makes calls fail loudly instead of silently returning template text.
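The stub policy reduces to a small guard. A hypothetical sketch: `SADIE_ALLOW_LOCAL_AI_STUB` and the production rule come from the docs, but the function name and exact comparison are illustrative.

```typescript
// Whether the deterministic local stub may serve a request.
function stubAllowed(env: Record<string, string | undefined>): boolean {
  if (env.NODE_ENV === "production") return false; // never in production
  if (env.SADIE_ALLOW_LOCAL_AI_STUB === "0") return false; // explicit opt-out in dev
  return true;
}
```

When the guard returns false and no real credentials are configured, the call fails loudly rather than returning template text.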

  • Personas - how provider choice interacts with voice
  • Policies - semantic policies route through the wiki_lint tier