Documentation

Agent Configuration

NavyAI works as a drop-in backend for popular AI coding agents. Point OpenAI-style clients at https://api.navy/v1 and Anthropic-style clients at https://api.navy, authenticate with your NavyAI API key, and select any model from the NavyAI catalog.
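The two base URLs map to the two standard wire formats. Assuming NavyAI mirrors the usual OpenAI and Anthropic headers and request shapes, the difference looks like this (the key is a placeholder; nothing is sent over the network):

```python
API_KEY = "sk-navy-YOURKEYHERE"  # placeholder key

# OpenAI-style client: bearer auth against /v1/chat/completions.
openai_style = {
    "url": "https://api.navy/v1/chat/completions",
    "headers": {"Authorization": f"Bearer {API_KEY}"},
    "body": {
        "model": "gpt-5.2",
        "messages": [{"role": "user", "content": "Hello"}],
    },
}

# Anthropic-style client: x-api-key auth against /v1/messages.
anthropic_style = {
    "url": "https://api.navy/v1/messages",
    "headers": {"x-api-key": API_KEY, "anthropic-version": "2023-06-01"},
    "body": {
        "model": "claude-sonnet-4.6",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": "Hello"}],
    },
}

print(openai_style["url"])
print(anthropic_style["url"])
```

Whichever style your agent speaks, the same NavyAI key works for both.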

Claude Code

Edit ~/.claude/settings.json (create the file if it does not exist) and add:

JSON
{
  "env": {
    "ANTHROPIC_AUTH_TOKEN": "sk-navy-YOURKEYHERE",
    "ANTHROPIC_BASE_URL": "https://api.navy",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "claude-haiku-4.5",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "claude-sonnet-4.6",
    "ANTHROPIC_DEFAULT_OPUS_MODEL": "claude-opus-4.6",
    "API_TIMEOUT_MS": "3000000"
  },
  "model": "sonnet[1m]"
}
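If you script your dotfiles, the same edit can be applied programmatically without clobbering unrelated settings. A minimal Python sketch (the key is a placeholder):

```python
import json
import pathlib

# Path Claude Code reads on startup.
path = pathlib.Path.home() / ".claude" / "settings.json"

# Load existing settings so unrelated keys survive the merge.
settings = json.loads(path.read_text()) if path.exists() else {}
settings.setdefault("env", {}).update({
    "ANTHROPIC_AUTH_TOKEN": "sk-navy-YOURKEYHERE",  # placeholder key
    "ANTHROPIC_BASE_URL": "https://api.navy",
})
settings["model"] = "sonnet[1m]"

path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(settings, indent=2))
```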

OpenAI Codex CLI

Create or edit ~/.codex/config.toml with:

TOML
model_provider = "navyai"
model = "gpt-5.2"

[model_providers.navyai]
name = "NavyAI via Chat Completions"
base_url = "https://api.navy/v1"
env_key = "NAVYAI_API_KEY"

Then set NAVYAI_API_KEY in your shell environment and run codex.

Roo Code

  1. Open Roo Code, click the gear icon, then go to Settings → API Configuration
  2. Set Provider Type to OpenAI Compatible
  3. Set Base URL to https://api.navy/v1
  4. Set API Key to your NavyAI key
  5. Pick any model (e.g. claude-sonnet-4.6)
  6. Click Save

OpenClaw

Create a .env file in your project root:

Env
LLM_PROVIDER="openai"
LLM_BASE_URL="https://api.navy/v1"
LLM_API_KEY="sk-navy-YOURKEYHERE"
LLM_MODEL="claude-sonnet-4.6"

Then run openclaw start.

Other agents

Works with any OpenAI- or Anthropic-compatible tool: set the base URL and API key as above and choose a model from the NavyAI catalog.

  • Use POST /v1/chat/completions if your agent already speaks OpenAI chat
  • Use POST /v1/messages if your agent expects Anthropic Messages
  • Use POST /v1/responses if you want a more unified OpenAI-style input and output shape
  • Enable stream: true for long-form generations and tool-heavy flows
  • Call GET /v1/models during startup or on a short cache window to build dynamic model pickers
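
The last point — a model picker backed by GET /v1/models with a short cache window — can be sketched as a small TTL cache. The fetch function is stubbed here so the sketch runs offline; in production it would call the endpoint:

```python
import time

class ModelCache:
    """Caches a model list for `ttl` seconds before re-fetching."""

    def __init__(self, fetch, ttl=300):
        self.fetch = fetch      # callable that returns a list of model IDs
        self.ttl = ttl          # cache window in seconds
        self._models = None
        self._stamp = 0.0

    def models(self):
        # Re-fetch when the cache is empty or older than the TTL.
        if self._models is None or time.monotonic() - self._stamp > self.ttl:
            self._models = self.fetch()
            self._stamp = time.monotonic()
        return self._models

# Stub standing in for GET https://api.navy/v1/models.
cache = ModelCache(lambda: ["claude-sonnet-4.6", "gpt-5.2"])
print(cache.models())
```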

Structured output

If your agent expects schemas, use response_format on chat completions or text.format on responses.
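Assuming NavyAI follows the standard OpenAI json_schema shape for response_format, a chat-completions request body might look like this (the schema and prompt are illustrative):

```python
import json

# Ask the model to answer as {"city": "..."} and nothing else.
body = {
    "model": "claude-sonnet-4.6",
    "messages": [
        {"role": "user", "content": "Which city is the Eiffel Tower in?"}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "city_answer",
            "schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
                "additionalProperties": False,
            },
        },
    },
}
print(json.dumps(body, indent=2))
```

On the Responses endpoint, the equivalent schema goes under text.format instead.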
