Agent Configuration
Configure AI coding agents to use NavyAI as their backend. NavyAI is a drop-in backend for popular agents: point OpenAI-style clients at https://api.navy/v1 and Anthropic-style clients at https://api.navy, and authenticate with your NavyAI API key. Any model available on NavyAI can be selected.
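Before wiring up an agent, you can sanity-check the OpenAI-style endpoint directly. A minimal sketch using only the Python standard library; the key and model name are placeholders:

```python
import json
import urllib.request

BASE_URL = "https://api.navy/v1"   # OpenAI-style endpoint
API_KEY = "sk-navy-YOURKEYHERE"    # placeholder NavyAI key

def chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = chat_request("claude-sonnet-4.6", "Say hello.")
print(req.full_url)  # https://api.navy/v1/chat/completions
```

Sending it is one more line (`urllib.request.urlopen(req)`); any OpenAI-compatible SDK works the same way once its base URL points at NavyAI.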
Claude Code
Open ~/.claude/settings.json (creating it if it doesn't exist) and use:
```json
{
  "env": {
    "ANTHROPIC_AUTH_TOKEN": "sk-navy-YOURKEYHERE",
    "ANTHROPIC_BASE_URL": "https://api.navy",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "claude-haiku-4.5",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "claude-sonnet-4.6",
    "ANTHROPIC_DEFAULT_OPUS_MODEL": "claude-opus-4.6",
    "API_TIMEOUT_MS": "3000000"
  },
  "model": "sonnet[1m]"
}
```
OpenAI Codex CLI
Create or edit ~/.codex/config.toml with:
```toml
model_provider = "navyai"
model = "gpt-5.2"

[model_providers.navyai]
name = "NavyAI via Chat Completions"
base_url = "https://api.navy/v1"
env_key = "NAVYAI_API_KEY"
```
Then set `NAVYAI_API_KEY` in your shell environment and run `codex`.
Roo Code
- Open Roo Code, click the gear icon, then go to Settings → API Configuration
- Set Provider Type to `OpenAI Compatible`
- Set Base URL to `https://api.navy/v1`
- Set API Key to your NavyAI key
- Pick any model (e.g. `claude-sonnet-4.6`)
- Click Save
OpenClaw
Create a .env file in your project root:
```env
LLM_PROVIDER="openai"
LLM_BASE_URL="https://api.navy/v1"
LLM_API_KEY="sk-navy-YOURKEYHERE"
LLM_MODEL="claude-sonnet-4.6"
```
Then run `openclaw start`.
Other agents
Works with any OpenAI- or Anthropic-compatible tool: set the base URL and API key as above and choose a model from the NavyAI catalog.
Recommended defaults (HTTP)
- Use `POST /v1/chat/completions` if your agent already speaks OpenAI chat
- Use `POST /v1/messages` if your agent expects Anthropic Messages
- Use `POST /v1/responses` if you want a more unified OpenAI-style input and output shape
- Enable `stream: true` for long-form generations and tool-heavy flows
- Call `GET /v1/models` during startup, or on a short cache window, to build dynamic model pickers
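The last point can be sketched as a small TTL cache around the models endpoint. The `ModelCache` class, the 300-second TTL, and the fake fetcher below are illustrative, not part of any NavyAI SDK:

```python
import time

class ModelCache:
    """Serve a cached model list, refetching after a short TTL."""
    def __init__(self, fetch, ttl_seconds=300):
        self.fetch = fetch          # callable that performs GET /v1/models
        self.ttl = ttl_seconds
        self._models = None
        self._stamp = 0.0

    def models(self):
        now = time.monotonic()
        if self._models is None or now - self._stamp > self.ttl:
            self._models = self.fetch()   # refresh the cache
            self._stamp = now
        return self._models

# Stand-in for the real HTTP call, so the caching behaviour is visible:
calls = []
def fake_fetch():
    calls.append(1)
    return ["claude-sonnet-4.6", "gpt-5.2"]

cache = ModelCache(fake_fetch, ttl_seconds=300)
cache.models()
cache.models()
print(len(calls))  # 1 — the second lookup was served from cache
```

In a real agent, `fetch` would hit `GET /v1/models` with your API key and return the model IDs for the picker.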
Structured output
If your agent expects schemas, use `response_format` on chat completions or `text.format` on responses.
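As a sketch, a chat-completions request body with `response_format` might look like this; the schema, its name, and the model are illustrative, while the `json_schema` shape follows the OpenAI structured-output convention:

```python
# Illustrative request body: constrain the reply to a JSON object
# with a single required "city" string field.
payload = {
    "model": "gpt-5.2",
    "messages": [
        {"role": "user", "content": "Extract the city from: 'I live in Oslo.'"}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "city_extraction",
            "schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    },
}
print(payload["response_format"]["type"])  # json_schema
```

The same schema idea applies to `text.format` on the responses endpoint, with the format object attached under `text` instead.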