Documentation

Chat Completions

POST /v1/chat/completions is the main OpenAI-compatible text endpoint. It supports multi-turn conversations, streaming, vision (image inputs), tool/function calling, and various parameters for controlling output.

Use it when

  • Your app already uses the OpenAI Chat Completions format
  • You want easy migration from another OpenAI-compatible provider
  • You want streaming deltas and broad provider coverage

Code examples

curl -X POST https://api.navy/v1/chat/completions \
  -H "Authorization: Bearer sk-navy-YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5",
    "messages": [
      {"role": "system", "content": "You are a concise release assistant."},
      {"role": "user", "content": "Write three product taglines."}
    ],
    "stream": false
  }'
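The same request can be sketched in Python with only the standard library. This is a minimal illustration, not an official SDK: the base URL and key placeholder are taken from the curl example above, and the `chat` helper is a name made up here.

```python
import json
import urllib.request

API_URL = "https://api.navy/v1/chat/completions"
API_KEY = "sk-navy-YOUR_KEY"  # placeholder; substitute your real key

def chat(payload: dict) -> dict:
    """POST a Chat Completions payload and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = {
    "model": "gpt-5",
    "messages": [
        {"role": "system", "content": "You are a concise release assistant."},
        {"role": "user", "content": "Write three product taglines."},
    ],
    "stream": False,
}
# reply = chat(payload)  # network call; uncomment to run against the live API
```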

Parameters

  • model (string, required) — Model ID such as gpt-5, claude-sonnet-4, or gemini-2.5-pro
  • messages (array, required) — Chat history with role and content. Content can be a string or an array of parts (text, image_url) for vision models
  • max_tokens (integer, optional) — Maximum tokens to generate
  • temperature (number, optional) — Sampling temperature (0.0–2.0)
  • top_p (number, optional) — Nucleus sampling threshold (0.0–1.0)
  • top_k (integer, optional) — Top-K sampling parameter (supported by some providers)
  • stream (boolean, optional) — Enable streaming responses via SSE
  • stop (string or array, optional) — Stop sequence(s) to end generation
  • seed (integer, optional) — Random seed for reproducible outputs
  • frequency_penalty (number, optional) — Penalize tokens in proportion to how often they have already appeared (−2.0 to 2.0)
  • presence_penalty (number, optional) — Penalize tokens that have appeared at all, encouraging new topics (−2.0 to 2.0)
  • reasoning_effort (string, optional) — "none", "minimal", "low", "medium", "high", "xhigh" for thinking models
  • response_format (object, optional) — { type: "json_object" }, { type: "json_schema", json_schema: {...} }, or { type: "text" }
  • tools (array, optional) — Tool/function definitions for function calling
  • tool_choice (string or object, optional) — "auto", "none", "required", or { type: "function", function: { name: "..." } }
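As a concrete illustration of the tools and tool_choice parameters, here is a request body with one tool in the OpenAI function-calling format. The get_weather tool and its schema are hypothetical, invented for this sketch.

```python
import json

# Hypothetical tool definition in the OpenAI function-calling format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool name
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "gpt-5",
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": tools,
    "tool_choice": "auto",  # let the model decide whether to call the tool
}
print(json.dumps(payload, indent=2))
```

With "tool_choice": "auto" the model may answer directly or emit a tool call; "required" forces a call, and the object form pins a specific function by name.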

Vision support

Many models support image inputs via the image_url content type in messages. Both remote URLs and base64 data URIs are accepted.

JSON
{
  "role": "user",
  "content": [
    { "type": "text", "text": "What's in this image?" },
    { "type": "image_url", "image_url": { "url": "https://example.com/image.png" } }
  ]
}

Notes

  • Streaming returns OpenAI-style SSE chunks followed by data: [DONE]
  • Usage is tracked against your daily plan limits with model multipliers
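Consuming a streamed response means reading SSE lines, decoding each data: payload as a chunk, and concatenating the delta content until data: [DONE]. A sketch of that loop, assuming chunks follow the OpenAI streaming shape; the sample lines are fabricated for illustration.

```python
import json

def collect_stream(lines):
    """Accumulate assistant text from OpenAI-style SSE lines until data: [DONE]."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        data = line[len("data: "):]
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            text.append(delta["content"])
    return "".join(text)

# Fabricated sample stream for illustration.
sample = [
    'data: {"choices":[{"delta":{"role":"assistant"}}]}',
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print(collect_stream(sample))  # → Hello
```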