Documentation

Responses

POST /v1/responses exposes the newer OpenAI Responses API style through a single endpoint.

Use it when

  • You want a unified input/output shape
  • Your app already uses OpenAI Responses
  • You want schema-driven text output and reasoning controls

Code examples

curl -X POST https://api.navy/v1/responses \
  -H "Authorization: Bearer sk-navy-YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5",
    "input": "Summarize the changelog in four bullets."
  }'
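The same request can be built from Python with only the standard library. This is a sketch: the endpoint URL and the placeholder key are taken from the curl example above, and the request is constructed but not sent.

```python
import json
import urllib.request

API_KEY = "sk-navy-YOUR_KEY"  # placeholder key from the example above

def build_request(model: str, user_input: str) -> urllib.request.Request:
    """Build a POST request for /v1/responses without sending it."""
    body = json.dumps({"model": model, "input": user_input}).encode("utf-8")
    return urllib.request.Request(
        "https://api.navy/v1/responses",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("gpt-5", "Summarize the changelog in four bullets.")
# urllib.request.urlopen(req) would send the request and return the response.
```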

Parameters

  • model (string, required) — Model ID to use
  • input (string or array, required) — Prompt or array of input items
  • stream (boolean, optional) — Stream response as SSE
  • instructions (string, optional) — System-level instructions
  • tools (array, optional) — Function tool definitions
  • tool_choice (string or object, optional) — "auto", "none", "required", or an object selecting a specific tool
  • temperature (number, optional) — 0.0–2.0
  • top_p (number, optional) — Nucleus sampling threshold
  • max_output_tokens (integer, optional) — Maximum tokens to generate
  • reasoning (object, optional) — Reasoning controls for thinking models
  • text (object, optional) — Text format controls including JSON schema
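As a sketch of how several optional parameters combine, the payload below adds instructions, a function tool, reasoning effort, and a JSON-schema text format. The get_weather tool, the schema, and the exact reasoning/text field names are illustrative assumptions modeled on OpenAI's Responses API; verify them against NavyAI's actual behavior.

```python
import json

# Hypothetical payload combining several optional parameters.
# The "get_weather" tool and both nested schemas are illustrative only.
payload = {
    "model": "gpt-5",
    "input": "What's the weather in Oslo?",
    "instructions": "Answer concisely.",
    "tools": [
        {
            "type": "function",
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    "tool_choice": "auto",
    "max_output_tokens": 300,
    "reasoning": {"effort": "low"},            # assumed shape for reasoning controls
    "text": {                                  # assumed shape for schema-driven output
        "format": {
            "type": "json_schema",
            "name": "weather_report",
            "schema": {
                "type": "object",
                "properties": {"summary": {"type": "string"}},
                "required": ["summary"],
            },
        }
    },
}

body = json.dumps(payload)  # serialized request body for the -d flag
```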

Notes

  • OpenAI-owned models are forwarded in the native Responses format
  • Non-OpenAI models are adapted into a compatible response shape
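When stream is true, the endpoint replies as server-sent events. The sketch below parses the data: lines of such a stream; only the data:/[DONE] framing is standard SSE, and the event payload shapes in the sample are fabricated for illustration.

```python
import json

def parse_sse(raw: str):
    """Yield decoded JSON events from a raw SSE body, stopping at [DONE]."""
    for line in raw.splitlines():
        if not line.startswith("data: "):
            continue  # skip blank lines, comments, and event: fields
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        yield json.loads(data)

# Fabricated stream body; real event types/fields may differ.
sample = (
    'data: {"type": "response.output_text.delta", "delta": "Hel"}\n\n'
    'data: {"type": "response.output_text.delta", "delta": "lo"}\n\n'
    "data: [DONE]\n\n"
)
text = "".join(event["delta"] for event in parse_sse(sample))
```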