OpenAI Chat Completions (HTTP)
OpenClaw’s Gateway can serve a small OpenAI-compatible Chat Completions endpoint. This endpoint is disabled by default; enable it in config first.

POST /v1/chat/completions

- Same port as the Gateway (WS + HTTP multiplex):
http://<gateway-host>:<port>/v1/chat/completions
The same surface also serves:

- `GET /v1/models`
- `GET /v1/models/{id}`
- `POST /v1/embeddings`
- `POST /v1/responses`

Requests are handled in-process (the same code path as an `openclaw agent`), so routing/permissions/config match your Gateway.
Authentication
Uses the Gateway auth configuration. Send a bearer token:

Authorization: Bearer <token>

- When `gateway.auth.mode="token"`, use `gateway.auth.token` (or `OPENCLAW_GATEWAY_TOKEN`).
- When `gateway.auth.mode="password"`, use `gateway.auth.password` (or `OPENCLAW_GATEWAY_PASSWORD`).
- If `gateway.auth.rateLimit` is configured and too many auth failures occur, the endpoint returns `429` with `Retry-After`.
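As a client-side sketch, the credential can be resolved from the same environment variables; the fallback order shown here is an assumption for illustration, not prescribed by OpenClaw:

```python
import os

def gateway_headers() -> dict:
    """Build the Authorization header from OpenClaw's documented env vars."""
    secret = os.environ.get("OPENCLAW_GATEWAY_TOKEN") or os.environ.get(
        "OPENCLAW_GATEWAY_PASSWORD"
    )
    if not secret:
        raise RuntimeError("no Gateway credential in the environment")
    return {"Authorization": f"Bearer {secret}"}
```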
Security boundary (important)
Treat this endpoint as a full operator-access surface for the gateway instance.

- HTTP bearer auth here is not a narrow per-user scope model.
- A valid Gateway token/password for this endpoint should be treated like an owner/operator credential.
- Requests run through the same control-plane agent path as trusted operator actions.
- There is no separate non-owner/per-user tool boundary on this endpoint; once a caller passes Gateway auth here, OpenClaw treats that caller as a trusted operator for this gateway.
- For shared-secret auth modes (`token` and `password`), the endpoint restores the normal full operator defaults even if the caller sends a narrower `x-openclaw-scopes` header.
- Trusted identity-bearing HTTP modes (for example trusted proxy auth or `gateway.auth.mode="none"`) still honor the declared operator scopes on the request.
- If the target agent policy allows sensitive tools, this endpoint can use them.
- Keep this endpoint on loopback/tailnet/private ingress only; do not expose it directly to the public internet.
In short:

- `gateway.auth.mode="token"` or `"password"` + `Authorization: Bearer ...`:
  - proves possession of the shared gateway operator secret
  - ignores a narrower `x-openclaw-scopes` header
  - restores the full default operator scope set
  - treats chat turns on this endpoint as owner-sender turns
- Trusted identity-bearing HTTP modes (for example trusted proxy auth, or `gateway.auth.mode="none"` on private ingress):
  - authenticate some outer trusted identity or deployment boundary
  - honor the declared `x-openclaw-scopes` header
  - only get owner semantics when `operator.admin` is actually present in those declared scopes
Agent-first model contract
OpenClaw treats the OpenAI `model` field as an agent target, not a raw provider model id.

- `model: "openclaw"` routes to the configured default agent.
- `model: "openclaw/default"` also routes to the configured default agent.
- `model: "openclaw/<agentId>"` routes to a specific agent.

Request headers:

- `x-openclaw-model: <provider/model-or-bare-id>` overrides the backend model for the selected agent.
- `x-openclaw-agent-id: <agentId>` remains supported as a compatibility override.
- `x-openclaw-session-key: <sessionKey>` fully controls session routing.
- `x-openclaw-message-channel: <channel>` sets the synthetic ingress channel context for channel-aware prompts and policies.

The spellings `model: "openclaw:<agentId>"` and `model: "agent:<agentId>"` are also accepted.
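Put together, a request targeting a specific agent with an optional backend override might be assembled like this; the agent id `support`, the session key, and the override model are illustrative values, not part of any real config:

```python
import json

# "openclaw/support" is an agent target; "support" is a made-up agent id.
payload = {
    "model": "openclaw/support",
    "messages": [{"role": "user", "content": "hello"}],
}
# Optional OpenClaw routing headers from the contract above.
headers = {
    "Content-Type": "application/json",
    "x-openclaw-model": "openai/gpt-5.4",   # backend model override
    "x-openclaw-session-key": "ticket-42",  # explicit session routing
}
body = json.dumps(payload)
```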
Enabling the endpoint
Set `gateway.http.endpoints.chatCompletions.enabled` to `true`:
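For example, assuming a JSON-style config file (the surrounding structure is a sketch; only the key path comes from the docs):

```json
{
  "gateway": {
    "http": {
      "endpoints": {
        "chatCompletions": { "enabled": true }
      }
    }
  }
}
```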
Disabling the endpoint
Set `gateway.http.endpoints.chatCompletions.enabled` to `false`:
Session behavior
By default the endpoint is stateless per request (a new session key is generated each call). If the request includes an OpenAI `user` string, the Gateway derives a stable session key from it, so repeated calls can share an agent session.
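For instance, reusing a stable `user` value keeps turns in one session (the value shown is illustrative); alternatively, `x-openclaw-session-key` pins the session explicitly:

```python
import json

# Same "user" on every call -> the Gateway derives the same session key,
# so these requests can share one agent session.
payload = {
    "model": "openclaw/default",
    "user": "alice@example.com",  # illustrative stable id
    "messages": [{"role": "user", "content": "continue where we left off"}],
}
body = json.dumps(payload)
```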
Why this surface matters
This is the highest-leverage compatibility set for self-hosted frontends and tooling:

- Most Open WebUI, LobeChat, and LibreChat setups expect `/v1/models`.
- Many RAG systems expect `/v1/embeddings`.
- Existing OpenAI chat clients can usually start with `/v1/chat/completions`.
- More agent-native clients increasingly prefer `/v1/responses`.
Model list and agent routing
What does `/v1/models` return?

An OpenClaw agent-target list. The returned ids are `openclaw`, `openclaw/default`, and `openclaw/<agentId>` entries. Use them directly as OpenAI `model` values.

Does `/v1/models` list agents or sub-agents?

It lists top-level agent targets, not backend provider models and not sub-agents. Sub-agents remain internal execution topology; they do not appear as pseudo-models.
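For illustration, a response in the standard OpenAI list shape might look like the following; the exact fields OpenClaw returns are not specified here, and `support` is a made-up agent id:

```python
# Hypothetical /v1/models response body, assuming the usual OpenAI list format.
models = {
    "object": "list",
    "data": [
        {"id": "openclaw", "object": "model"},
        {"id": "openclaw/default", "object": "model"},
        {"id": "openclaw/support", "object": "model"},  # hypothetical agent
    ],
}
ids = [m["id"] for m in models["data"]]
```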
Why is `openclaw/default` included?

`openclaw/default` is the stable alias for the configured default agent. That means clients can keep using one predictable id even if the real default agent id changes between environments.
How do I override the backend model?

Use `x-openclaw-model`. Examples:

x-openclaw-model: openai/gpt-5.4
x-openclaw-model: gpt-5.4

If you omit it, the selected agent runs with its normal configured model choice.
How do embeddings fit this contract?

`/v1/embeddings` uses the same agent-target model ids. Use `model: "openclaw/default"` or `model: "openclaw/<agentId>"`.
When you need a specific embedding model, send it in `x-openclaw-model`.
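For instance, a request pinning a backend embedding model could be built like this; the header name is from the contract above, while the override model id is purely illustrative:

```python
import json

payload = {
    "model": "openclaw/default",               # agent target
    "input": ["first chunk", "second chunk"],  # string or array of strings
}
headers = {
    "Content-Type": "application/json",
    "x-openclaw-model": "openai/text-embedding-3-large",  # illustrative override
}
body = json.dumps(payload)
```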
Without that header, the request passes through to the selected agent’s normal embedding setup.

Streaming (SSE)
Set `stream: true` to receive Server-Sent Events (SSE):

- `Content-Type: text/event-stream`
- Each event line is `data: <json>`
- Stream ends with `data: [DONE]`
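A minimal client-side parser for this framing; the chunk payloads in the sample stream assume the standard OpenAI streaming `choices[].delta` shape, which this page does not itself specify:

```python
import json

def iter_sse_json(lines):
    """Yield decoded JSON chunks from 'data: <json>' lines, stopping at [DONE]."""
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank separators between events
        data = line[len("data: "):]
        if data == "[DONE]":
            return
        yield json.loads(data)

stream = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    '',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: [DONE]',
]
text = "".join(c["choices"][0]["delta"]["content"] for c in iter_sse_json(stream))
```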
Open WebUI quick setup
For a basic Open WebUI connection:

- Base URL: `http://127.0.0.1:18789/v1`
- Docker on macOS base URL: `http://host.docker.internal:18789/v1`
- API key: your Gateway bearer token
- Model: `openclaw/default`

Checks:

- `GET /v1/models` should list `openclaw/default`
- Open WebUI should use `openclaw/default` as the chat model id
- If you want a specific backend provider/model for that agent, set the agent’s normal default model or send `x-openclaw-model`
As long as `GET /v1/models` lists `openclaw/default`, most Open WebUI setups can connect with the same base URL and token.
Examples

Key points:

- `/v1/models` returns OpenClaw agent targets, not raw provider catalogs.
- `openclaw/default` is always present so one stable id works across environments.
- Backend provider/model overrides belong in `x-openclaw-model`, not the OpenAI `model` field.
- `/v1/embeddings` supports `input` as a string or array of strings.

Non-streaming:
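A minimal non-streaming call, sketched with Python’s standard library; the host, port, and token are placeholders, and the response path assumes the standard OpenAI `choices[0].message.content` shape:

```python
import json
import urllib.request

req = urllib.request.Request(
    "http://127.0.0.1:18789/v1/chat/completions",
    data=json.dumps({
        "model": "openclaw/default",
        "messages": [{"role": "user", "content": "ping"}],
    }).encode(),
    headers={
        "Authorization": "Bearer <token>",
        "Content-Type": "application/json",
    },
)
# Uncomment against a live gateway:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```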