Creates a model response for the given chat conversation.

Given a list of messages comprising a conversation, the model will return a response. Compatible with OpenAI. See https://platform.openai.com/docs/api-reference/chat/create
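
Because the endpoint is OpenAI-compatible, one way to call it is with the OpenAI Python client. The sketch below is illustrative only: the base URL and API key are placeholders, not values taken from this page, and the sampling parameters shown are the defaults documented under Body Params.

```python
from openai import OpenAI

# Placeholders (assumptions): substitute your deployment's base URL and Bearer credential.
client = OpenAI(base_url="https://YOUR_ENDPOINT/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="nvidia/nemotron-3-super-120b-a12b",  # default model on this page
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."},
    ],
    temperature=1,
    top_p=0.95,
    max_tokens=16384,
)
print(response.choices[0].message.content)
```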

Body Params
model
string
Defaults to nvidia/nemotron-3-super-120b-a12b

messages
array of objects
required

A list of messages comprising the conversation so far. The roles of the messages must alternate between user and assistant, and the last input message should have the user role. A message with the system role is optional and, if present, must be the very first message; context is also optional, but must come before a user question.

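A sketch of a valid messages array that follows the ordering rules above; the content strings are illustrative only.

```python
messages = [
    {"role": "system", "content": "You are a concise assistant."},  # optional; must be first if present
    {"role": "user", "content": "What is retrieval-augmented generation?"},
    {"role": "assistant", "content": "It augments the prompt with retrieved documents."},
    {"role": "user", "content": "Give me a one-line summary."},  # last message has the user role
]
```
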
temperature
number
≤ 1
Defaults to 1

The sampling temperature to use for text generation. The higher the temperature, the less deterministic the output text. It is not recommended to modify both temperature and top_p in the same call.

top_p
number
≤ 1
Defaults to 0.95

The top-p sampling mass used for text generation. The top_p value sets the cumulative probability mass considered at sampling time: for example, with top_p = 0.2, only the most likely tokens whose probabilities sum to 0.2 are sampled from. It is not recommended to modify both temperature and top_p in the same call.

max_tokens
integer
1 to 32768
Defaults to 16384

The maximum number of tokens to generate in a given call. Note that the model is not aware of this value; generation simply stops once the specified number of tokens has been produced.

reasoning_effort
enum
Allowed: none, low, high
Defaults to high

Controls Super's reasoning mode. none disables reasoning tokens, low enables low-effort reasoning, and high enables full reasoning. The generated code snippets translate this field into the model's chat_template_kwargs.
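
For example, with the OpenAI Python client the field can be passed through extra_body, since reasoning_effort is not a standard client parameter in every client version; the mapping onto chat_template_kwargs is handled server-side as described above. The base URL and key are placeholders.

```python
from openai import OpenAI

client = OpenAI(base_url="https://YOUR_ENDPOINT/v1", api_key="YOUR_API_KEY")  # placeholders

response = client.chat.completions.create(
    model="nvidia/nemotron-3-super-120b-a12b",
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
    extra_body={"reasoning_effort": "high"},  # allowed values: none, low, high
)
```
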
reasoning_budget
integer
-1 to 32768
Defaults to 16384

Maximum number of tokens the model is allowed to use for internal reasoning ("thinking") before it is forced to end the reasoning trace. Use -1 to disable budget enforcement. This can also be provided via chat_template_kwargs.reasoning_budget for backwards compatibility; if both are provided, chat_template_kwargs.reasoning_budget takes precedence. This is typically most useful with reasoning_effort: "high".
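
A sketch of setting the budget through the backwards-compatible chat_template_kwargs path described above, again passed via extra_body because chat_template_kwargs is not a standard OpenAI client parameter. If a top-level budget field is also set, the chat_template_kwargs value takes precedence.

```python
from openai import OpenAI

client = OpenAI(base_url="https://YOUR_ENDPOINT/v1", api_key="YOUR_API_KEY")  # placeholders

response = client.chat.completions.create(
    model="nvidia/nemotron-3-super-120b-a12b",
    messages=[{"role": "user", "content": "Plan a three-day itinerary for Kyoto."}],
    extra_body={
        "reasoning_effort": "high",
        # Cap internal reasoning at 4096 tokens; this nested value takes
        # precedence over a top-level budget field if both are provided.
        "chat_template_kwargs": {"reasoning_budget": 4096},
    },
)
```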

seed
integer
0 to 18446744073709552000

If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.

stream
boolean
Defaults to true

If set, partial message deltas will be sent. Tokens are sent as data-only server-sent events (SSE) as they become available (each JSON payload is prefixed by data: ), and the stream is terminated by a data: [DONE] message.
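
A sketch of consuming the stream with the OpenAI Python client, which strips the data: prefix and the terminating data: [DONE] event and yields parsed chunks. The base URL and key are placeholders.

```python
from openai import OpenAI

client = OpenAI(base_url="https://YOUR_ENDPOINT/v1", api_key="YOUR_API_KEY")  # placeholders

stream = client.chat.completions.create(
    model="nvidia/nemotron-3-super-120b-a12b",
    messages=[{"role": "user", "content": "Write a haiku about GPUs."}],
    stream=True,  # the default on this page, shown explicitly here
)
for chunk in stream:
    # Each SSE event carries a delta; print tokens as they arrive.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```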

stop
string or array of strings

A string or a list of strings at which the API will stop generating further tokens. The returned text will not contain the stop sequence.

Headers
Accept
string
enum
Allowed: application/json, text/event-stream
Defaults to application/json

Generated from the available response content types.

Responses

Response content types: application/json, text/event-stream