Request response from the model

Invokes inference using the model chat parameters. If uploading large images, use this POST endpoint in conjunction with the NVCF Asset APIs, which allow the upload of large assets.
You can find details on how to use NVCF Asset APIs here: https://docs.api.nvidia.com/cloud-functions/reference/createasset
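A minimal invocation sketch in Python. The endpoint URL and the `NVIDIA_API_KEY` environment variable are assumptions (this page does not state them); the payload fields mirror the body parameters documented below.

```python
import json
import os

# Assumed OpenAI-compatible endpoint for NVIDIA-hosted NIMs (not stated
# on this page) and a hypothetical environment variable for the API key.
invoke_url = "https://integrate.api.nvidia.com/v1/chat/completions"

headers = {
    "Authorization": f"Bearer {os.environ.get('NVIDIA_API_KEY', '')}",
    "Accept": "application/json",
}

# Request body built from the parameters documented below.
payload = {
    "model": "nvidia/llama-3.1-nemotron-nano-vl-8b-v1",
    "messages": [
        {"role": "user", "content": "Describe this image in one sentence."}
    ],
    "max_tokens": 1024,
    "temperature": 1,
    "top_p": 0.01,
}

body = json.dumps(payload)
# A real client would now send the request, e.g. with the requests library:
# response = requests.post(invoke_url, headers=headers, data=body)
print(body[:60])
```

The defaults shown (`max_tokens=1024`, `temperature=1`, `top_p=0.01`) are spelled out here only for illustration; omitting them has the same effect.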

Body Params
messages
array of objects
required
length between 1 and 1024

A list of messages comprising the conversation so far. The roles of the messages must alternate between user and assistant. The last input message should have role user or assistant. A message with the system role is optional, and must be the very first message if it is present.

Messages*

role
string
enum
required

The role of the message author.

Allowed: system, user, assistant
content
string or array of objects
required

The contents of the message.

Can only be null as part of a last request message with role=assistant (for "completion mode", i.e. providing the beginning of the assistant response).

To pass images (only with role=user):

- When content is a string, images can be passed inline with the text using HTML img tags with base64 data: <img src="data:image/{format};base64,{base64encodedimage}" />.
If the size of an image is more than 180KB, it must first be uploaded to a presigned S3 URL using the NVCF Asset APIs. Once uploaded, you can refer to it using the following format: <img src="data:image/png;asset_id,{asset_id}" />.

- When content is a list of objects, images can be passed with objects with type=image_url, and image_url containing the base64 image data: data:image/{format};base64,{base64encodedimage}. HTML img tags will not be parsed from objects with type=text.

- In both cases, images can be PNG, JPG or JPEG.

For system and assistant roles, the object list format is not supported.
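The two content formats above can be sketched as follows. The image bytes and message text are placeholders, and the nested `{"url": ...}` shape of the image_url object follows the common OpenAI-style convention, which this page does not spell out.

```python
import base64

# Placeholder bytes standing in for a real PNG/JPEG image.
fake_png = base64.b64encode(b"\x89PNG not a real image").decode()

# Format 1: content as a string, image passed inline via an HTML img tag.
string_message = {
    "role": "user",
    "content": (
        "What is shown in this image? "
        f'<img src="data:image/png;base64,{fake_png}" />'
    ),
}

# Format 2: content as a list of objects with type=image_url.
# Note that img tags inside type=text objects are NOT parsed.
# The {"url": ...} nesting is an assumed OpenAI-style shape.
list_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is shown in this image?"},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{fake_png}"},
        },
    ],
}
```

Either message could be placed in the `messages` array of the request body; remember the list-of-objects form is only valid for role=user.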

model
string
required
Defaults to nvidia/llama-3.1-nemotron-nano-vl-8b-v1

The model to use.

temperature
number
0 to 1
Defaults to 1

The sampling temperature to use for text generation. Higher temperature values produce less deterministic output. It is not recommended to modify both temperature and top_p in the same call.

top_p
number
≤ 1
Defaults to 0.01

An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. NVIDIA recommends that you alter this option or temperature, but not both.

max_tokens
integer
1 to 2048
Defaults to 1024

The maximum number of tokens to generate in any given call. Note that the model is not aware of this value, and generation will simply stop at the number of tokens specified.

seed
integer
-9223372036854775808 to 9223372036854775807

If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.

stream
boolean

If set, partial message deltas will be sent. Tokens will be sent as data-only server-sent events (SSE) as they become available (JSON responses are prefixed by data: ), with the stream terminated by a data: [DONE] message.
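A client-side sketch of consuming such a stream, run here against a canned transcript rather than a live connection; the choices/delta layout of each JSON chunk is an assumed OpenAI-style shape, not confirmed by this page.

```python
import json

# Canned stand-in for the data-only SSE lines a streaming response
# would yield; a real client would iterate over the HTTP response lines.
sample_stream = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]

chunks = []
for line in sample_stream:
    if not line.startswith("data: "):
        continue  # skip SSE comments and blank keep-alive lines
    data = line[len("data: "):]
    if data == "[DONE]":  # explicit stream terminator
        break
    event = json.loads(data)
    delta = event["choices"][0]["delta"].get("content", "")
    chunks.append(delta)

print("".join(chunks))  # prints the reassembled text: Hello
```

With a real request, the same loop would run over the streamed response body with the Accept header set to text/event-stream.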

Headers
NVCF-INPUT-ASSET-REFERENCES
uuid
length ≤ 370

String of asset IDs separated by commas. Data is uploaded to AWS S3 using NVCF Asset APIs and associated with these asset IDs. If the size of an image is more than 180KB, it must be uploaded via a presigned S3 URL, which provides secure, temporary access to the bucket for the upload. Once the asset is created, an asset ID is generated for it. Include that asset ID in this header, and to use the uploaded image in a prompt, refer to it using the following format: <img src="data:image/png;asset_id,{asset_id}" />.
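Wiring uploaded assets into a request might look like the sketch below. The header name follows the NVCF convention for asset references, and the asset IDs are made-up placeholders; real IDs are returned by the NVCF Asset APIs after the presigned-URL upload described above.

```python
# Made-up placeholder asset IDs; real ones come from the NVCF Asset APIs.
asset_ids = [
    "11111111-2222-3333-4444-555555555555",
    "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
]

# One comma-separated header value listing every asset the prompt uses.
# The header name is assumed from the NVCF asset-reference convention.
extra_headers = {"NVCF-INPUT-ASSET-REFERENCES": ",".join(asset_ids)}

# Each image is referenced in the prompt by asset ID instead of base64 data.
img_tags = [
    f'<img src="data:image/png;asset_id,{aid}" />' for aid in asset_ids
]
prompt = "Compare these two images. " + " ".join(img_tags)

print(extra_headers["NVCF-INPUT-ASSET-REFERENCES"])
```

These headers would be merged with the Authorization and Accept headers of the POST request, and `prompt` would become the content of a role=user message.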

Accept
string
enum
Defaults to application/json

Generated from available response content types

Allowed: application/json, text/event-stream
Responses

application/json
text/event-stream