nvidia / nemotron-3-nano-omni-30b-a3b-reasoning

Model Overview

Description:

NVIDIA Nemotron 3 Nano Omni is a multimodal large language model that unifies video, audio, image, and text understanding to support enterprise-grade Q&A, summarization, transcription, and document intelligence workflows. It extends the Nemotron Nano family with integrated video+speech comprehension, Graphical User Interface (GUI), Optical Character Recognition (OCR), and speech transcription capabilities, enabling end-to-end processing of rich enterprise content such as meeting recordings, M&E assets, training videos, and complex business documents. NVIDIA Nemotron 3 Nano Omni was developed by NVIDIA as part of the Nemotron model family.

This model is available for commercial use.

This model was improved using Qwen3-VL-30B-A3B-Instruct, Qwen3.5-122B-A10B, Qwen3.5-397B-A17B, Qwen2.5-VL-72B-Instruct, and gpt-oss-120b. For more information, please see the Training Dataset section below.

License/Terms of Use

Governing Terms: This trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Open Model Agreement.

Deployment Geography:

Global

Use Case:

This model is designed for enterprise customers requiring multimodal understanding capabilities. Expected users include:

  • Customer service applications (e.g., verifying a DoorDash delivery drop-off video at a given address via OCR, drive-thru order verification)
  • Media and Entertainment (M&E) — video and speech analysis, dense captions, video search and summarization
  • Document intelligence for AI assistants (contracts, SOW/MSA, scientific discovery, financial documents)
  • GUI automation for AI agentic applications (incident management, agentic search, browser agents, email agents)

Release Date:

Build.Nvidia.com 04/28/2026 via URL

Hugging Face 04/28/2026 via URL

NGC 04/28/2026 via URL

Model Architecture:

Architecture Type: Mamba2-Transformer Hybrid Mixture of Experts (MoE)

Network Architecture:

Number of model parameters: 3.1 x 10^10 (31B A3B)

Input(s):

Input Type(s): Video, Audio, Image, Text

Input Format(s):

  • Video: mp4, up to 2 minutes. For 1080p videos, sample up to 1 FPS / 128 frames. For lower-resolution videos such as 720p, higher temporal sampling such as 2 FPS / 256 frames may be used.
  • Audio: wav, mp3 files (up to 1 hour), 8kHz and higher sampling rates
  • Image: Red, Green, Blue (RGB) (jpeg, png)
  • Text: String

Input Parameters:

  • Video: Three-Dimensional (3D)
  • Audio: One-Dimensional (1D)
  • Image: Two-Dimensional (2D)
  • Text: One-Dimensional (1D)

Other Properties Related to Input:

  • Maximum context length up to 256k tokens
  • Language support: English only

Output(s)

Output Type(s): Text

Output Format(s):

  • Text: String

Output Parameters:

  • Text: One-Dimensional (1D)

Other Properties Related to Output:

  • Maximum context length up to 256k tokens.
  • Supports JSON output format
  • Supports reasoning output with chain-of-thought
  • Supports tool calling
  • Supports word-level timestamps for transcription

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

Runtime Engine(s):

  • vLLM
  • NeMo
  • Megatron
  • NeMo-RL

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Ampere (A100 80GB SXM/NVLink)
  • NVIDIA Blackwell (B200 SXM/NVLink, RTX Pro 6000 SE, DGX Spark, Jetson Thor, RTX 5090)
  • NVIDIA Hopper (H100 SXM/NVLink, H200 SXM/NVLink)
  • NVIDIA Lovelace (L40S)

Preferred/Supported Operating System(s):

  • Linux

Inference Runtimes:

  • vLLM
  • TensorRT LLM
  • TensorRT Edge-LLM
  • llama.cpp
  • Ollama
  • SGLang

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

This AI model can be embedded as an Application Programming Interface (API) call into the software environment described above.

Model Version(s):

Nemotron-3-Nano-Omni-30B-A3B-Reasoning


Quick Start Guide

Model Parameters

| Mode | temperature | top_p | top_k | max_tokens | reasoning_budget | grace_period |
|---|---|---|---|---|---|---|
| Thinking mode | 0.6 | 0.95 | - | 20480 | 16384 | 1024 |
| Instruct mode | 0.2 | - | 1 | 1024 | - | - |

Download Model Weights

| Precision | Technical Name | HuggingFace URL |
|---|---|---|
| BF16 | Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16 | https://huggingface.co/nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16 |
| FP8 | Nemotron-3-Nano-Omni-30B-A3B-Reasoning-FP8 | https://huggingface.co/nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-FP8 |
| NVFP4 | Nemotron-3-Nano-Omni-30B-A3B-Reasoning-NVFP4 | https://huggingface.co/nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-NVFP4 |

Install the HuggingFace CLI

pip install -U "huggingface_hub[hf_xet]"

# Log in once; the token is cached at ~/.cache/huggingface/token
hf auth login

# Sanity check: should print your username and orgs
hf auth whoami

Download the weights

Pick a target directory on a volume with ≥70 GB free (the model is ~62 GB).

WEIGHTS=/path/to/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16

hf download nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16 \
  --local-dir "$WEIGHTS" \
  --max-workers 8

Notes:

  • hf download is resumable — re-run the same command if the connection drops.
  • --max-workers 8 parallelizes downloads; tune up on fast networks.
  • The hf_xet extra enables native Xet-protocol transfers for Xet-backed repos; no need for git-xet or git-lfs when using hf download.

Verify the download

ls "$WEIGHTS" | head
du -sh "$WEIGHTS"  # expect ~62 GB
test -f "$WEIGHTS/config.json" && echo OK
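
Alternatively, the same weights can be fetched from Python with huggingface_hub's snapshot_download (a minimal sketch, equivalent to the CLI command above; adjust the repo id and target directory to the precision variant you chose):

# Sketch: Python equivalent of `hf download` for the BF16 variant.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16",
    local_dir="/path/to/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16",
    max_workers=8,  # parallel transfers, same as --max-workers above
)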

vLLM

Required version: vLLM 0.20.0. Use the following container:

Container

docker pull vllm/vllm-openai:v0.20.0

Audio support: If any audio input will be used (including passing use_audio_in_video: true), install the audio extras inside the vLLM container before running vllm serve:

python3 -m pip install "vllm[audio]"

General Invocation (1×GPU, e.g. 1×B200)

# vllm serve nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16 \
# vllm serve nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-FP8 \
vllm serve nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-NVFP4 \
  --host 0.0.0.0 \
  --max-model-len 131072 \
  --tensor-parallel-size 1 \
  --trust-remote-code \
  --video-pruning-rate 0.5 \
  --max-num-seqs 384 \
  --allowed-local-media-path / \
  --media-io-kwargs '{"video": {"fps": 2, "num_frames": 256}}' \
  --reasoning-parser nemotron_v3 \
  --enable-auto-tool-choice \
  --tool-call-parser qwen3_coder \
  --kv-cache-dtype fp8 # Omit this for BF16
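
Once the server reports it is ready, a quick readiness check from Python (this assumes the default port 8000 used above; a curl equivalent is shown in the Spark section below):

import requests

# Should list the served model id once vLLM has finished loading the weights.
print(requests.get("http://localhost:8000/v1/models", timeout=10).json())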

Platform-Specific Notes

RTX Pro: Due to a current bug with FlashInfer + RTX Pro, append: --moe-backend triton

NVFP4 + TP>1: Due to a current bug with the TRTLLM_GEN MoE backend kernels on vLLM, when running with TP>1 on NVFP4, append: --moe-backend flashinfer_cutlass

vLLM on DGX Spark (aarch64 / ARM64)

For everything not covered here (API examples, reasoning mode, video tuning), follow the general instructions.

1. Pull the container image

Use the upstream multi-arch vLLM v0.20.0 docker image. Docker will automatically pull the arm64 variant.

docker pull vllm/vllm-openai:v0.20.0

2. Launch the vLLM server on Spark

WEIGHTS=/path/to/nemotron-3-nano-omni-weights

# The image does not include audio packages so we need to install them with "pip install vllm[audio]" as done in the command below
docker run --rm -it \
  --gpus all \
  --ipc=host -p 8000:8000 \
  --shm-size=16g \
  --name vllm-nemotron-omni \
  -v "${WEIGHTS}:/model:ro" \
  --entrypoint /bin/bash \
  vllm/vllm-openai:v0.20.0 -c  \
  "pip install vllm[audio] && vllm serve /model \
  --served-model-name=nemotron_3_nano_omni \
  --max-num-seqs 8 \
  --max-model-len 131072 \
  --port 8000 \
  --trust-remote-code \
  --gpu-memory-utilization 0.8 \
  --limit-mm-per-prompt '{\"video\": 1, \"image\": 1, \"audio\": 1}' \
  --media-io-kwargs '{\"video\": {\"fps\": 2,  \"num_frames\": 256}}' \
  --allowed-local-media-path=/ \
  --enable-prefix-caching \
  --max-num-batched-tokens 32768 \
  --reasoning-parser nemotron_v3 \
  --enable-auto-tool-choice \
  --tool-call-parser qwen3_coder"

In another terminal, verify the server is ready:

curl -sS http://localhost:8000/v1/models | python3 -m json.tool

Key Spark-Specific Flags

| Flag | Purpose | Spark Guidance |
|---|---|---|
| --gpus all | Select GPU | Spark has one GB10 GPU; all is equivalent to device=0 |
| --max-model-len | Max context window | Start at 131072; reduce if you hit OOM (see Memory Tuning below) |

Memory Tuning on Spark

Spark uses unified LPDDR5X memory (~128 GB shared between CPU and GPU), not separate system + VRAM pools. Two levers, in order of impact:

  1. Lower --gpu-memory-utilization from 0.85 → 0.70 to free ~19 GB back to the OS and re-enable weight prefetch. Cost: smaller KV cache budget.
  2. Lower --max-model-len to reduce KV cache allocation (e.g. halving context window halves KV cache at --max-num-seqs=1).
    Combined override:
  --gpu-memory-utilization=0.70 \
  --max-model-len=32768 \

TensorRT-LLM

This model can also be deployed with TensorRT-LLM - see relevant instructions here.

Platform-Specific Notes

TensorRT Edge-LLM

This model can also be deployed with TensorRT Edge-LLM on NVIDIA Jetson Thor - see the Jetson AI Lab model page and the TensorRT Edge-LLM Quick Start Guide.


SGLang

The BF16 variant of this model is supported on SGLang (see the DGX Spark section below for a container image).

librosa must be installed first:

pip install librosa --break-system-packages

To serve:

sglang serve --model-path nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16 --trust-remote-code

[!NOTE]
NVFP4 and FP8 support to come.

Platform-Specific Notes

SGLang on DGX Spark (aarch64 / ARM64)

For everything not covered here (API examples, reasoning mode, video tuning), follow the general instructions.

1. Pull the container image

Use the upstream multi-arch CUDA 13.0 docker image linked above. Docker will automatically pull the arm64 variant.

docker pull lmsysorg/sglang:dev-cu13-nemotronh-nano-omni-reasoning-v3

2. Launch the SGLang server on Spark

WEIGHTS=/path/to/nemotron-3-nano-omni-weights

# The image does not include audio packages so we need to install them with "pip install librosa" as done in the command below
docker run --gpus all -it --rm \
  -p 30000:30000 \
  -v "${WEIGHTS}:/model:ro" \
  --shm-size 16g \
  lmsysorg/sglang:dev-cu13-nemotronh-nano-omni-reasoning-v3 \
  bash -c "pip install librosa && python3 -m sglang.launch_server --model-path /model \
  --host 0.0.0.0 \
  --port 30000 \
  --trust-remote-code \
  --mem-fraction-static 0.8 \
  --max-running-requests 8 \
  --tool-call-parser qwen3_coder \
  --reasoning-parser nemotron_3"

In another terminal, verify the server is ready:

curl -sS http://localhost:30000/v1/models | python3 -m json.tool

Key Spark-Specific Flags

| Flag | Purpose | Spark Guidance |
|---|---|---|
| --gpus all | Select GPU | Spark has one GB10 GPU; all is equivalent to device=0 |
| --context-length | Max context window | Start with default; reduce if you hit OOM (see Memory Tuning below) |

Memory Tuning on Spark

Spark uses unified LPDDR5X memory (~128 GB shared between CPU and GPU), not separate system + VRAM pools. Two levers, in order of impact:

  1. Lower --mem-fraction-static from 0.80 → 0.70 to free ~13 GB back to the OS and re-enable weight prefetch. Cost: smaller KV cache budget.
  2. Lower --context-length to reduce KV cache allocation (e.g. halving context window halves KV cache at --max-running-requests=1).
    Combined override:
  --mem-fraction-static=0.70 \
  --context-length=32768 \

API Client (OpenAI-compatible)

from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="")
MODEL = "nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-NVFP4"

Image Example

import base64

def image_to_data_url(path: str) -> str:
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    return f"data:image/jpeg;base64,{b64}"

image_url = image_to_data_url("media/example1a.jpeg")

response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in detail."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ],
    max_tokens=1024,
    temperature=1.0,
    extra_body={"top_k": 1, "chat_template_kwargs": {"enable_thinking": False}},
)
print(response.choices[0].message.content)

Audio Example

from pathlib import Path

audio_url = Path("media/2414-165385-0000.wav").resolve().as_uri()

response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "audio_url", "audio_url": {"url": audio_url}},
                {"type": "text", "text": "Transcribe this audio."},
            ],
        }
    ],
    max_tokens=1024,
    temperature=1.0,
    extra_body={"top_k": 1, "chat_template_kwargs": {"enable_thinking": False}},
)
print(response.choices[0].message.content)
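
Word-level timestamps are listed under Output Properties as supported for transcription. The exact request mechanism is not documented in this card; as a sketch, one option is simply to ask for timestamps in the prompt, reusing the audio request above (treat the prompt wording as an assumption to adapt):

# Sketch only: same audio request as above, with timestamps requested in the prompt.
response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "audio_url", "audio_url": {"url": audio_url}},
                {"type": "text", "text": "Transcribe this audio with word-level timestamps."},
            ],
        }
    ],
    max_tokens=1024,
    temperature=1.0,
    extra_body={"top_k": 1, "chat_template_kwargs": {"enable_thinking": False}},
)
print(response.choices[0].message.content)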

Video Example

from pathlib import Path

video_url = Path("media/demo.mp4").resolve().as_uri()
reasoning_budget = 16384
grace_period = 1024

response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "video_url", "video_url": {"url": video_url}},
                {"type": "text", "text": "Describe this video."},
            ],
        }
    ],
    max_tokens=20480,
    temperature=0.6,
    top_p=0.95,
    extra_body={
        "thinking_token_budget": reasoning_budget + grace_period,
        "chat_template_kwargs": {
            "enable_thinking": True,
            "reasoning_budget": reasoning_budget,
        },
        "mm_processor_kwargs": {"use_audio_in_video": False},
    },
)
print(response.choices[0].message.content)

Text Example (curl)

curl -sS http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-NVFP4","messages":[{"role":"user","content":"Hello, what can you do?"}],"temperature":1.0,"top_k":1}' \
  | python3 -c "import sys,json; print(json.load(sys.stdin)['choices'][0]['message']['content'])"
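
Tool Calling Example

The server is launched with --enable-auto-tool-choice and --tool-call-parser qwen3_coder, so the standard OpenAI tools parameter should work. A minimal sketch with a hypothetical get_weather tool (the tool schema is illustrative, not part of the model):

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "What's the weather in Santa Clara?"}],
    tools=tools,
    max_tokens=1024,
    temperature=0.2,
    extra_body={"top_k": 1, "chat_template_kwargs": {"enable_thinking": False}},
)
# If the model decides to call the tool, the call shows up in tool_calls rather than content.
print(response.choices[0].message.tool_calls or response.choices[0].message.content)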

PDF Example (page-by-page via Python)

The API accepts images, not raw PDF files. The script below renders each page to PNG and sends it as base64. Save as pdf_vlm_chat.py and install dependencies: pip install pymupdf pillow requests.

pdf_vlm_chat.py
#!/usr/bin/env python3
"""Send PDF page(s) as images to a vLLM /v1/chat/completions endpoint."""
from __future__ import annotations

import argparse, base64, sys
from io import BytesIO
from pathlib import Path

import requests

try:
    import fitz
    from PIL import Image
except ImportError:
    print("Install: pip install pymupdf pillow requests", file=sys.stderr)
    sys.exit(1)

USER_PROMPT = (
    "Summarize this PDF page: main topic, section headings, important facts "
    "or bullets, and a brief note on each figure or table. "
    "Do not invent text you cannot read."
)
API_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-NVFP4"
MAX_TOKENS = 32000
DPI = 150


def page_to_b64(pdf_path: str, idx: int) -> str:
    doc = fitz.open(pdf_path)
    z = DPI / 72.0
    pix = doc.load_page(idx).get_pixmap(matrix=fitz.Matrix(z, z))
    img = Image.frombytes("RGB", [pix.width, pix.height], pix.samples)
    doc.close()
    buf = BytesIO()
    img.save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode("ascii")


def chat(url, model, b64, text, max_tokens):
    r = requests.post(url, json={
        "model": model,
        "messages": [{"role": "user", "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ]}],
        "max_tokens": max_tokens,
        "stream": False,
        "temperature": 1.0,
        "chat_template_kwargs": {"enable_thinking": False},
    }, timeout=120)
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]


def main():
    p = argparse.ArgumentParser()
    p.add_argument("pdf")
    p.add_argument("--page", type=int, default=0)
    p.add_argument("--all-pages", action="store_true")
    p.add_argument("-o", "--output")
    p.add_argument("--url", default=API_URL)
    p.add_argument("--model", default=MODEL)
    p.add_argument("--max-tokens", type=int, default=MAX_TOKENS)
    a = p.parse_args()

    doc = fitz.open(a.pdf); n = len(doc); doc.close()
    pages = range(n) if a.all_pages else [a.page]
    parts = [f"# Extracted: {Path(a.pdf).name}\n\n*Pages: {n}*\n"] if a.all_pages else []

    for i in pages:
        print(f"Page {i+1}/{n} ...", file=sys.stderr)
        b64 = page_to_b64(a.pdf, i)
        text = chat(a.url, a.model, b64, f"Page {i+1}.\n\n{USER_PROMPT}", a.max_tokens)
        parts.append(f"\n---\n\n## Page {i+1}\n\n{text.strip()}\n" if a.all_pages else text.strip())

    out = "\n".join(parts)
    if a.output:
        Path(a.output).write_text(out + "\n", encoding="utf-8")
    else:
        print(out)

if __name__ == "__main__":
    main()

Single page:

python3 pdf_vlm_chat.py /path/to/your_document.pdf --page 0

All pages to markdown:

python3 pdf_vlm_chat.py /path/to/your_document.pdf --all-pages -o extracted.md

Edit USER_PROMPT in the script for different tasks (detailed extraction, table parsing, etc.).
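
JSON Output Example

The model supports JSON output (see Output Properties above). With vLLM's OpenAI-compatible server, one way to request it is the standard response_format field; a minimal sketch (the prompt and schema choice are assumptions to adapt):

response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {
            "role": "user",
            "content": "List three enterprise use cases for a multimodal model as a JSON object with a 'use_cases' array.",
        }
    ],
    response_format={"type": "json_object"},  # constrain the completion to valid JSON
    max_tokens=512,
    temperature=0.2,
    extra_body={"top_k": 1, "chat_template_kwargs": {"enable_thinking": False}},
)
print(response.choices[0].message.content)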


Reasoning Mode (enable_thinking)

| Setting | Behavior |
|---|---|
| Default (omitted) | Reasoning is on. The model emits chain-of-thought before the final answer, visible in content. |
| "chat_template_kwargs": {"enable_thinking": false} | Reasoning is off. Only the final answer appears in content. |

To disable reasoning on a request, add to the JSON body:

"chat_template_kwargs": {"enable_thinking": false}

In Python clients, use False (the Python boolean), not false (invalid Python).

We recommend thinking mode for tasks that involve reasoning and complex understanding. For video, audio, and omni use cases, try both enabling and disabling thinking for best results.


Advanced: Budget-Controlled Reasoning
from typing import Any, Dict, List

from openai import OpenAI
from transformers import AutoTokenizer


class ThinkingBudgetClient:
    def __init__(self, base_url: str, api_key: str, tokenizer_name_or_path: str):
        self.tokenizer = AutoTokenizer.from_pretrained(
            tokenizer_name_or_path, trust_remote_code=True
        )
        self.client = OpenAI(base_url=base_url, api_key=api_key)

    def chat_completion(
        self,
        model: str,
        messages: List[Dict[str, Any]],
        reasoning_budget: int = 512,
        max_tokens: int = 1024,
        **kwargs,
    ) -> Dict[str, Any]:
        assert max_tokens > reasoning_budget, (
            f"reasoning_budget must be less than max_tokens. "
            f"Got {max_tokens=} and {reasoning_budget=}"
        )

        # Step 1: generate only the reasoning trace up to the requested budget.
        response = self.client.chat.completions.create(
            model=model,
            messages=messages,
            max_tokens=reasoning_budget,
            extra_body={
                "top_k": 1,
                "chat_template_kwargs": {
                    "enable_thinking": True,
                },
            },
            **kwargs,
        )
        reasoning_content = response.choices[0].message.content or ""
        if "</think>" not in reasoning_content:
            print("No </think> found in reasoning content")
            reasoning_content = f"{reasoning_content}</think>\n\n"

        reasoning_tokens_len = len(
            self.tokenizer.encode(reasoning_content, add_special_tokens=False)
        )
        remaining_tokens = max_tokens - reasoning_tokens_len
        assert remaining_tokens > 0, (
            f"No tokens remaining for response ({remaining_tokens=}). "
            "Increase max_tokens or lower reasoning_budget."
        )

        # Step 2: continue from the closed reasoning trace and ask for the final answer.
        continued_messages = messages + [
            {"role": "assistant", "content": reasoning_content}
        ]
        prompt = self.tokenizer.apply_chat_template(
            continued_messages,
            tokenize=False,
            continue_final_message=True,
        )
        response = self.client.completions.create(
            model=model,
            prompt=prompt,
            max_tokens=remaining_tokens,
            extra_body={"top_k": 1},
            **kwargs,
        )

        return {
            "reasoning_content": reasoning_content.strip(),
            "content": response.choices[0].text,
            "finish_reason": response.choices[0].finish_reason,
        }
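
A usage sketch for the class above (the base URL, tokenizer path, and budgets are placeholders to adapt to your deployment):

# Sketch: drive the two-step budgeted flow against the local vLLM server.
tb_client = ThinkingBudgetClient(
    base_url="http://localhost:8000/v1",
    api_key="",
    tokenizer_name_or_path="nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16",
)
result = tb_client.chat_completion(
    model="nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-NVFP4",
    messages=[{"role": "user", "content": "Why is the sky blue? Answer briefly."}],
    reasoning_budget=512,
    max_tokens=1024,
    temperature=0.6,
    top_p=0.95,
)
print(result["content"])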

Video Tuning

Frame sampling (--media-io-kwargs)

Without explicit settings, vLLM may default to ~32 frames per video regardless of length. Always set --media-io-kwargs at server launch (already included in the General Invocation above):

--media-io-kwargs '{"video": {"fps": 2, "num_frames": 256}}'

Recommended num_frames ranges (at fps=2):

| GPU memory | Recommended num_frames range |
|---|---|
| 80 GB (A100/H100) | 128–512 |
| ≤40 GB | 64–256 |

Higher values improve temporal coverage but increase VRAM and prefill time. Start at the low end of the range and increase as your workload and latency budget allow.
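
As a rough way to reason about these budgets (a simplification, not vLLM's exact sampling logic): the effective frame count is approximately the clip duration times fps, clamped by num_frames.

# Illustration only: approximate frames sampled given the fps and num_frames caps.
def approx_sampled_frames(duration_s: float, fps: float = 2.0, num_frames: int = 256) -> int:
    return min(int(duration_s * fps), num_frames)

print(approx_sampled_frames(90))   # 1.5-minute clip at 2 FPS -> 180 frames
print(approx_sampled_frames(600))  # 10-minute clip -> capped at num_frames (256)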


Notes

  1. Reasoning default: Reasoning is on by default. If you omit chat_template_kwargs, the model will produce chain-of-thought traces in content. This is appropriate for text and image inputs.
  2. Video frame sampling: The default (~32 frames) is too conservative for most real videos. Set --media-io-kwargs at server launch.
  3. PDF input format: The API does not accept raw PDF uploads. Render pages to PNG and send as base64 (see PDF Example above).
  4. max_tokens vs --max-model-len: max_tokens in the request caps only the completion (generated output). It cannot exceed the server's --max-model-len, which is the hard ceiling for prompt + completion combined. Increase the server flag if you need longer outputs.

Jetson Deployment

For Jetson deployments, vLLM, SGLang, Ollama, llama.cpp, and TensorRT Edge-LLM are supported inference frameworks; see the Jetson AI Lab model page for more details.

TensorRT Edge-LLM support is only for Jetson Thor; TensorRT-LLM is not supported on Jetson.


Training, Testing, and Evaluation Datasets:

Dataset Overview

Total Size: 354,587,705 data points (~717.0B tokens)

Total Number of Datasets: 1395 dataset entries

Dataset partition: Training [100%], Testing [N/A — evaluation benchmarks used separately], Validation [N/A — evaluation benchmarks used separately]

Time period for training data collection: 2019–2025

Time period for testing data collection: N/A (standard public benchmarks)

Time period for validation data collection: N/A (standard public benchmarks)

Dataset Description

Nemotron-Omni extends our commitment from text to multimodal, delivering the same level of openness across text, audio, image, and video.

Adapter and encoder training scale: ~127B tokens across mixed modalities spanning text+image, text+video, text+audio, and text+video+audio—reflecting real-world, contextualized interactions versus single-modality data.

Post-training for real-world tasks: ~124M curated examples across multimodal combinations (text+audio, text+image, text+video, and text+video+audio), structured to support document reasoning, computer use, and long-horizon workflows.

RL environments for agent training: 20 RL datasets across 25 environments covering 5 new multimodal tasks—visual grounding, chart and document understanding, vision-critical STEM problems, video understanding, and automatic speech recognition—extending Nemotron's RL pipeline beyond text into vision and audio.

Modality Breakdown:

| Modality | Dataset Entries | Samples | Est. Tokens (M) |
|---|---|---|---|
| text+audio | 220 | 259,178,821 | 143,533.1 |
| text+image | 750 | 70,143,901 | 180,347.1 |
| text+video | 241 | 15,837,673 | 239,631.5 |
| text+video+audio | 155 | 8,720,044 | 152,499.2 |
| text | 12 | 707,187 | 958.4 |
| Total | 1395 | 354,587,705 | 716,969.2 |

Training data for Nemotron-Omni was assembled from a diverse collection of audio, image, video, and text datasets. Raw datasets were first converted into a standardized JSONL format with unified conversation-turn structure. Audio data was resampled to 16 kHz where needed. Image and video datasets were paired with question-answer annotations, often regenerated or refined using large vision-language models to improve quality and consistency. Quality filtering was applied using model-based judges to remove low-quality, unsafe, or off-topic samples. Deduplication and CSAM scanning were performed across all image datasets. Data was then packed into fixed-length sequences (32k, 128k, or 256k tokens) for efficient training.

Multiple safety measures were implemented throughout the data pipeline. All image/text datasets underwent CSAM (Child Sexual Abuse Material) scanning, with results tracked per dataset. Content safety filtering was applied using two independent safety judge models to flag and remove samples containing harmful content including weapons references, criminal planning, sexual content involving minors, harassment, hate speech, profanity, threats, violence, or suicide-related content. Synthetic data generation pipelines included explicit quality and safety filtering stages. Identity-fix processing was applied to correct potential biases in generated responses. The multi-stage pipeline (original → cleaned → clean+safe → clean+safe+holdout) ensured progressive refinement, with each stage removing additional problematic content.

We built on the base model, applying additional training, enhancements, and optimizations on top of it.

Public Datasets

| Dataset | Samples | % of Public | Tokens (M) | Modality | URL |
|---|---|---|---|---|---|
| MiraData | 28,252,307 | 55.53% | 14,181.3 | text+audio+video | https://github.com/mira-space/MiraData |
| laion-disco-12M | 7,507,574 | 14.7% | 22,691.0 | text+audio | https://laion.ai/blog/laion-disco-12m/ |
| YouTube Video | 2,057,000 | 4.0% | 15,390 | text+video | |
| YouTube Video and Audio | 1,164,000 | 2.2% | 18,730 | text+video+audio | |

Private Datasets

| Dataset | Samples | % of Private | Tokens (M) | Modality |
|---|---|---|---|---|
| Granary | 23,370,274 | 8.0% | 1,471.7 | text+audio |
| SIFT-50M | 22,837,500 | 7.8% | 5,241.7 | text+audio |

Self-Sourced Synthetic Data

  • Overall Size: 41,502,625 samples across modalities: text+audio, text+image, text+video

  • Description of synthetic data generation methods:

Synthetic data generation (SDG) was used to improve data quality, generate reasoning traces, re-label annotations, and augment existing datasets. Methods include: re-captioning images and audio using vision-language models, generating question-answer pairs from existing media, producing thinking/reasoning chains for complex tasks, paraphrasing prompts for diversity, and applying model-based quality filtering.

NVIDIA-Sourced Synthetic Datasets

| Dataset | Modality | Count | Models Used |
|---|---|---|---|
| GroundCUA | text+image | 2,797,851 | gpt-oss-120b, Qwen3-VL-30B-A3B-Instruct |
| OpenImages | text+image | 2,556,412 | Qwen3-VL-30B-A3B-Instruct |
| MMTrail | text+audio | 1,620,533 | Qwen3-omni-captioner, gpt-oss-120B |
| Localized Narratives | text+image | 1,511,812 | Qwen3-VL-30B-A3B-Instruct |
| ALLaVA | text+image | 1,414,130 | Qwen3-VL-30B-A3B-Instruct |
| VGG-Sound | text+audio | 1,371,167 | Qwen3-omni-captioner, gpt-oss-120B |
| PIXMO-CAP | text+image | 1,308,838 | Qwen3-VL-30B-A3B-Instruct |
| TTS-Synthesized Nemotron-Nano-3 SFT Data | text+audio | 1,226,784 | NVIDIA Magpie TTS |
| MINT-1T | text+image | 904,035 | Qwen3-VL-32B-Instruct, Gemini 3 Pro for filtering, Scene Text models (RTX) translate |
| ScaleCUA | text+image | 889,010 | Qwen3-VL-30B-A3B-Instruct |
| AgentNet | text+image | 878,986 | Kimi-K2.5 |
| Conceptual Captions 3M-30b | text+image | 867,065 | Qwen3-VL-30B-A3B-Thinking-FP8 |
| MetaMathQA | text+image | 860,656 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Mulberry-SFT COT | text+image | 566,982 | GLM-4.1V-9B-Thinking, Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| CC for OCR | text+image | 522,595 | SwinDocSegmenter, DeepSeek OCR, Qwen3.5-122B-A10B, Qwen3-32B, Gemini 3 Flash Preview for filtering, GPT-4o mini for filtering & quality checks, Qwen3-VL-30B-A3B-Thinking-FP8, gpt-oss-120b |
| Charxiv-100K | text+image | 272,104 | Qwen3-VL-235B-A22B-Instruct, Qwen3-VL-235B-A22B-Thinking, GPT-4o for filtering, Qwen3.5-122B-A10B |
| SwinDocSegmenter | text+image | 207,200 | SwinDocSegmenter, DeepSeek OCR |
| CLEVR | text+image, text+video | 197,027 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| InternVL-Data | text+image | 185,395 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Flickr30k Entities | text+image | 154,760 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Metropolis and Lita | text+video | 150,434 | Qwen3.5-122B-A10B |
| TextCaps | text+image | 136,911 | Commercial VILA model, Qwen3-VL-30B-A3B-Instruct |
| Vision R1 Llava CoT | text+image | 126,024 | GLM-4.1V-9B-Thinking |
| HC-STVG | text+video | 124,902 | NVIDIA relabeled using Qwen model (Qwen2.5-VL-72B-Instruct) |
| nvPDFtex | text+image | 118,351 | gpt-oss-120b, Qwen3.5-122B-A10B |
| ChartQA | text+image | 111,602 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering, Qwen2-VL-72B (NV) |
| ECD-10k-Images | text+image | 110,697 | Qwen3.5-122B-A10B |
| SAMA-COCO | text+image | 102,965 | gpt-oss-120B |
| VisualWebInstruct | text+image | 97,746 | Earlier SDG, GLM-4.1V-9B-Thinking |
| Spatial | text+image | 95,532 | Microsoft Florence-2-large |
| DoubtNut | text+image | 94,919 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Cosmos Nemotron SFTv13.9 | text+image | 92,128 | Qwen3-VL-30B-A3B-Instruct, Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| CrossTask | text+video | 76,495 | NVIDIA relabeled using Qwen model (Qwen2.5-VL-72B-Instruct) |
| RefCOCO | text+image | 69,850 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Mantis Instruct | text+image | 66,975 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Visual7W | text+image | 62,589 | Qwen3.5-122B-A10B |
| ScreenQA | text+image | 62,186 | Qwen3.5-122B-A10B |
| VQAV2 | text+image | 54,899 | Qwen3.5-122B-A10B |
| TallyQA | text+image | 50,073 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| KeenSight | text+image | 49,849 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| GQA | text+image | 42,182 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| AskFilo | text+image | 41,807 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Raven | text+image | 41,996 | gpt-oss-120b |
| DocVQA | text+image | 35,759 | Qwen3.5-122B-A10B |
| TextVQA | text+image | 34,602 | Commercial VILA model, Qwen3-VL-30B-A3B-Instruct |
| COCO | text+image | 32,111 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| PlotQA | text+image | 30,665 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Llava | text+video | 30,250 | Qwen3-Omni-30B-A3B-Instruct, Qwen3-VL-32B-Instruct |
| NVCLIP | text+image | 29,680 | Qwen2.5-72B-Instruct |
| Tapos | text+video | 29,250 | Qwen2.5-VL-72B-Instruct |
| Vedantu Chemistry | text+audio | 26,338 | NVIDIA Magpie TTS |
| NV-CC-Img-Text-Dataset | text+image | 24,998 | Qwen3-VL-30B-A3B-Instruct |
| DocLayNet | text+image | 22,709 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering, gpt-oss-120b |
| Taloka Grounding | text+image | 22,218 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Wikipedia OCR | text+image | 21,440 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| InternVL2.5 | text+image | 20,770 | Qwen3-VL-235B-A22B-Instruct, Qwen3-VL-235B-A22B-Thinking, GPT-4o for filtering, Qwen3.5-122B-A10B |
| PromptPG | text+image | 20,305 | Qwen2-VL-72B |
| PubTables | text+image | 20,174 | gpt-oss-120b |
| InfoVQA | text+image | 18,679 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Azure Tables | text+image | 18,188 | gpt-oss-120b, Qwen3.5-122B-A10B |
| TabRecSet | text+image | 17,437 | GPT-4o mini, Qwen3-VL-30B-A3B-Thinking-FP8, gpt-oss-120b, Qwen3.5-122B-A10B |
| CD Questions | text+audio, text+image | 16,335 | NVIDIA Magpie TTS, Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Linguistic Data Consortium | text+image | 15,499 | Qwen3.5-122B-A10B, GPT-4o mini, Qwen3-VL-30B-A3B-Thinking-FP8, gpt-oss-120b, Ask Kateryna |
| MapQA | text+image | 12,480 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| SlideVQA | text+image | 11,199 | Qwen3.5-122B-A10B |
| OCR Reason Finance | text+image | 9,389 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| GeomVerse | text+image | 9,298 | GLM-4.1V-9B-Thinking |
| NextQA | text+video | 8,903 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| UniGeo | text+image | 8,822 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Vedantu | text+audio, text+image | 8,750 | NVIDIA Magpie TTS, Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| GPQA | text+audio | 7,657 | NVIDIA Magpie TTS |
| SLAKE | text+image | 7,294 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| OpenGVLab | text+image | 7,269 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering, Qwen3-VL-235B-A22B-Instruct, Qwen3-VL-235B-A22B-Thinking, GPT-4o for filtering |
| PerceptionTest | text+video | 5,192 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| InvoicesQA | text+image | 4,817 | Qwen3.5-122B-A10B |
| EgoProcel | text+video | 4,660 | Qwen2.5-VL-72B-Instruct |
| SynthTabNet | text+image | 4,364 | gpt-oss-120b |
| SerpAPI | text+image | 3,784 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| FinTabNet | text+image | 3,852 | gpt-oss-120b |
| FastMath | text+image | 3,718 | Qwen3-VL-235B-A22B-Instruct-FP8 |
| ASR Data Derived Speech-to-Text Chat Data | text+audio | 3,608 | GPT-OSS 120B |
| Geometry3k | text+image | 2,078 | Qwen3-VL-235B-A22B-Thinking-FP8 |
| VQA-RAD | text+image | 1,270 | Qwen3.5-122B-A10B |
| RQA | text+audio | 959 | NVIDIA Magpie TTS |
| HierText OCRQA Qwen | text+image | 514 | Qwen2.5-VL-32B-Instruct |

Training Dataset:

Data Modality

  • Audio
  • Image
  • Text
  • Video

Audio Training Data Size

  • 10,000 to 1 Million Hours

    (267,898,865 audio-containing samples)

Image Training Data Size

  • 1 Million to 1 Billion Images

    (70,143,901 image-containing samples)

Text Training Data Size

  • 1 Billion to 10 Trillion Tokens

    (~717.0B tokens total across all modalities)

Video Training Data Size

  • 10,000 to 1 Million Hours

    (24,557,717 video-containing samples)

Data Collection Method by dataset

  • Hybrid: Human, Automated, Synthetic

Labeling Method by dataset

  • Hybrid: Human, Automated, Synthetic

Properties (Quantity, Dataset Descriptions, Sensor(s)): 354,587,705 total data items across 1395 datasets. The training data spans five modality combinations: text+audio (259,178,821 samples), text+image (70,143,901 samples), text+video (15,837,673 samples), text+video+audio (8,720,044 samples), and text-only (707,187 samples). Content includes publicly available academic datasets, licensed third-party data, NVIDIA-internal collections, and synthetically generated annotations. The data is primarily in English. No sensor-derived data was used.

Evaluation Dataset:

Benchmark Scores:

| Task | Multimodal Benchmarks | Nemotron 3 Nano Omni | Nemotron Nano VL V2 | % Improvement |
|---|---|---|---|---|
| Grounding | CVBench2D | 83.95 | 78.3 | 6.73 |
| Document | OCRBenchV2 (EN) | 67.04 | 54.8 | 18.26 |
| Computer Use | OSWorld | 47.4 | 11.1 | 76.58 |
| Chart Reasoning | Charxiv Reasoning | 63.6 | 41.3 | 35.06 |
| Multi-Image Reasoning | MMlongBench Doc | 57.5 | 38 | 33.91 |
| Math Reasoning | MathVista_MINI | 82.8 | 75.5 | 8.82 |
| OCR Reasoning | OCR_Reasoning | 54.14 | 33.9 | 33.87 |
| Video Q/A | Video MME | 72.2 | - | - |
| Video + Audio Q/A | World Sense | 55.4 | - | - |
| Video + Audio Q/A | Daily Omni | 74.52 | - | - |
| Speech Instruction Following | Voice interaction | 89.39 | - | - |

Quantization Benchmark Scores:

We release FP8 and NVFP4 quantized variants alongside the BF16 model. The FP8 variant quantizes every linear layer in the language model to per-tensor E4M3 (with the exception of the MoE router and lm_head) and pairs it with an FP8 KV cache, yielding 8.5 effective bits per weight (32.8 GB). The NVFP4 variant uses a mixed-precision recipe inspired by Nemotron 3 Super: routed MoE experts are quantized to NVFP4 (FP4 E2M1 values with per-block FP8 E4M3 scales over groups of 16 elements and an additional per-tensor FP32 global scale), while the Mamba in_proj / out_proj, shared experts, and attention o_proj are quantized to FP8, yielding 4.98 effective bits per weight (20.9 GB). In both variants the vision and audio encoders and their MLP projectors are kept in BF16.

The table below reports FP8 & NVFP4 accuracy against a BF16 baseline using non-reasoning mode. Across 9 multimodal benchmarks, both quantized variants stay within 1 point of BF16 on average.

| Footprint | BF16 | FP8 | NVFP4 |
|---|---|---|---|
| Size (GB) | 61.5 | 32.8 | 20.9 |
| Effective bpw | 16.00 | 8.5 | 4.98 |

| Benchmark | BF16 | FP8 | NVFP4 |
|---|---|---|---|
| MathVista_MINI | 71.90 | 71.05 | 71.30 |
| Charxiv Reasoning | 49.10 | 48.05 | 47.95 |
| MMlongBench Doc | 46.10 | 45.84 | 45.78 |
| OCRBenchV2 (EN) | 65.80 | 65.63 | 65.77 |
| CVBench2D | 84.20 | 85.62 | 85.27 |
| Video MME | 70.80 | 69.40 | 69.60 |
| Daily Omni | 74.50 | 74.06 | 74.23 |
| World Sense | 55.20 | 54.40 | 54.60 |
| MMAU | 74.62 | 74.56 | 74.34 |
| Tedium Long (WER↓) | 3.11 | 3.12 | 3.04 |
| HF-ASR (WER↓) | 5.95 | 5.97 | 5.95 |
| Mean (9 non-ASR) | 65.80 | 65.40 | 65.43 |
| Median (9 non-ASR) | 70.80 | 69.40 | 69.60 |
| Δ vs BF16 (mean) | - | −0.40 | −0.38 |

Data Collection Method by dataset:

  • Hybrid: Human, Automated — Evaluation benchmarks are primarily human-curated public academic datasets with automated scoring.

Labeling Method by dataset:

  • Human

Properties (Quantity, Dataset Descriptions, Sensor(s)): 14 evaluation benchmarks spanning image understanding (MathVistaMini, Charxiv Reasoning, MMLongBench-Doc, OCR Reasoning, OCRBenchV2 English, CVBench2D, OSWorld), video understanding (Video MME), audio/speech understanding (VoiceBench, Tedium Long, HF-ASR, MMAU, World Sense), and multimodal omni-understanding (Daily Omni). All benchmarks are publicly available academic datasets in English.

Prior to training this model, NVIDIA implemented measures to respect EU text and data mining opt-outs by (1) respecting robots.txt instructions to the extent such signals reflect valid rights reservations, and (2) filtering datasets on any actionable metadata identifiers provided by rightsholders.

Inference:

Acceleration Engine: vLLM

Test Software: vLLM

Test Hardware:

  • NVIDIA H100 x2

Best Practices

We recommend the following settings for optimal performance.

Sampling Parameters

We suggest the following sampling parameters based on the mode and tasks.

  • Thinking mode for long document analysis and multimodal reasoning tasks:

    temperature=0.5-0.7, top_p=0.95, grace_period=1024, reasoning_budget=16384, max_tokens=20480, and max_model_len=210000
  • Instruct mode (non-thinking) for general tasks:

    temperature=0.2, top_k=1
  • For ASR tasks, we recommend non-thinking mode with
    temperature=0.2, top_k=1

Model output length

For most multimodal reasoning tasks, we recommend an output length of at least 20480 tokens. For complex reasoning questions, especially in math and programming, increasing the maximum output length to 131072 tokens gives the model enough room to produce more detailed and correct answers. We also found the Budget-Controlled Reasoning approach described above effective for answering complex reasoning questions.

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please make sure you have proper rights and permissions for all input image and video content; if an image or video includes people, personal health information, or intellectual property, the model will not blur or maintain the proportions of the subjects included.

For more detailed information on ethical considerations for this model, please see the Model Card++ Bias, Explainability, Safety & Security, and Privacy Subcards.

Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here.
