Llama-Nemotron-Embed-VL-1B-v2
Description
The Llama-Nemotron-Embed-VL-1B-v2 model is optimized for multimodal question-answering retrieval. The model can embed 'documents' in the form of images, text, or combined image and text, and documents can then be retrieved given a user query in text form. The model supports images containing text, tables, charts, and infographics. This model was evaluated on ViDoRe V1 and two internal multimodal retrieval benchmarks.
An embedding model is a crucial component of a retrieval system because it transforms information into dense vector representations. An embedding model is typically a transformer encoder that processes input tokens (text or image; for example, a question or a passage) to output an embedding. The Llama-Nemotron-Embed-VL-1B-v2 model combines a language model with a vision model.
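To make the retrieval mechanics concrete, here is a minimal NumPy sketch of dense-vector scoring. The random vectors stand in for real model outputs; only the similarity-ranking logic is illustrated.

```python
import numpy as np

def cosine_similarity(query: np.ndarray, docs: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and a matrix of document vectors."""
    query = query / np.linalg.norm(query)
    docs = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    return docs @ query

# Placeholder embeddings standing in for model outputs (dimension 2048,
# matching the Output section below). In practice these come from the model.
rng = np.random.default_rng(0)
query_vec = rng.standard_normal(2048)
doc_vecs = rng.standard_normal((5, 2048))  # five candidate documents

scores = cosine_similarity(query_vec, doc_vecs)
ranking = np.argsort(scores)[::-1]  # highest-similarity documents first
print("Documents ranked by relevance:", ranking)
```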
The Llama-Nemotron-Embed-VL-1B-v2 model is part of the NVIDIA NeMo Retriever collection of NIM microservices, which provides state-of-the-art, commercially-ready models and microservices optimized for the lowest latency and highest throughput. It features a production-ready information retrieval pipeline with enterprise support. The models that form the core of this solution have been trained using responsibly selected, auditable data sources. With multiple pre-trained models available as starting points, developers can readily customize them for domain-specific use cases, such as information technology help, human resources assistance, and research & development assistants.
This model is ready for commercial use.
License and Terms of Use:
GOVERNING TERMS: The trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Open Model License. ADDITIONAL INFORMATION: Llama 3.2 Community License Agreement. Built with Llama.
You are responsible for ensuring that your use of NVIDIA provided models complies with all applicable laws.
Deployment Geography:
Global
Use Case:
The Llama-Nemotron-Embed-VL-1B-v2 model is most suitable for users who want to build a multimodal question-and-answer application over a large corpus, leveraging the latest dense retrieval technologies.
Release Date:
Build.NVIDIA.com: 02/10/2026 via link
Huggingface: 12/18/2025 via link
Model Architecture:
Architecture Type: Transformer
Network Architecture: Fine-tuned Multimodal Llama 3.2 1B Retriever
This NeMo Retriever embedding model is a transformer encoder. It is a fine-tuned version of Llama 3.2 1B (16 layers, embedding size 2048) combined with a SigLIP 2 400M vision encoder, trained on public datasets. Embedding models for text retrieval are typically trained using a bi-encoder architecture, in which the query and the document are encoded independently by the embedding model. This model is trained with contrastive learning to maximize the similarity between the query and the document that contains the answer, while minimizing the similarity between the query and sampled negative documents that are not useful for answering the question.
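As an illustration of this objective, the following PyTorch sketch implements a generic in-batch-negative contrastive (InfoNCE-style) loss. It is not NVIDIA's actual training recipe; the temperature value and batch handling are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(q: torch.Tensor, d: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    """In-batch-negative contrastive loss for a bi-encoder.

    q: (batch, dim) query embeddings; d: (batch, dim) embeddings of the
    matching documents. Every other document in the batch serves as a negative.
    """
    q = F.normalize(q, dim=-1)
    d = F.normalize(d, dim=-1)
    logits = q @ d.T / temperature     # (batch, batch) similarity matrix
    targets = torch.arange(q.size(0))  # diagonal entries are the positive pairs
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors standing in for encoder outputs.
q = torch.randn(8, 2048)
d = torch.randn(8, 2048)
print(info_nce_loss(q, d))
```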
The vision-language model encoder incorporates key innovations from NVIDIA, including Eagle 2 work and nemoretriever-parse, which use a tiling-based VLM architecture. This architecture, available on Hugging Face, significantly enhances multimodal understanding through its dynamic tiling and mixture of vision encoders design. It particularly improves performance on tasks that involve high-resolution images and complex visual content.
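The exact Eagle 2 tiling scheme is not reproduced here, but the sketch below illustrates the general idea behind tiling: a high-resolution image is cut into fixed-size crops, with a downscaled overview tile giving the encoder global context. The tile size and padding strategy are illustrative assumptions, not the model's actual preprocessing.

```python
from PIL import Image

def tile_image(img: Image.Image, tile_size: int = 448) -> list[Image.Image]:
    """Illustrative tiling: pad the image to a multiple of tile_size, cut it
    into tile_size x tile_size crops, and prepend a downscaled overview tile.
    The real Eagle 2 tiling logic (aspect-ratio selection, tile budget)
    differs; this only sketches the general idea.
    """
    w, h = img.size
    cols = -(-w // tile_size)  # ceiling division
    rows = -(-h // tile_size)
    padded = Image.new("RGB", (cols * tile_size, rows * tile_size))
    padded.paste(img, (0, 0))
    tiles = [img.resize((tile_size, tile_size))]  # global overview tile
    for r in range(rows):
        for c in range(cols):
            box = (c * tile_size, r * tile_size,
                   (c + 1) * tile_size, (r + 1) * tile_size)
            tiles.append(padded.crop(box))
    return tiles
```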
Input:
Input Types: Text (for queries), Text | Image (for documents)
Input Formats: List of strings (for queries), List of strings | List of Images (for documents)
Input Parameters: One Dimensional (1D)
Other Input Properties: The model's maximum context length is 8192 tokens. Texts longer than the maximum length must be either chunked or truncated (a chunking sketch follows below). Images must be no larger than 8192 x 16384 or 16384 x 8192 pixels and less than 25 MB; they are resized automatically by the NIM.
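For texts that exceed the context window, a simple overlapping-window chunker like the following can be used. It approximates tokens with whitespace-separated words for brevity; a production pipeline should count tokens with the model's own tokenizer.

```python
def chunk_text(text: str, max_tokens: int = 8192, overlap: int = 256) -> list[str]:
    """Split text into overlapping chunks that fit the model's context window.

    Whitespace words are a rough stand-in for tokens here; swap in the real
    tokenizer for accurate length accounting.
    """
    words = text.split()
    step = max_tokens - overlap
    return [" ".join(words[i:i + max_tokens])
            for i in range(0, max(len(words), 1), step)]
```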
Output:
Output Types: Floats
Output Format: List of float arrays
Output Parameters: One Dimensional (1D)
Other Output Properties: Model outputs embedding vectors of maximum dimension 2048 for each input.
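As a sketch of how these inputs and outputs fit together, the snippet below calls a hypothetically deployed NIM through an OpenAI-compatible /v1/embeddings endpoint. The URL, model name string, input_type parameter, and base64 image payload shape are assumptions modeled on other NeMo Retriever embedding NIMs; consult the NIM documentation for the actual API.

```python
import base64
import requests

NIM_URL = "http://localhost:8000/v1/embeddings"  # assumed local NIM deployment

# Embed a text query. The input_type field distinguishing queries from
# passages follows NeMo Retriever text-embedding NIMs and the
# OpenAI-compatible embeddings schema; it may differ for this model.
query_resp = requests.post(NIM_URL, json={
    "model": "nvidia/llama-nemotron-embed-vl-1b-v2",
    "input": ["Which quarter had the highest revenue?"],
    "input_type": "query",
})
query_vec = query_resp.json()["data"][0]["embedding"]  # list of floats

# Embed an image document as a base64 data URL (hypothetical payload shape).
with open("page.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()
doc_resp = requests.post(NIM_URL, json={
    "model": "nvidia/llama-nemotron-embed-vl-1b-v2",
    "input": [f"data:image/png;base64,{b64}"],
    "input_type": "passage",
})
doc_vec = doc_resp.json()["data"][0]["embedding"]
print(len(query_vec), len(doc_vec))  # embedding dimension, up to 2048
```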
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
Software Integration:
Runtime Engines:
- NeMo Retriever Embedding NIM: Primary runtime engine
Supported Hardware:
- NVIDIA Ampere: A100, A6000, A40
- NVIDIA Blackwell: B200, B100, GB200
- NVIDIA Hopper: H100, H200
- NVIDIA Lovelace: L40S, L40, RTX 6000 Ada Generation
Operating Systems: Linux
Additional Testing Statement:
The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
Inference
Acceleration Engine: TensorRT
Test Hardware: H100 PCIe/SXM, A100 PCIe/SXM, L40S, L4, and A10G
Model Version(s)
Llama-Nemotron-Embed-VL-1B-v2
Short Name: llama-nemotron-embed-vl-1b-v2
Training, Testing, and Evaluation Datasets:
Training Dataset
Data Modality: Text, Image
Data Sources: Public QA datasets with commercial licensing. The text component was trained with semi-supervised pre-training on 12M samples from public datasets, followed by fine-tuning on 1.5M samples from public datasets. The VLM component uses only commercially-viable data from the Eagle 2 training data.
Data Collection Method: Hybrid: Automated, Human, Synthetic
Labeling Method: Hybrid: Automated, Human, Synthetic
Other Properties: NVIDIA's training dataset is based on public QA datasets, and only includes datasets that have a license for commercial applications.
Evaluation Datasets
Data Modality: Text, Image
Data Sources: ViDoRe V1 benchmark and two internal multimodal retrieval benchmarks. One internal dataset (DigitalCorpora-767) can be created by following instructions in this notebook.
Data Collection Method: Hybrid: Automated, Human, Synthetic
Labeling Method: Hybrid: Automated, Human, Synthetic
Other Properties: DigitalCorpora-767 is a set of 767 PDFs containing a good mixture of text, tables, and charts.
Evaluation Results
We evaluated the NeMo Retriever Multimodal Embedding Model against both published literature and existing open-source and commercial retriever models. Our evaluation used three benchmark datasets for question-answering tasks: the public ViDoRe V1 benchmark and two internal multimodal retrieval benchmarks.
| Model | # Params Vision (in M) | # Params LLM-backbone (in M) | Average Recall@5 on DigitalCorpora-767, Earnings, ViDoRe V1 |
|---|---|---|---|
| llama-nemotron-embed-vl-1b-v2 | 429 | 1236 | 80.9% |
| llamaindex/vdr-2b-multi-v1 | 665 | 1544 | 80.9% |
| MrLight/dse-qwen2-2b-mrl-v1 | 665 | 1544 | 80.4% |
| Alibaba-NLP/gme-Qwen2-VL-2B-Instruct | 665 | 1544 | 79.9% |
We do not compare to col-style embedding (late interaction) models because late-interaction models store one embedding per token rather than one per document, which requires a significantly larger embedding store.
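For reference, Recall@5 (the metric in the table above) can be computed as in this minimal NumPy sketch, where a random similarity matrix stands in for real model scores:

```python
import numpy as np

def recall_at_k(scores: np.ndarray, relevant: list[set[int]], k: int = 5) -> float:
    """Fraction of queries with at least one relevant document in the top-k.

    scores: (num_queries, num_docs) similarity matrix; relevant[i] holds the
    indices of documents that answer query i.
    """
    top_k = np.argsort(-scores, axis=1)[:, :k]
    hits = [bool(relevant[i] & set(top_k[i])) for i in range(len(relevant))]
    return float(np.mean(hits))

# Toy check: 2 queries over 10 documents with random scores.
rng = np.random.default_rng(0)
scores = rng.random((2, 10))
print(recall_at_k(scores, [{3}, {7}], k=5))
```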
Detailed Performance Analysis
The model's performance was evaluated across different modalities and compared against other retrieval pipelines. The following table breaks down results on the DigitalCorpora-767 dataset by query modality, comparing a text-extraction-based retrieval pipeline with the VLM-based pipeline built on llama-nemotron-embed-vl-1b-v2:
| Modality | Queries | Text-based Pipeline | VLM-based Pipeline (llama-nemotron-embed-vl-1b-v2) |
|---|---|---|---|
| Multimodal | 991 | 0.845 | 0.865 |
| Table | 235 | 0.753 | 0.838 |
| Chart | 268 | 0.881 | 0.881 |
| Text | 488 | 0.869 | 0.869 |
Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case, and address unforeseen product misuse.
For more detailed information on ethical considerations for this model, see the Model Card++ subcards: Bias, Explainability, Privacy, and Safety & Security.
Please report security vulnerabilities or NVIDIA AI Concerns here.
