nvidia / llama-3_2-nemoretriever-500m-rerank-v2

Model Overview

Description

The Llama 3.2 NeMo Retriever Reranking 500M model is optimized to provide a logit score that represents how relevant a document is to a given query. The model was fine-tuned for multilingual, cross-lingual text question-answering retrieval, with support for long documents (up to 8192 tokens). This model was evaluated on 26 languages: English, Arabic, Bengali, Chinese, Czech, Danish, Dutch, Finnish, French, German, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Norwegian, Persian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, and Turkish.

The reranking model is a component in a text retrieval system that improves the overall accuracy. A text retrieval system often uses an embedding model (dense) or a lexical search index (sparse) to return relevant text passages for a given input query. A reranking model then reorders those candidates into a final order. Because the reranking model takes question-passage pairs as input, it can compute cross-attention between the words of the question and the passage. It is not feasible to apply a ranking model to every document in the knowledge base, so ranking models are typically deployed in combination with embedding models.
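As a rough illustration of this two-stage flow, the sketch below retrieves candidates with a first-stage retriever and reorders them with a reranker. The `retrieve_candidates` and `rerank_score` callables are hypothetical placeholders standing in for a real index and reranking model, not part of any NVIDIA API.

```python
from typing import Callable, List, Tuple

def retrieve_then_rerank(
    query: str,
    retrieve_candidates: Callable[[str, int], List[str]],  # hypothetical dense/sparse retriever
    rerank_score: Callable[[str, str], float],             # hypothetical cross-encoder scorer
    top_k: int = 100,
    final_k: int = 10,
) -> List[Tuple[str, float]]:
    """Two-stage retrieval: cheap first-stage recall, expensive second-stage precision."""
    # Stage 1: the embedding (or lexical) index narrows the corpus to top_k candidates.
    candidates = retrieve_candidates(query, top_k)
    # Stage 2: the reranker scores each (query, passage) pair with cross-attention.
    scored = [(passage, rerank_score(query, passage)) for passage in candidates]
    # Return the final_k passages with the highest relevance scores.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:final_k]
```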

This 500M version is pruned from the 1B version: it shares the same overall architecture but is smaller and faster. Users should expect 90-95% of the accuracy of the 1B version, with lower latency (as much as a 2-3x improvement) and reduced memory usage.

This model is ready for commercial use.

The Llama 3.2 NeMo Retriever Reranking 500M model is part of the NeMo Retriever collection of NIM microservices, which provides state-of-the-art, commercially ready models and microservices optimized for the lowest latency and highest throughput. It features a production-ready information retrieval pipeline with enterprise support. The models that form the core of this solution have been trained using responsibly selected, auditable data sources. With multiple pre-trained models available as starting points, developers can also readily customize them for domain-specific use cases, such as information technology, human resources help assistants, and research & development assistants.

License/Terms of use

GOVERNING TERMS: The NIM container is governed by the NVIDIA Software License Agreement and the Product-Specific Terms for NVIDIA AI Products, except for the model, which is governed by the NVIDIA Community Model License Agreement.

ADDITIONAL INFORMATION: Llama 3.2 Community License Agreement. Built with Llama.

Intended use

The Llama 3.2 NeMo Retriever Reranking 500M model is most suitable for users who are focused on performance and latency, and want to improve their multilingual retrieval tasks by reranking a set of candidates for a given question.

Model Architecture: Llama-3.2 500M Ranker

Architecture Type: Transformer
Network Architecture: Fine-tuned meta-llama/Llama-3.2-1B

The Llama 3.2 NeMo Retriever Reranking 500M model is a transformer encoder fine-tuned for contrastive learning. We employ bi-directional attention during fine-tuning for higher accuracy. The token embeddings from the model's last layer are combined with a mean pooling strategy, and a binary classification head is fine-tuned for the ranking task.
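A minimal sketch of this mean-pooling-plus-classification-head design is shown below. It is a hypothetical illustration under the assumptions stated in the comments, not the model's actual implementation.

```python
import torch
import torch.nn as nn

class RankingHead(nn.Module):
    """Hypothetical sketch: mean-pool last-layer token embeddings, then a binary head."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # One logit per question-passage pair.
        self.classifier = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden); attention_mask: (batch, seq_len).
        mask = attention_mask.unsqueeze(-1).float()
        # Mean-pool only over non-padding tokens.
        pooled = (hidden_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
        return self.classifier(pooled).squeeze(-1)  # relevance logit per pair
```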

Text-ranking models are typically trained as cross-encoders for sentence-pair classification: the model predicts the relevancy of a sentence pair (for example, a question and a chunked passage). A cross-entropy loss is used to maximize the likelihood of passages that contain the information needed to answer the question, and to minimize the likelihood of (negative) passages that do not.
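As a minimal sketch of this objective, assuming each question is paired with one positive passage (placed in column 0) and several negatives:

```python
import torch
import torch.nn.functional as F

def ranking_cross_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Cross-entropy over candidate passages for each question.

    logits: shape (batch, 1 + num_negatives); column 0 holds the logit of the
    positive passage, the remaining columns hold logits of negative passages.
    """
    # The "correct class" for every question is the positive passage at index 0,
    # so maximizing its softmax probability pushes the negatives' logits down.
    targets = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, targets)
```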

We train the model on public datasets described in the Training Dataset & Evaluation section.

Input

Input Type: Pair of Texts

Input Format: List of text pairs

Input Parameters: 1D

Other Properties Related to Input: The model was trained on question answering over text documents in multiple languages. It was evaluated to work successfully with sequence lengths of up to 8192 tokens. Longer texts should be either chunked or truncated, for example as shown below.
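For example, a passage can be truncated to the model's 8192-token window with a Hugging Face tokenizer. This sketch assumes the base Llama 3.2 tokenizer approximates the model's tokenization:

```python
from transformers import AutoTokenizer

# Assumption: the base Llama 3.2 tokenizer approximates this model's tokenization.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")

def truncate_passage(text: str, max_tokens: int = 8192) -> str:
    """Truncate a passage to at most max_tokens tokens before reranking."""
    ids = tokenizer(text, truncation=True, max_length=max_tokens)["input_ids"]
    return tokenizer.decode(ids, skip_special_tokens=True)
```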

Output

Output Type: Floats

Output Format: List of floats

Output Parameters: 1D

Other Properties Related to Output: Each output value is a raw relevance logit. Users can apply a sigmoid activation function to the logits to convert them into probability-like scores.
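For instance, a raw logit can be mapped to a probability-like score with a sigmoid; because the sigmoid is monotonic, it does not change the relative ranking of candidates:

```python
import math

def sigmoid(logit: float) -> float:
    """Map a relevance logit to a score in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-logit))

# Example: convert logits to scores; sorting by either gives the same order.
logits = [2.1, -0.7, 0.4]
scores = sorted((sigmoid(x) for x in logits), reverse=True)
```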

Software Integration

Runtime: Llama 3.2 NeMo Retriever Reranking 500M NIM

Supported Hardware Microarchitecture Compatibility: NVIDIA Ampere, NVIDIA Hopper, NVIDIA Ada Lovelace

Supported Operating System(s): Linux
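As an illustrative sketch of invoking the microservice, assuming the NIM runs locally on port 8000 and exposes the NeMo Retriever text reranking `/v1/ranking` endpoint (verify the exact route, payload, and model name against the NIM documentation):

```python
import requests

# Assumptions: local NIM endpoint and model name; check the NIM docs before use.
url = "http://localhost:8000/v1/ranking"
payload = {
    "model": "nvidia/llama-3.2-nemoretriever-500m-rerank-v2",
    "query": {"text": "How do I deploy a reranking NIM?"},
    "passages": [
        {"text": "NeMo Retriever NIMs are deployed as containerized microservices."},
        {"text": "Llamas are domesticated South American camelids."},
    ],
}
response = requests.post(url, json=payload, timeout=30)
# Expected shape (assumption): a list of rankings ordered by relevance logit.
print(response.json())
```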

Model Version(s)

Llama 3.2 NeMo Retriever Reranking 500M

Short Name: llama-3-2-nemoretriever-rerankqa-500m

Training Dataset & Evaluation

Training Dataset

The development of large-scale public open-QA datasets has enabled tremendous progress in powerful embedding models. However, one popular dataset, MSMARCO, restricts commercial use, limiting the use of models trained on it in commercial settings. To address this, NVIDIA created its own training dataset blend based on public QA datasets that each carry a license permitting commercial applications.

Data Collection Method by dataset: Automated, Unknown

Labeling Method by dataset: Automated, Unknown

Properties: This model was trained on 800k samples from public datasets.

Evaluation Results

We evaluated the pipelines on a set of evaluation benchmarks, applying the ranking model to the candidates retrieved by a retrieval embedding model.

Overall, the llama-3.2-nv-embedqa-1b-v2 + llama-3-2-nemoretriever-rerankqa-500m pipeline provides high BEIR + TechQA accuracy with multilingual and cross-lingual support. The llama-3-2-nemoretriever-rerankqa-500m ranking model is 3.5x smaller than the nv-rerankqa-mistral-4b-v3 model.

Data Collection Method by Dataset

| Dataset | Data Collection Method |
| --- | --- |
| NQ | Real Google search queries paired with corresponding Wikipedia articles |
| HotpotQA | Collected by a team of NLP researchers at Carnegie Mellon University, Stanford University, and Université de Montréal |
| FiQA | Collected from StackExchange posts in the personal finance domain and user-generated content from the World Economic Forum |
| TechQA | Curated from real user questions on technical forums and the IBM developer community |
| MIRACL | Collected from Wikipedia articles across 18 different languages |
| MLQA | Parallel text extraction from Wikipedia articles in 7 languages |
| MLDR | Collected from Wikipedia and the mC4 multilingual corpus with cross-lingual alignment techniques |

Labeling Method by Dataset

| Dataset | Labeling Method |
| --- | --- |
| NQ | Combination of automated processes and human annotators identifying answer spans in Wikipedia articles |
| HotpotQA | Manual labeling |
| FiQA | Combination of accepted answers from StackExchange and manually annotated sentiment scores for financial texts |
| TechQA | Manual curation and labeling by domain experts |
| MIRACL | Combined automated labeling with human verification across 18 languages |
| MLQA | Manual alignment and verification of parallel texts across 7 languages |
| MLDR | Automated labeling from document sections and cross-references |

We evaluated the NVIDIA Retrieval QA Embedding Model against open and commercial retriever models from the literature on academic question-answering benchmarks: NQ, HotpotQA, and FiQA (Finance Q&A) from the BEIR benchmark, plus the TechQA dataset. The metric used in this benchmark is Recall@5. As described above, the ranking model is applied to the output of an embedding model.
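For reference, a minimal sketch of the Recall@k metric (a hypothetical helper, not the exact evaluation harness used here):

```python
def recall_at_k(retrieved_ids: list, relevant_ids: set, k: int = 5) -> float:
    """Fraction of relevant documents that appear in the top-k retrieved list.

    Assumes at least one relevant document exists for the query.
    """
    hits = sum(1 for doc_id in retrieved_ids[:k] if doc_id in relevant_ids)
    return hits / len(relevant_ids)
```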

| Open & Commercial Reranker Models | Average Recall@5 on NQ, HotpotQA, FiQA, TechQA |
| --- | --- |
| llama-3.2-nv-embedqa-1b-v2 + llama-3-2-nemoretriever-rerankqa-500m | 72.03% |
| llama-3.2-nv-embedqa-1b-v2 + llama-3.2-nemoretriever-rerankqa-1b-v2 | 73.64% |
| llama-3.2-nv-embedqa-1b-v2 | 68.60% |
| nv-embedqa-e5-v5 + nv-rerankQA-mistral-4b-v3 | 75.45% |
| nv-embedqa-e5-v5 | 62.07% |
| nv-embedqa-e5-v4 | 57.65% |
| e5-large_unsupervised | 48.03% |
| BM25 | 44.67% |

We evaluated the model’s multilingual capabilities on the MIRACL academic benchmark, a multilingual retrieval dataset covering 15 languages, plus an additional 11 languages translated from the English and Spanish portions of MIRACL. The reported scores are based on a custom subsampled version of the corpus, created by selecting hard negatives for each query to reduce the corpus size.

| Open & Commercial Retrieval Models | Average Recall@5 on MIRACL multilingual datasets |
| --- | --- |
| llama-3.2-nv-embedqa-1b-v2 + llama-3-2-nemoretriever-rerankqa-500m | 64.24% |
| llama-3.2-nv-embedqa-1b-v2 + llama-3.2-nemoretriever-rerankqa-1b-v2 | 65.80% |
| llama-3.2-nv-embedqa-1b-v2 | 60.75% |
| nv-embedqa-mistral-7b-v2 | 50.42% |
| BM25 | 26.51% |

We evaluated the cross-lingual capabilities on the academic benchmark MLQA, which covers 7 languages (Arabic, Chinese, English, German, Hindi, Spanish, and Vietnamese). We consider only evaluation sets in which the query and documents are in different languages, and report the average Recall@5 across the 42 resulting language pairs.

| Open & Commercial Retrieval Models | Average Recall@5 on MLQA dataset with different languages |
| --- | --- |
| llama-3.2-nv-embedqa-1b-v2 + llama-3-2-nemoretriever-rerankqa-500m | 82.27% |
| llama-3.2-nv-embedqa-1b-v2 + llama-3.2-nemoretriever-rerankqa-1b-v2 | 86.83% |
| llama-3.2-nv-embedqa-1b-v2 | 79.86% |
| nv-embedqa-mistral-7b-v2 | 68.38% |
| BM25 | 13.01% |

We evaluated support for long documents on the academic benchmark Multilingual Long-Document Retrieval (MLDR), built on Wikipedia and mC4 and covering 12 typologically diverse languages. The English version has a median length of 2399 tokens and a 90th-percentile length of 7483 tokens, measured with the Llama 3.2 tokenizer.

| Open & Commercial Retrieval Models | Average Recall@5 on MLDR |
| --- | --- |
| llama-3.2-nv-embedqa-1b-v2 + llama-3-2-nemoretriever-rerankqa-500m | 65.39% |
| llama-3.2-nv-embedqa-1b-v2 + llama-3.2-nemoretriever-rerankqa-1b-v2 | 70.69% |
| llama-3.2-nv-embedqa-1b-v2 | 59.55% |
| nv-embedqa-mistral-7b-v2 | 43.24% |
| BM25 | 71.39% |

Properties

The evaluation datasets are based on three MTEB/BEIR TextQA datasets, the TechQA dataset, and the MIRACL multilingual retrieval datasets, all of which are public. Corpus sizes range from tens of thousands of documents up to 5 million, depending on the dataset.

Inference

Engine: TensorRT
Test Hardware: A100 PCIe/SXM and A10G

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

For more detailed information on ethical considerations for this model, please see the Model Card++ subcards.

Please report security vulnerabilities or NVIDIA AI Concerns here.