The NVIDIA Retrieval QA Mistral 4B Reranking Model is optimized to produce a logit score that represents how relevant a document is to a given query.
The ranking model is a component of a text retrieval system that improves the system's overall accuracy. A text retrieval system often uses an embedding model (dense) or a lexical search (sparse) index to return relevant text passages for a given input. A ranking model then reranks those candidates into a final order. Because a ranking model takes question-passage pairs as input, it can compute cross-attention between the tokens of both texts. Applying a ranking model to every document in the knowledge base would not be feasible, so ranking models are typically deployed in combination with embedding models, as in the sketch below.
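The following sketch illustrates this retrieve-then-rerank pattern. The `embed_retrieve` and `rerank_score` functions are hypothetical stand-ins for an embedding retriever and this reranking model; they are not the NeMo Retriever API.

```python
# Illustrative two-stage retrieval pipeline (assumed interfaces, not the
# NeMo Retriever API): a fast embedding retriever narrows the knowledge
# base to a candidate set, then the reranker scores each pair with full
# cross-attention between query and passage.
from typing import Callable

def retrieve_and_rerank(
    query: str,
    knowledge_base: list[str],
    embed_retrieve: Callable[[str, list[str], int], list[str]],
    rerank_score: Callable[[str, str], float],
    candidates_k: int = 100,
    final_k: int = 5,
) -> list[str]:
    # Stage 1: cheap dense retrieval over the whole corpus.
    candidates = embed_retrieve(query, knowledge_base, candidates_k)
    # Stage 2: expensive cross-encoder scoring, only on the candidates.
    scored = [(rerank_score(query, passage), passage) for passage in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [passage for _, passage in scored[:final_k]]
```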
This model is ready for commercial use.
NVIDIA Retrieval QA Mistral 4B Reranking Model is part of NVIDIA NeMo Retriever, which provides state-of-the-art, commercially ready models and microservices optimized for the lowest latency and highest throughput. It features a production-ready information retrieval pipeline with enterprise support. The models that form the core of this solution have been trained using responsibly selected, auditable data sources. With multiple pre-trained models available as starting points, developers can also readily customize them for domain-specific use cases, such as help assistants for Information Technology or Human Resources and research assistants for Research & Development.
The use of this model is governed by the NVIDIA AI Foundation Models Community License Agreement and the Apache License 2.0.
The NVIDIA Retrieval QA Ranking model is most suitable for users who want to improve their retrieval systems by reranking a set of candidates for a given question.
Architecture Type: Transformer
Network Architecture: Fine-tuned Mistral 7B foundation model
The NVIDIA Retrieval QA Ranking Model is a transformer encoder: a LoRA-finetuned version of the Mistral-7B-v0.1 LLM that keeps only the first 16 layers (yielding a 4B-parameter model) for higher throughput. We employ bi-directional attention during finetuning for higher accuracy. The last-layer embeddings output by the model are combined with a mean pooling strategy, and a binary classification head is finetuned for the ranking task.
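As a rough illustration of the pooling and scoring described above, the sketch below mean-pools a backbone's final hidden states and applies a binary classification head. The class name and the 4096 hidden size (Mistral-7B's) are assumptions for illustration, not the shipped implementation.

```python
# Hypothetical reranker head: mean-pool the backbone's final hidden
# states over non-padding tokens, then score with a single-logit
# classification head.
import torch
import torch.nn as nn

class RerankerHead(nn.Module):
    def __init__(self, hidden_size: int = 4096):  # Mistral-7B hidden size
        super().__init__()
        self.classifier = nn.Linear(hidden_size, 1)  # one relevance logit

    def forward(self, hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (hidden_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
        return self.classifier(pooled).squeeze(-1)  # (batch,) relevance logits
```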
Ranking models for text ranking are typically trained as cross-encoders for sentence classification. This involves predicting the relevance of a sentence pair (for example, a question and a chunked passage). The cross-entropy loss is used to maximize the likelihood of passages that contain the information needed to answer the question and to minimize the likelihood of (negative) passages that do not.
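A minimal sketch of this objective, assuming each training example pairs one positive passage with several negatives:

```python
# Listwise cross-entropy over candidate passages: column 0 holds the
# positive passage's logit, the remaining columns hold negatives.
# An assumed setup for illustration, not the exact training recipe.
import torch
import torch.nn.functional as F

def ranking_loss(logits: torch.Tensor) -> torch.Tensor:
    """logits: (batch, 1 + num_negatives); index 0 is the positive passage."""
    targets = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, targets)
```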
The model was trained on public datasets described in the Dataset and Training section.
NVIDIA Retrieval QA Text Reranking Mistral 4B v3
Short name: NV-RerankQA-Mistral-4B-v3
The development of large-scale public open-QA datasets has enabled tremendous progress in powerful embedding models. However, one popular dataset, MS MARCO, restricts commercial use, limiting the use of models trained on it in commercial settings. To address this, we created our own training dataset blend from public QA datasets, each of which is licensed for commercial applications.
The training dataset details are as follows:
Use Case: Information retrieval for question answering over text documents.
Data Sources: Public datasets licensed for commercial use.
Language: English (US)
Volume: 300k samples from public datasets
Data Collection Method by dataset: Unknown
Labeling Method by dataset: Unknown
We evaluated the NVIDIA Retrieval QA Ranking Models against open and commercial retriever models from the literature on academic question-answering benchmarks: NQ, HotpotQA, and FiQA (Finance Q&A) from the BEIR benchmark, plus the TechQA dataset. The metric used was Recall@5 (a sketch of its computation follows the table below). As described above, the ranking model is applied to the output of an embedding model.
| Open & Commercial Retrieval Models | Average Recall@5 on NQ, HotpotQA, FiQA, and TechQA |
|---|---|
| NV-EmbedQA-E5-v5 + NV-RerankQA-Mistral-4B-v3 | 75.45% |
| NV-EmbedQA-E5-v5 | 62.07% |
| NV-EmbedQA-E5-v4 | 57.65% |
| E5-large-unsupervised | 48.03% |
| BM25 | 44.67% |
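For reference, a minimal sketch of the Recall@k metric reported above; the function is illustrative, not the benchmark harness itself.

```python
def recall_at_k(retrieved: list[str], relevant: set[str], k: int = 5) -> float:
    """Fraction of the relevant documents that appear in the top-k results."""
    if not relevant:
        return 0.0
    return len(set(retrieved[:k]) & relevant) / len(relevant)
```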
Data Collection Method by dataset: Unknown
Labeling Method by dataset: Unknown
Properties:
The evaluation datasets are based on three MTEB/BEIR TextQA datasets and the TechQA dataset, all of which are public. Dataset sizes range from tens of thousands up to 5M samples, depending on the dataset.
Input Type: Pair of Texts
Input Format: List of text pairs
Other Properties Related to Input: The model's maximum context length is 512 tokens. Texts longer than the maximum length must be either chunked or truncated.
Output Type: Floats
Output Format: List of float arrays
Other Properties Related to Output: Each output is a relevance score returned as a raw logit. The user can decide whether to apply a sigmoid activation function to the logits to convert them into probability scores.
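A small sketch of this optional post-processing step; the logit values are made up for illustration.

```python
# Optional sigmoid over the reranker's raw logits to obtain
# probability-like scores in (0, 1). Logit values are illustrative.
import math

def to_probability(logit: float) -> float:
    return 1.0 / (1.0 + math.exp(-logit))

raw_logits = [2.3, -0.7, 0.1]  # one score per (query, passage) pair
probabilities = [to_probability(x) for x in raw_logits]
```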
Runtime: NeMo Retriever Text Reranking NIM
Supported Hardware Microarchitecture Compatibility: NVIDIA Ampere, NVIDIA Hopper, NVIDIA Ada Lovelace
Supported Operating System(s): Linux
Engine: TensorRT
Test Hardware: See the Support Matrix in the NIM documentation.
We evaluated the models optimized for different hardware on a small sample dataset of 600 queries.
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ tab for the Explainability, Bias, Safety & Security, and Privacy subcards. Please report security vulnerabilities or NVIDIA AI Concerns here.