nvidia / rerank-qa-mistral-4b

Model Overview

Description

The NVIDIA Retrieval QA Ranking Model is optimized for producing a probability score that a given passage contains the information needed to answer a question. The ranking model is a component of a text retrieval system that improves overall accuracy. A text retrieval system typically uses a dense embedding model or a sparse lexical search index to return relevant text passages for a given input. A ranking model can then rerank the candidate passages into a final order. Because a ranking model takes query-passage pairs as input, it can compute cross-attention between the words of the query and the passage. Applying a ranking model to every document in the knowledge base would not be feasible, so ranking models are typically deployed in combination with embedding models.
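The two-stage pattern described above can be sketched as follows. This is a minimal illustration, not the actual NeMo Retriever API: `embed` and `rerank_score` are hypothetical stand-ins (a toy bag-of-words vector and a word-overlap score) for the real embedding and ranking models.

```python
# Sketch of a retrieve-then-rerank pipeline. The scoring functions are
# toy stand-ins for a real embedding model and cross-encoder reranker.

def embed(text):
    # Stand-in dense "embedding": bag-of-words counts over a tiny vocabulary.
    vocab = ["gpu", "memory", "cuda", "license", "dataset"]
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def rerank_score(query, passage):
    # Stand-in cross-encoder score: fraction of query words in the passage.
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / len(q)

def retrieve_then_rerank(query, corpus, k=3):
    # Stage 1: cheap dense retrieval over the whole corpus.
    qv = embed(query)
    candidates = sorted(corpus, key=lambda d: cosine(qv, embed(d)),
                        reverse=True)[:k]
    # Stage 2: expensive pairwise reranking over only the top-k candidates.
    return sorted(candidates, key=lambda d: rerank_score(query, d),
                  reverse=True)
```

The key point is that the expensive pairwise model runs only on the top-k candidates from the cheap first stage, which is why reranking every document in the knowledge base is unnecessary.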

NVIDIA Retrieval QA Ranking Model is a part of NVIDIA NeMo Retriever, which provides state-of-the-art, commercially-ready models and microservices, optimized for the lowest latency and highest throughput. It features a production-ready information retrieval pipeline with enterprise support. The models that form the core of this solution have been trained using responsibly selected, auditable data sources. With multiple pre-trained models available as starting points, developers can also readily customize them for their domain-specific use cases, such as Information Technology, Human Resource help assistants, and Research & Development research assistants.

Terms of use

The use of this model is governed by the NVIDIA NeMo Foundational Models Evaluation License Agreement.

Reference(s)

N/A

Model Architecture: Mistral-4B Ranker

Architecture Type: Transformer

Network Architecture: Fine-tuned Mistral-7B-v0.1 LLM (only first 16 layers)

The NVIDIA Retrieval QA Ranking Model is a transformer encoder: a LoRA-finetuned version of the Mistral-7B-v0.1 LLM that uses only the first 16 layers for higher throughput. The embedding of the last token output by the decoder is used as the pooling strategy, and a binary classification head is fine-tuned for the ranking task.

Ranking models for text ranking are typically trained using a cross-encoder architecture for sentence-pair classification. The model jointly scores a pair of texts (for example, a query and a chunked passage). A binary cross-entropy loss is used to maximize the likelihood of passages that contain the information needed to answer the query and to minimize the likelihood of passages that do not.
We train the model on private and public datasets described in the Dataset and Training section. The model currently supports a maximum input of 512 tokens.
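The binary cross-entropy objective described above can be written out concretely. A minimal sketch, assuming a raw scalar logit per query-passage pair and a 0/1 relevance label (the function names are illustrative, not from the actual training code):

```python
import math

def bce_loss(logit, label):
    # Binary cross-entropy on one raw logit; label is 1 if the passage
    # contains the information to answer the query, 0 otherwise.
    p = 1.0 / (1.0 + math.exp(-logit))          # sigmoid
    return -(label * math.log(p) + (1 - label) * math.log(1.0 - p))

def batch_bce(logits, labels):
    # Mean loss over a batch of scored query-passage pairs.
    return sum(bce_loss(z, y) for z, y in zip(logits, labels)) / len(logits)
```

Minimizing this loss pushes logits for answer-bearing passages up and logits for non-answering passages down. (A production implementation would use a numerically stable fused form such as BCE-with-logits.)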

Input

Input Type: Pair of texts

Input Format: list of text pairs

Output

Output Type: floats

Output Format: list of floats, each a probability score (or a raw logit). The user can decide whether a sigmoid activation function is applied to the logits.
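The optional sigmoid mentioned above is a one-line transformation. A minimal sketch, assuming the model has already returned a list of raw logits:

```python
import math

def to_probabilities(logits):
    # Map raw reranker logits to probability scores in (0, 1).
    return [1.0 / (1.0 + math.exp(-z)) for z in logits]
```

Because the sigmoid is strictly monotonic, sorting candidates by probability yields exactly the same ranking as sorting by raw logit; applying it matters only when the scores are interpreted as probabilities.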

Model Version(s)

NVIDIA Retrieval QA Text Reranking Mistral 4B-1

Training Dataset & Evaluation

Training Dataset

The development of large-scale public open-QA datasets has enabled tremendous progress in powerful embedding models. However, one popular dataset, MSMARCO, restricts commercial licensing, limiting the use of models trained on it in commercial settings. To address this, we created our own internal open-domain QA dataset to train commercially viable text QA models. For NVIDIA's proprietary data collection, we searched passages from web logs and selected a collection of passages relevant to customer use cases for annotation by NVIDIA's internal data annotation team.

The training dataset details are as follows:

Use Case: Information retrieval for question answering over text documents.

Data Sources:

  • Public datasets licensed for commercial use.
  • Text from public websites.
  • Annotations created by NVIDIA’s internal team.

Language: English (US)

Domains: Knowledge, Description, Numeric (unit, time), Entity, Location, Person

Volume: 400k samples from public datasets

High Level Schema:

  • query: question text
  • doc: full document that contains the answer
  • chunk: section of the document that contains the answer
  • relevancy label: rating of how relevant the passage is to the question
  • span: exact token range in the chunk that contains the answer
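The schema above can be illustrated with a single record. All values below are invented for illustration, and the label scale and span encoding are assumptions, not taken from the actual dataset:

```python
# Hypothetical training record matching the schema fields above.
record = {
    "query": "when was the first gpu released",
    "doc": "(full document text containing the answer)",
    "chunk": "NVIDIA released the GeForce 256, marketed as the first GPU, in 1999.",
    "relevancy label": 2,   # assumed scale: 0 = irrelevant ... 2 = fully answers
    "span": (2, 6),         # assumed token range of the answer inside the chunk
}
```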

Evaluation Results

We evaluated the NVIDIA Retrieval QA Ranking Model against open and commercial retriever models from the literature on academic benchmarks: NQ, HotpotQA, and FiQA (Finance Q&A) from the BeIR benchmark. The metric used was Recall@5. As described above, the ranking model is applied to the output of an embedding model.

| Open & Commercial Retrieval Models | Average Recall@5 on NQ, HotpotQA, FiQA datasets |
| --- | --- |
| NVIDIA Retrieval QA Embedding + NVIDIA Retrieval QA Ranking (Mistral-4B) | 70.60% |
| NVIDIA Retrieval QA Embedding | 55.95% |
| E5-Large_unsupervised | 47.57% |

We also evaluated our models on real internal customer datasets from the telco, IT, consulting, and energy industries. The metric was Recall@5, to emulate a retrieval-augmented generation (RAG) scenario in which the top five most relevant passages are provided as context in the prompt for the LLM that answers the question. We compared our model's information retrieval accuracy to a number of well-known embedding models made available by the AI community, including ones trained on non-commercial datasets (marked with "*").

| Retrieval Model | Average Recall@5 on Internal Customer Datasets |
| --- | --- |
| NVIDIA Retrieval QA Embedding + NVIDIA Retrieval QA Ranking | 79.22% |
| NVIDIA Retrieval QA | 74.3% |
| DRAGON* | 72.7% |
| E5-Large* | 71.7% |
| BGE* | 71.1% |
| GTR* | 71.0% |
| Contriever* | 69.0% |
| GTE* | 63.9% |
| E5-Large_unsupervised | 61.6% |
| BM25 | 55.6% |
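The Recall@5 metric used in both evaluations can be computed as follows. A minimal sketch, assuming each dataset provides a ranked list of retrieved passage IDs and the set of relevant passage IDs per query:

```python
def recall_at_k(retrieved_ids, relevant_ids, k=5):
    # Fraction of the relevant passages that appear among the top-k results.
    top_k = set(retrieved_ids[:k])
    relevant = set(relevant_ids)
    return len(top_k & relevant) / len(relevant)
```

The benchmark numbers above would then be the mean of this per-query value over all queries in a dataset.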

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards here. Please report security vulnerabilities or NVIDIA AI Concerns here.

Intended use

The NVIDIA Retrieval QA Ranking model is most suitable for users who want to improve their retrieval systems by reranking a set of candidates for a given question.

Ethical use: Technology can have a profound impact on people and the world, and NVIDIA is committed to enabling trust and transparency in AI development. NVIDIA encourages users to adopt principles of AI ethics and trustworthiness to guide their business decisions by following the guidelines in the NVIDIA AI Foundation Models Community License Agreement.

Limitations

The model was trained on data, originally crawled from the Internet, that may contain toxic language and societal biases. Therefore, the model may amplify those biases, for example by associating certain genders with certain social stereotypes.