Llama3-ChatQA-1.5-70B Model card
Model Information
Model Summary
Author: NVIDIA
Description
Llama3-ChatQA-1.5 excels at conversational question answering (QA) and retrieval-augmented generation (RAG). Llama3-ChatQA-1.5 is developed using an improved training recipe from the ChatQA paper, and it is built on top of the Llama-3 base model. Specifically, we incorporate more conversational QA data to enhance its tabular and arithmetic calculation capabilities. Llama3-ChatQA-1.5 has two variants: Llama3-ChatQA-1.5-8B and Llama3-ChatQA-1.5-70B.
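A minimal sketch of how one might query the model with Hugging Face transformers. The single-string chat format below (a system turn, optional retrieved context, then alternating "User:"/"Assistant:" turns) is an assumption based on the ChatQA family's documented prompt style; verify the exact template, and the system-turn wording, against the official model card before use.

```python
def build_prompt(system, context, messages):
    """Flatten a chat (and optional retrieved context) into the
    single-string prompt format assumed above."""
    parts = [system]
    if context:
        parts.append(context)
    for turn in messages:
        role = "User" if turn["role"] == "user" else "Assistant"
        parts.append(f"{role}: {turn['content']}")
    parts.append("Assistant:")  # generation continues from here
    return "\n\n".join(parts)


if __name__ == "__main__":
    # Loading the 70B checkpoint requires a multi-GPU setup;
    # swap in nvidia/Llama3-ChatQA-1.5-8B for a lighter test.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "nvidia/Llama3-ChatQA-1.5-70B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    prompt = build_prompt(
        system="System: This is a chat between a user and an AI assistant.",
        context=None,  # for RAG, pass retrieved passages here
        messages=[{"role": "user", "content": "What is retrieval-augmented generation?"}],
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, not the echoed prompt.
    print(tokenizer.decode(out[0][inputs.input_ids.shape[-1]:],
                           skip_special_tokens=True))
```

For RAG use, the retrieved passages would be concatenated into the `context` argument so they appear between the system turn and the conversation.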
Terms of Use
By accessing this model, you are agreeing to the NVIDIA AI Foundation Models Community License.
Additional Information: META LLAMA 3 COMMUNITY LICENSE AGREEMENT.
Reference:
@article{liu2024chatqa,
  title={ChatQA: Surpassing GPT-4 on Conversational QA and RAG},
  author={Liu, Zihan and Ping, Wei and Roy, Rajarshi and Xu, Peng and Lee, Chankyu and Shoeybi, Mohammad and Catanzaro, Bryan},
  journal={arXiv preprint arXiv:2401.10225},
  year={2024}}
Resources and Technical Documentation
Model Architecture:
Architecture Type: Transformer Decoder Network 
Network Architecture: Llama-3 
Inputs and outputs
Input:
Input Type(s): Text 
Input Format(s): String 
Input Parameters: One-Dimensional (1D) 
Output:
Output Type(s): Text 
Output Format(s): String 
Output Parameters: One-Dimensional (1D) 
Ethical Considerations (For NVIDIA Models Only):
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards [Insert Link to Model Card++ here]. Please report security vulnerabilities or NVIDIA AI Concerns here.
