qwen / qwen2.5-7b-instruct

Model Overview

Description:

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:

  • Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to specialized expert models in these domains.
  • Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON (see the sketch after this list). Greater resilience to diverse system prompts, enhancing role-play implementation and condition-setting for chatbots.
  • Long-context support of up to 128K tokens, with generation of up to 8K tokens.
  • Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
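
As a concrete illustration of the instruction-following and JSON-output claims above, the following minimal sketch loads the public Hugging Face checkpoint with the transformers library and asks for a JSON reply. This is not NVIDIA's deployment path; the model ID, prompt, and generation settings are illustrative assumptions.

```python
# Minimal sketch (assumed setup): structured JSON output via Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # public checkpoint; assumed for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant. Reply only with valid JSON."},
    {"role": "user", "content": "List three European capitals with their countries."},
]

# Render the conversation with the model's chat template and tokenize it.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and decode only the newly produced tokens.
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```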

This model is ready for commercial/non-commercial use.

Third-Party Community Consideration

This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the Non-NVIDIA Qwen2.5-7B-Instruct Model Card.

License/Terms of Use

Qwen/Qwen2.5-7B-Instruct is licensed under the Apache 2.0 License.

References:

Blog, GitHub, Documentation, Technical Report

Model Architecture:

Architecture Type: Transformer

Network Architecture: Qwen2.5-7B-Instruct

Input:

Input Type(s): Text

Input Format(s): String

Input Parameters: 1D

Output:

Output Type(s): Text

Output Format(s): String

Output Parameters: 1D
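
The 1D text input is a single prompt string. As an illustration of what that string looks like for the instruct model, the sketch below spells out a ChatML-style single-turn prompt; in practice the tokenizer's chat template produces this form, and the exact system text shown here is an assumption.

```python
# Illustrative sketch of the 1D input string (ChatML-style chat prompt).
# The system message is an assumption; the tokenizer's chat template is the
# authoritative way to build this string.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "What is the capital of France?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```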

Model Version(s):

Qwen2.5-7B-Instruct

Training, Testing, and Evaluation Datasets:

Training Dataset:

Link: Unknown

Data Collection Method by dataset: Unknown

Labeling Method by dataset: Unknown

Properties: The pre-training dataset was expanded from the 7 trillion tokens used for Qwen2 to as many as 18 trillion tokens.

Testing Dataset:

Link: Unknown

Data Collection Method by dataset: Unknown

Labeling Method by dataset: Unknown

Properties: Unknown

Evaluation Dataset:

Link: See evaluation section of the Hugging Face Qwen2.5-7B-Instruct Model Card

Data Collection Method by dataset: Unknown

Labeling Method by dataset: Unknown

Properties: Unknown

Inference:

Engine: TensorRT-LLM

Test Hardware: NVIDIA L40S
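
For client-side inference, a deployed build of this model (for example an NVIDIA NIM microservice or a TensorRT-LLM/Triton server with an OpenAI-compatible frontend) can be queried with the standard OpenAI client. This is a hypothetical sketch; the base URL, API-key environment variable, and model name are assumptions to be replaced with your deployment's values.

```python
# Hypothetical client sketch against an OpenAI-compatible endpoint.
# base_url, NVIDIA_API_KEY, and the model name are assumptions for illustration.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed hosted endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # assumed credential variable
)

response = client.chat.completions.create(
    model="qwen/qwen2.5-7b-instruct",
    messages=[{"role": "user", "content": "Summarize Qwen2.5's improvements in one sentence."}],
    max_tokens=128,
    temperature=0.2,
)
print(response.choices[0].message.content)
```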

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns here.