qwen / qwen2.5-coder-32b-instruct

Model Overview

Description:

Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). The series covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements over CodeQwen1.5:

  • Significant improvements in code generation, code reasoning, and code fixing. Training was scaled to 5.5 trillion tokens, including source code, text-code grounding data, and synthetic data. Qwen2.5-Coder-32B is a state-of-the-art open-source code LLM, with coding abilities matching those of GPT-4o.
  • A more comprehensive foundation for real-world applications such as code agents: the model not only enhances coding capabilities but also maintains its strengths in mathematics and general competencies.
  • Long-context support up to 32K tokens.

This model is ready for commercial/non-commercial use.
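As an instruct-tuned model, Qwen2.5-Coder-32B-Instruct expects chat-formatted prompts. A minimal sketch of the ChatML-style template used by Qwen2-family instruct models (the role names and special tokens here reflect the upstream tokenizer's chat template; in practice, prefer the tokenizer's own `apply_chat_template`):

```python
def build_chatml_prompt(messages):
    """Render chat messages in the ChatML-style format used by Qwen2-family
    instruct models. A hand-rolled sketch; the production path is the
    tokenizer's apply_chat_template."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    # Trailing assistant header signals the model to begin generating.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that checks if a number is prime."},
])
```
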

Third-Party Community Consideration

This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the Non-NVIDIA Qwen2.5-Coder-32B-Instruct Model Card for details.

License/Terms of Use

Qwen/Qwen2.5-Coder-32B-Instruct is licensed under the Apache License 2.0.

References:

Blog, Github, Technical Report

Model Architecture:

Architecture Type: Transformer (decoder-only)

Network Architecture: Qwen2.5-Coder-32B-Instruct

Input:

Input Type(s): Text

Input Format(s): String

Input Parameters: 1D

Output:

Output Type(s): Text

Output Format: String

Output Parameters: 1D

Model Version(s):

Qwen2.5-Coder-32B-Instruct

Training, Testing, and Evaluation Datasets:

Training Dataset:

Link: Unknown

Data Collection Method by dataset: Hybrid: Automated, Human

Labeling Method by dataset: Hybrid: Automated, Synthetic

Properties: The training dataset contains over 5.5 trillion tokens across 92 programming languages, with a mixture ratio of 70% code, 20% text, and 10% math, sourced from GitHub repositories, pull requests, commits, Jupyter notebooks, and Kaggle datasets.
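The stated 70/20/10 mixture over roughly 5.5 trillion tokens implies the following approximate per-category token budgets (a back-of-the-envelope calculation, not official figures):

```python
TOTAL_TOKENS = 5.5e12  # ~5.5 trillion training tokens, per the model card
mixture = {"Code": 0.70, "Text": 0.20, "Math": 0.10}

# Approximate per-category token budgets implied by the mixture ratio.
budgets = {name: TOTAL_TOKENS * ratio for name, ratio in mixture.items()}
for name, tokens in budgets.items():
    print(f"{name}: ~{tokens / 1e12:.2f}T tokens")
```

That is, roughly 3.85T code tokens, 1.10T text tokens, and 0.55T math tokens.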

Testing Dataset:

Link: Unknown

Data Collection Method by dataset: Unknown

Labeling Method by dataset: Unknown

Properties: Unknown

Evaluation Dataset:

Link: See evaluation section of the Hugging Face Qwen2.5-Coder-32B-Instruct Model Card

Data Collection Method by dataset: Hybrid: Human, Automated

Labeling Method by dataset: Hybrid: Automated, Human

Properties: The evaluation datasets comprise multiple benchmarks, including HumanEval (164 Python programming tasks), MBPP (974 programming problems), LiveCodeBench (over 600 coding problems), and additional benchmarks covering code generation, completion, reasoning, and debugging capabilities.
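Benchmarks such as HumanEval and MBPP typically report pass@k. The standard unbiased estimator (from the HumanEval reference implementation) computes the probability that at least one of k samples, drawn from n generated samples of which c pass the tests, is correct:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k),
    where n = total samples generated, c = samples that pass the tests."""
    if n - c < k:
        # Fewer than k failing samples: every draw of k contains a pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 2 samples of which 1 passes, pass@1 is 0.5.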

Inference:

Engine: TensorRT-LLM

Test Hardware: NVIDIA L40S
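Deployments built on TensorRT-LLM (e.g. NVIDIA NIM) typically expose an OpenAI-compatible chat completions endpoint. A sketch of the request payload such an endpoint would accept; the model identifier string here mirrors this card's title, but the exact ID and endpoint URL depend on the deployment and are assumptions:

```python
import json

# Illustrative request body for an OpenAI-compatible chat completions API;
# the model ID below is assumed from this card's title, not confirmed.
payload = {
    "model": "qwen/qwen2.5-coder-32b-instruct",
    "messages": [
        {"role": "user", "content": "Write a function that reverses a string in Python."}
    ],
    "max_tokens": 256,
    "temperature": 0.2,
}
body = json.dumps(payload)
```
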

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns here.