GLM-5
Description
GLM-5 is a next-generation large language model targeting complex systems engineering and long-horizon agentic tasks. Scaling from 355B parameters (32B active) in GLM-4.5 to 744B parameters (40B active), GLM-5 also grows the pre-training corpus from 23T to 28.5T tokens. The model integrates DeepSeek Sparse Attention (DSA), which substantially reduces deployment cost while preserving its 205K-token long-context capacity. GLM-5 is post-trained with a novel asynchronous RL infrastructure called "slime" that substantially improves training throughput and efficiency, enabling more fine-grained post-training iterations. GLM-5 delivers significant improvements over GLM-4.7 across a wide range of academic benchmarks and achieves best-in-class performance among open-source models on reasoning, coding, and agentic tasks.
This model is ready for commercial/non-commercial use.
Third-Party Community Consideration:
This model is not owned or developed by NVIDIA. This model has been developed and built to a third-party's requirements for this application and use case; see link to Non-NVIDIA GLM-5 Model Card
License and Terms of Use:
GOVERNING TERMS: The trial service is governed by the NVIDIA API Trial Terms of Service; and the use of this model is governed by the NVIDIA Open Model License. ADDITIONAL INFORMATION: MIT License.
You are responsible for ensuring that your use of NVIDIA provided models complies with all applicable laws.
Deployment Geography:
Global
Use Case:
GLM-5 is designed for complex systems engineering and long-horizon agentic tasks, including advanced reasoning, coding, tool use, web browsing, terminal operations, and multi-step agentic workflows. The model excels in scenarios requiring long-context understanding, complex reasoning, and interactive agentic behavior.
Release Date:
Build.NVIDIA.com: 2/17/2026 via link
Huggingface: 2/11/2026 via link
Reference(s):
- GLM-5 Technical Blog
- GLM-5 Model Card
- Z.ai API Platform
- GLM-5 Chat Interface
- Slime RL Infrastructure
Model Architecture:
Architecture Type: Transformer
Network Architecture: Mixture of Experts (MoE) with DeepSeek Sparse Attention (DSA)
Total Parameters: 744B (40B active)
Pre-training Tokens: 28.5T
GLM-5 uses a Mixture of Experts architecture with DeepSeek Sparse Attention to reduce deployment costs while maintaining long-context capacity. The model employs an asynchronous RL infrastructure called "slime" for efficient post-training.
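To give intuition for why the active parameter count (40B) is far smaller than the total (744B), the sketch below shows a generic top-k expert-routing layer: each token is dispatched to only a few experts, so only a fraction of the layer's weights participate in any forward pass. This is an illustrative pattern with placeholder sizes, not GLM-5's actual routing code.

```python
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    """Generic top-k Mixture-of-Experts feed-forward layer (illustrative only)."""

    def __init__(self, d_model=1024, d_ff=4096, n_experts=64, k=4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)   # scores each expert per token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        self.k = k

    def forward(self, x):                              # x: (num_tokens, d_model)
        weights, idx = self.router(x).topk(self.k, dim=-1)
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for t in range(x.size(0)):                     # only k of n_experts run per token,
            for w, e in zip(weights[t], idx[t]):       # so "active" params << total params
                out[t] += w * self.experts[int(e)](x[t])
        return out

# Example: 8 tokens each flow through 4 of 64 experts.
y = TopKMoE()(torch.randn(8, 1024))
```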
Input:
Input Types: Text
Input Formats: String
Input Parameters: One Dimensional (1D)
Other Input Properties: Supports multi-turn conversations with system prompts, user messages, and assistant responses. Supports thinking mode and tool calling. Maximum context length: 205K tokens.
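As a minimal illustration of the input format, the sketch below sends a multi-turn conversation (system prompt plus user messages) to an OpenAI-compatible endpoint such as a local SGLang server or build.nvidia.com. The base_url, API key handling, and served model name are assumptions; thinking mode and tool calling are typically toggled through endpoint-specific request parameters documented by the serving stack.

```python
from openai import OpenAI

# Assumed local SGLang endpoint; swap in your actual base_url, key, and model name.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

messages = [
    {"role": "system", "content": "You are a careful coding assistant."},
    {"role": "user", "content": "Summarize what sparse attention changes at inference time."},
]

resp = client.chat.completions.create(
    model="glm-5",          # assumed served model name
    messages=messages,
    max_tokens=1024,
)
print(resp.choices[0].message.content)

# Multi-turn use: append the reply and continue the conversation.
messages.append({"role": "assistant", "content": resp.choices[0].message.content})
messages.append({"role": "user", "content": "Now give a one-sentence version."})
```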
Output:
Output Types: Text
Output Format: String
Output Parameters: One Dimensional (1D)
Other Output Properties: Supports structured JSON output, function/tool calling, and reasoning content. Output can include explicit "thinking" traces when enabled.
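The sketch below shows one way to exercise function/tool calling and read a tool call back from the response, again through an OpenAI-compatible client. The tool itself (`get_weather`), the served model name, and the `reasoning_content` attribute used for thinking traces are hypothetical placeholders and may differ by serving stack.

```python
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

# Standard OpenAI-style tool schema; get_weather is a hypothetical tool for illustration.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="glm-5",  # assumed served model name
    messages=[{"role": "user", "content": "What is the weather in Berlin?"}],
    tools=tools,
)

msg = resp.choices[0].message
if msg.tool_calls:
    call = msg.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))

# Some serving stacks also expose the thinking trace on the message object,
# e.g. getattr(msg, "reasoning_content", None) -- field name is an assumption.
```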
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
Software Integration:
Runtime Engines:
- SGLang
Supported Hardware:
- NVIDIA Blackwell
- NVIDIA Hopper
Operating Systems: Linux
The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
Inference
Acceleration Engine: SGLang
Test Hardware:
NVIDIA B200
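A minimal serving sketch, assuming a single multi-GPU node (e.g., 8x B200) and a Hugging Face checkpoint: the script below launches SGLang's OpenAI-compatible server, which the request examples above can then target. The repository id, tensor-parallel degree, and port are illustrative assumptions, not verified deployment settings.

```python
import subprocess

MODEL_PATH = "zai-org/GLM-5"  # assumed Hugging Face repo id; replace with the actual checkpoint

# Launch the SGLang OpenAI-compatible server (blocks until the server exits).
cmd = [
    "python3", "-m", "sglang.launch_server",
    "--model-path", MODEL_PATH,
    "--tp", "8",                 # tensor parallelism across 8 GPUs (assumption)
    "--host", "0.0.0.0",
    "--port", "30000",
    "--trust-remote-code",
]
subprocess.run(cmd, check=True)
```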
Model Version(s)
GLM-5
Short Name: glm-5
Training, Testing, and Evaluation Datasets:
Training Dataset
Data Modality: Text
Text Training Data Size: More than 10 Trillion Tokens
Training Data Collection: Undisclosed
Training Labeling: Undisclosed
Training Properties: Pre-trained on 28.5T tokens. Post-training uses asynchronous RL infrastructure ("slime") for efficient reinforcement learning at scale.
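For intuition about what "asynchronous" buys here, the sketch below shows the generic decoupling pattern such infrastructures rely on: rollout workers keep generating trajectories with a (possibly slightly stale) policy while the learner consumes them from a queue, so neither side blocks on the other. This is a toy illustration of the pattern, not slime's actual implementation.

```python
import queue
import threading
import time

rollout_queue = queue.Queue(maxsize=64)

def rollout_worker(worker_id: int) -> None:
    # Stand-in for an inference engine generating rollouts with the current policy.
    for step in range(10):
        time.sleep(0.01)  # pretend generation latency
        rollout_queue.put({"worker": worker_id, "step": step, "reward": 1.0})

def learner() -> None:
    # Stand-in for the trainer: consume trajectories as they arrive, never
    # waiting for every worker to finish a synchronized generation round.
    for _ in range(30):
        traj = rollout_queue.get()
        # ... gradient update here; periodically broadcast fresh weights to workers

workers = [threading.Thread(target=rollout_worker, args=(i,)) for i in range(3)]
trainer = threading.Thread(target=learner)
for t in workers + [trainer]:
    t.start()
for t in workers + [trainer]:
    t.join()
```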
Testing Dataset
Testing Data Collection: Undisclosed
Testing Labeling: Undisclosed
Testing Properties: Undisclosed
Evaluation Dataset
Evaluation Benchmark Score: Evaluated on multiple benchmarks including HLE, AIME 2026, GPQA-Diamond, SWE-bench, Terminal-Bench 2.0, BrowseComp, τ²-Bench, CyberGym, MCP-Atlas, Tool-Decathlon, and Vending Bench 2.
Evaluation Data Collection: Automated
Evaluation Labeling: Human
Evaluation Properties: GLM-5 achieves best-in-class performance among all open-source models on reasoning, coding, and agentic tasks, closing the gap with frontier models.
Evaluation Results
We evaluated GLM-5 across multiple benchmark datasets:
| Benchmark | GLM-5 | GLM-4.7 | DeepSeek-V3.2 |
|---|---|---|---|
| HLE | 30.5 | 24.8 | 25.1 |
| HLE (w/ Tools) | 50.4 | 42.8 | 40.8 |
| AIME 2026 I | 92.7 | 92.9 | 92.7 |
| GPQA-Diamond | 86.0 | 85.7 | 82.4 |
| SWE-bench Verified | 77.8 | 73.8 | 73.1 |
| SWE-bench Multilingual | 73.3 | 66.7 | 70.2 |
| Terminal-Bench 2.0 | 56.2 / 60.7 | 41.0 | 39.3 |
| BrowseComp | 62.0 | 52.0 | 51.4 |
| τ²-Bench | 89.7 | 87.4 | 85.3 |
Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case, and address unforeseen product misuse.
Please report model quality, risk, security vulnerabilities, or NVIDIA AI Concerns here.
