Kimi-K2-Instruct-0905
Description
Kimi-K2-Instruct-0905 is the latest, most capable version of Kimi K2, a state-of-the-art Mixture-of-Experts (MoE) language model with 1 trillion total parameters and 32 billion active parameters. It delivers enhanced agentic coding intelligence, an improved frontend coding experience, and an extended context length of 256K tokens, enabling long-horizon tasks, tool calling, and chat completion.
This model is ready for commercial use.
Third-Party Community Consideration:
This model is not owned or developed by NVIDIA. This model has been developed and built to a third-party's requirements for this application and use case; see link to Non-NVIDIA Kimi-K2-Instruct-0905 Model Card.
License and Terms of Use:
GOVERNING TERMS: The trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Open Model License Agreement. Additional Information: Modified MIT License.
Deployment Geography:
Global
Use Case:
- Agentic coding intelligence
- Real-world coding agent tasks
- Frontend programming
- Long-horizon tasks
- Tool calling
- Chat completion
- General language understanding and generation
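Since tool calling and chat completion are listed use cases, the sketch below assembles a request payload in the OpenAI-compatible style the model supports. The model identifier string, the `get_weather` tool, and the helper name are illustrative assumptions, not confirmed API details.

```python
import json

def build_request(user_message: str) -> dict:
    """Assemble an OpenAI-style chat-completion payload with one tool."""
    return {
        "model": "kimi-k2-instruct-0905",  # assumed model identifier
        "messages": [
            {"role": "user", "content": user_message},
        ],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical tool for illustration
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
        "tool_choice": "auto",  # let the model decide when to invoke the tool
    }

payload = build_request("What's the weather in Berlin?")
print(json.dumps(payload, indent=2))
```

With `"tool_choice": "auto"`, the model autonomously decides whether to answer directly or emit a structured call to the declared tool.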
Release Date:
Build.NVIDIA.com 09/22/2025 via link
Hugging Face 09/05/2025 via link
Reference(s):
Model Architecture:
Architecture Type: Mixture-of-Experts (MoE) language model
Network Architecture: Transformer-based with MLA attention mechanism
Total Parameters: 1T (1 trillion)
Active Parameters: 32B (32 billion)
Vocabulary Size: 160K
Base Model: Kimi K2
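The parameter figures above can be sanity-checked with simple arithmetic; this sketch only uses the totals quoted on this card (1T total, 32B active) and is not an official layer-by-layer breakdown.

```python
# Back-of-envelope check of the MoE parameter figures quoted above.
total_params = 1_000_000_000_000   # 1T total parameters
active_params = 32_000_000_000     # 32B parameters activated per token

# Fraction of the network that participates in each forward pass.
active_fraction = active_params / total_params
print(f"{active_fraction:.1%} of parameters active per token")  # 3.2%
```

This sparsity is what lets a 1T-parameter model run with the per-token compute cost of a much smaller dense model.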
Input:
Input Types: Text
Input Formats: Natural language prompts, conversational messages, tool calling requests
Input Parameters: One-Dimensional (1D)
Other Input Properties: Max input tokens: 256K; supports tool calling, chat completion, and extended context processing
Input Context Length (ISL): 256K tokens
Output:
Output Type: Text
Output Format: Natural language responses, structured tool calls, code generation
Output Parameters: One-Dimensional (1D)
Other Output Properties: Max output tokens: configurable; tool calling, code generation, and conversational responses
Output Context Length (OSL): Configurable based on remaining context
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
Software Integration:
Runtime Engines: SGLang
Supported Hardware:
- NVIDIA Blackwell (B200)
Operating Systems: Linux
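A typical launch of the SGLang runtime for this checkpoint might look like the sketch below; the tensor-parallel degree and port are placeholder choices, and flags should be verified against the installed SGLang version.

```shell
# Serve the checkpoint with SGLang's OpenAI-compatible server.
# --tp 8 (tensor parallelism) and the port are illustrative choices.
python -m sglang.launch_server \
  --model-path moonshotai/Kimi-K2-Instruct-0905 \
  --tp 8 \
  --port 30000
```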
Model Version(s)
Kimi-K2-Instruct-0905
Training, Testing, and Evaluation Datasets:
Training Dataset
Data Modality: Text
Training Data Collection: Undisclosed
Training Labeling: Undisclosed
Training Properties: Pre-trained on large-scale text corpora with mixture-of-experts architecture, enhanced for agentic coding intelligence and tool calling capabilities
Testing Dataset
Testing Data Collection: Undisclosed
Testing Labeling: Undisclosed
Testing Properties: Regular testing on coding benchmarks and agentic intelligence tasks
Evaluation Dataset
Evaluation Benchmark Scores:
- SWE-Bench Verified: 69.2 ± 0.63
- SWE-Bench Multilingual: 55.9 ± 0.72
- Multi-SWE-Bench: 33.5 ± 0.28
- Terminal-Bench: 44.5 ± 2.03
- SWE-Dev: 66.6 ± 0.72
Evaluation Data Collection: Undisclosed
Evaluation Labeling: Undisclosed
Evaluation Properties: Evaluated on coding benchmarks with mean ± std over five independent runs
Inference
Acceleration Engine: SGLang
Test Hardware: H100
Additional Details
Key features include:
- Enhanced agentic coding intelligence with significant improvements on public benchmarks
- Improved frontend coding experience with advancements in aesthetics and practicality
- Extended context length from 128K to 256K tokens for better long-horizon task support
- Strong tool-calling capabilities with autonomous tool invocation
- Mixture-of-experts architecture with 384 experts and 8 selected experts per token
- MLA attention mechanism with SwiGLU activation function
- Block-fp8 format for efficient storage and deployment
- OpenAI/Anthropic-compatible API available on Moonshot AI platform
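The expert routing noted above (384 experts, 8 selected per token) can be sketched as a generic softmax top-k router in plain Python; this is a standard MoE gating illustration under that assumption, not Kimi K2's actual router implementation.

```python
import math
import random

NUM_EXPERTS = 384   # experts per MoE layer, as listed above
TOP_K = 8           # experts activated per token

def route(gate_logits: list[float], k: int = TOP_K) -> list[tuple[int, float]]:
    """Pick the top-k experts and renormalize their softmax weights."""
    # Softmax over all expert logits (subtract max for numerical stability).
    m = max(gate_logits)
    exps = [math.exp(x - m) for x in gate_logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep only the k highest-probability experts for this token.
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return [(i, probs[i] / norm) for i in top]

random.seed(0)
logits = [random.gauss(0.0, 1.0) for _ in range(NUM_EXPERTS)]
selected = route(logits)
print(len(selected))  # 8 experts chosen per token
```

Only the selected experts' weights are read for a given token, which is how the 32B active / 1T total split arises.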
Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here