Kimi-K2.5
Description
Kimi K2.5 is an open-source, native multimodal agentic model built through continual pretraining on approximately 15 trillion mixed visual and text tokens atop Kimi-K2-Base. It seamlessly integrates vision and language understanding with advanced agentic capabilities, supporting both instant and thinking modes, as well as conversational and agentic paradigms.
This model is ready for commercial/non-commercial use.
Third-Party Community Consideration:
This model is not owned or developed by NVIDIA. This model has been developed and built to a third-party's requirements for this application and use case; see link to Non-NVIDIA Kimi-K2.5 Model Card
License and Terms of Use:
GOVERNING TERMS: This trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Open Model License Agreement. Additional Information: Modified MIT License.
Deployment Geography:
Global
Use Case:
Designed for developers and enterprises building multi-modal AI agents for scenario-specific automation, visual analysis applications, advanced web development with autonomous image search and layout iteration, coding assistance, and tool-augmented agentic workflows.
Release Date:
Build.NVIDIA.com: 01/26/2026 via link
Huggingface: 01/26/2026 via link
Reference(s):
Model Architecture:
Architecture Type: Transformer
Network Architecture: Mixture-of-Experts (MoE)
Total Parameters: 1T
Activated Parameters: 32B
Number of Layers: 61 (including 1 Dense layer)
Attention Hidden Dimension: 7168
MoE Hidden Dimension (per Expert): 2048
Number of Attention Heads: 64
Number of Experts: 384
Selected Experts per Token: 8
Number of Shared Experts: 1
Vocabulary Size: 160K
Attention Mechanism: MLA (Multi-head Latent Attention)
Activation Function: SwiGLU
Vision Encoder: MoonViT
Vision Encoder Parameters: 400M
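To make the routing numbers above concrete: for each token a router scores all experts and activates only the top 8 (plus the shared expert), which is how the 1T-parameter model keeps roughly 32B parameters active per token. The snippet below is a minimal, illustrative top-k routing sketch in PyTorch with toy dimensions; it is not the model's actual implementation.

```python
import torch

# Toy sizes for illustration only; the real model uses hidden size 7168,
# 384 routed experts, top-8 selection, and 1 always-active shared expert.
HIDDEN, NUM_EXPERTS, TOP_K = 64, 16, 4

def moe_route(tokens: torch.Tensor, router_w: torch.Tensor):
    """Return per-token top-k expert indices and their renormalized gate weights."""
    logits = tokens @ router_w                              # [num_tokens, NUM_EXPERTS]
    gates = torch.softmax(logits, dim=-1)                   # routing probabilities
    weights, experts = torch.topk(gates, TOP_K, dim=-1)     # keep only the TOP_K largest gates
    weights = weights / weights.sum(dim=-1, keepdim=True)   # renormalize over the selected experts
    return experts, weights

tokens = torch.randn(3, HIDDEN)                             # three toy token representations
router_w = torch.randn(HIDDEN, NUM_EXPERTS)
experts, weights = moe_route(tokens, router_w)
print(experts)   # which TOP_K experts each token is dispatched to
print(weights)   # mixing weights applied to those experts' outputs
```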
Input:
Input Types: Image, Video, Text
Input Formats: Red, Green, Blue (RGB), String
Input Parameters: Two-Dimensional (2D), One-Dimensional (1D)
Other Input Properties: Supports image, video, PDF, and text inputs. Video input is experimental. Visual features are compressed via spatial-temporal pooling before projection into the LLM.
Input Context Length: 256K tokens
Key Capabilities
- Native Multimodality: Pre-trained on vision-language tokens, excels in visual knowledge, cross-modal reasoning, and agentic tool use grounded in visual inputs
- Coding with Vision: Generates code from visual specifications (UI designs, video workflows) and autonomously orchestrates tools for visual data processing
- Agent Swarm: Transitions from single-agent scaling to a self-directed, coordinated swarm-like execution scheme; decomposes complex tasks into parallel sub-tasks executed by dynamically instantiated, domain-specific agents
- Multi-modal Agents: Building general agents tailored for unique, scenario-specific automation
- Advanced Web Development: Using image search tools to autonomously find assets and refine dynamic layouts
- Visual Analysis: High-level comprehension and reasoning for image and video data
- Complex Tool Use: Agentic search and tool-augmented workflows
Output:
Output Types: Text
Output Format: String
Output Parameters: One-Dimensional (1D)
Other Output Properties: Generates text responses, including reasoning, analysis, and code, based on multi-modal inputs. Supports both Thinking mode (with reasoning traces) and Instant mode.
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
Software Integration:
Runtime Engines:
- vLLM
- SGLang
- KTransformers
Supported Hardware:
- NVIDIA Hopper: H100, H200
Preferred Operating Systems: Linux
The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
Model Version(s)
Kimi K2.5 v1.0
Training, Testing, and Evaluation Datasets:
Training Dataset
Data Modality: Image, Text, Video
Training Data Collection: Approximately 15 trillion mixed visual and text tokens
Training Labeling: Undisclosed
Training Properties: Continual pretraining on Kimi-K2-Base
Testing Dataset
Testing Data Collection: Undisclosed
Testing Labeling: Undisclosed
Testing Properties: Undisclosed
Evaluation Dataset
Evaluation Data Collection: Automated
Evaluation Labeling: Human
Evaluation Properties: Evaluated using Kimi Vendor Verifier on standard multi-modal benchmarks. Results reported with Thinking mode enabled, temperature=1.0, top-p=0.95, context length 256K tokens.
Evaluation Benchmark Scores
| Benchmark | Kimi K2.5 (Thinking) | GPT-5.2 (xhigh) | Claude 4.5 Opus (Extended Thinking) | Gemini 3 Pro (High Thinking Level) | DeepSeek V3.2 (Thinking) | Qwen3-VL-235B-A22B-Thinking |
|---|---|---|---|---|---|---|
| Reasoning & Knowledge | | | | | | |
| HLE-Full | 30.1 | 34.5 | 30.8 | 37.5 | 25.1† | - |
| HLE-Full (w/ tools) | 50.2 | 45.5 | 43.2 | 45.8 | 40.8† | - |
| AIME 2025 | 96.1 | 100 | 92.8 | 95.0 | 93.1 | - |
| HMMT 2025 (Feb) | 95.4 | 99.4 | 92.9* | 97.3* | 92.5 | - |
| IMO-AnswerBench | 81.8 | 86.3 | 78.5* | 83.1* | 78.3 | - |
| GPQA-Diamond | 87.6 | 92.4 | 87.0 | 91.9 | 82.4 | - |
| MMLU-Pro | 87.1 | 86.7* | 89.3* | 90.1 | 85.0 | - |
| Vision & Video | | | | | | |
| MMMU-Pro | 78.5 | 79.5* | 74.0 | 81.0 | - | 69.3 |
| CharXiv (RQ) | 77.5 | 82.1 | 67.2* | 81.4 | - | 66.1 |
| MathVision | 84.2 | 83.0 | 77.1* | 86.1* | - | 74.6 |
| MathVista (mini) | 90.1 | 82.8* | 80.2* | 89.8* | - | 85.8 |
| ZeroBench | 9 | 9* | 3* | 8* | - | 4* |
| ZeroBench (w/ tools) | 11 | 7* | 9* | 12* | - | 3* |
| OCRBench | 92.3 | 80.7* | 86.5* | 90.3* | - | 87.5 |
| OmniDocBench 1.5 | 88.8 | 85.7 | 87.7* | 88.5 | - | 82.0* |
| InfoVQA (val) | 92.6 | 84* | 76.9* | 57.2* | - | 89.5 |
| SimpleVQA | 71.2 | 55.8* | 69.7* | 69.7* | - | 56.8* |
| WorldVQA | 46.3 | 28.0 | 36.8 | 47.4 | - | 23.5 |
| VideoMMMU | 86.6 | 85.9 | 84.4* | 87.6 | - | 80.0 |
| MMVU | 80.4 | 80.8* | 77.3 | 77.5 | - | 71.1 |
| MotionBench | 70.4 | 64.8 | 60.3 | 70.3 | - | - |
| VideoMME | 87.4 | 82.1 | - | 88.4* | - | 79.0 |
| LongVideoBench | 79.8 | 76.5 | 67.2 | 77.7* | - | 65.6* |
| LVBench | 75.9 | - | - | 73.5* | - | 63.6 |
| Coding | | | | | | |
| SWE-Bench Verified | 76.8 | 80.0 | 80.9 | 76.2 | 73.1 | - |
| SWE-Bench Pro | 50.7 | 55.6 | 55.4* | - | - | - |
| SWE-Bench Multilingual | 73.0 | 72.0 | 77.5 | 65.0 | 70.2 | - |
| Terminal Bench 2.0 | 50.8 | 54.0 | 59.3 | 54.2 | 46.4 | - |
| PaperBench | 63.5 | 63.7* | 72.9* | - | 47.1 | - |
| CyberGym | 41.3 | - | 50.6 | 39.9* | 17.3* | - |
| SciCode | 48.7 | 52.1 | 49.5 | 56.1 | 38.9 | - |
| OJBench (cpp) | 57.4 | - | 54.6* | 68.5* | 54.7* | - |
| LiveCodeBench (v6) | 85.0 | - | 82.2* | 87.4* | 83.3 | - |
| Long Context | | | | | | |
| Longbench v2 | 61.0 | 54.5* | 64.4* | 68.2* | 59.8* | - |
| AA-LCR | 70.0 | 72.3* | 71.3* | 65.3* | 64.3* | - |
| Agentic Search | | | | | | |
| BrowseComp | 60.6 | 65.8 | 37.0 | 37.8 | 51.4 | - |
| BrowseComp (w/ctx manage) | 74.9 | 57.8 | 59.2 | 67.6 | - | - |
| BrowseComp (Agent Swarm) | 78.4 | - | - | - | - | - |
| WideSearch (iter-f1) | 72.7 | - | 76.2* | 57.0 | 32.5* | - |
| WideSearch (iter-f1 Agent Swarm) | 79.0 | - | - | - | - | - |
| DeepSearchQA | 77.1 | 71.3* | 76.1* | 63.2* | 60.9* | - |
| FinSearchCompT2&T3 | 67.8 | - | 66.2* | 49.9 | 59.1* | - |
| Seal-0 | 57.4 | 45.0 | 47.7* | 45.5* | 49.5* | - |
Inference
Acceleration Engine: vLLM
Test Hardware: H200
Inference Modes
- Thinking Mode: Includes reasoning traces in `reasoning_content` in the response. Recommended temperature=1.0.
- Instant Mode: Direct responses without reasoning traces. Recommended temperature=0.6.
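As a sketch of how these recommendations translate into client code, the snippet below maps each mode to its suggested sampling settings and disables thinking for Instant mode via the vLLM/SGLang `chat_template_kwargs` form described in the Model Usage section (the official API uses `extra_body={'thinking': {'type': 'disabled'}}` instead). The base URL, API key, and model name are placeholders, not values from this card.

```python
import os
import openai

# Recommended sampling settings from this card; they are suggestions, not enforced defaults.
MODE_PARAMS = {
    'thinking': {'temperature': 1.0, 'top_p': 0.95},
    'instant': {'temperature': 0.6, 'top_p': 0.95},
}

# Placeholder endpoint and credentials: point these at your own vLLM/SGLang deployment.
client = openai.OpenAI(
    base_url=os.environ.get('KIMI_BASE_URL', 'http://localhost:8000/v1'),
    api_key=os.environ.get('KIMI_API_KEY', 'EMPTY'),
)

def ask(prompt: str, mode: str = 'thinking') -> str:
    # vLLM/SGLang deployments switch modes through chat_template_kwargs;
    # the official API uses extra_body={'thinking': {'type': 'disabled'}} instead.
    extra = {} if mode == 'thinking' else {'chat_template_kwargs': {'thinking': False}}
    response = client.chat.completions.create(
        model=os.environ.get('KIMI_MODEL_NAME', 'moonshotai/Kimi-K2.5'),  # assumed served model name
        messages=[{'role': 'user', 'content': prompt}],
        max_tokens=1024,
        extra_body=extra,
        **MODE_PARAMS[mode],
    )
    return response.choices[0].message.content
```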
Quantization
The model employs native INT4 weight-only quantization (group size 32, compressed-tensors format), optimized for the Hopper architecture.
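To make the group-size notion concrete, here is a rough, illustrative sketch of symmetric group-wise INT4 weight quantization with groups of 32 along the last weight dimension; the released checkpoints' exact scheme, scale storage, and packing follow the compressed-tensors format and may differ from this toy version.

```python
import torch

GROUP_SIZE = 32  # group size quoted in this card; the grouping axis here is an assumption

def quantize_int4_groupwise(w: torch.Tensor):
    """Symmetric per-group INT4 quantization of a 2-D weight matrix (illustrative only)."""
    out_features, in_features = w.shape
    groups = w.reshape(out_features, in_features // GROUP_SIZE, GROUP_SIZE)
    scales = (groups.abs().amax(dim=-1, keepdim=True) / 7.0).clamp(min=1e-8)  # int4 range is [-8, 7]
    q = torch.clamp(torch.round(groups / scales), -8, 7).to(torch.int8)       # packed as int4 in practice
    return q, scales

def dequantize(q: torch.Tensor, scales: torch.Tensor, shape) -> torch.Tensor:
    return (q.float() * scales).reshape(shape)

w = torch.randn(8, 64)                       # toy weight: two groups of 32 per output row
q, scales = quantize_int4_groupwise(w)
w_hat = dequantize(q, scales, w.shape)
print((w - w_hat).abs().max())               # reconstruction error introduced by 4-bit weights
```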
Model Usage
The usage demos below demonstrate how to call our official API.
For third-party APIs deployed with vLLM or SGLang, please note that:
> [!Note]
> - Chat with video content is an experimental feature and is only supported in our official API for now.
> - The recommended `temperature` is `1.0` for Thinking mode and `0.6` for Instant mode.
> - The recommended `top_p` is `0.95`.
> - To use Instant mode, you need to pass `{'chat_template_kwargs': {"thinking": False}}` in `extra_body`.
Chat Completion
This is a simple chat completion script that shows how to call the K2.5 API in Thinking and Instant modes.
```python
import openai

def simple_chat(client: openai.OpenAI, model_name: str):
    messages = [
        {'role': 'system', 'content': 'You are Kimi, an AI assistant created by Moonshot AI.'},
        {
            'role': 'user',
            'content': [
                {'type': 'text', 'text': 'which one is bigger, 9.11 or 9.9? think carefully.'}
            ],
        },
    ]
    response = client.chat.completions.create(
        model=model_name, messages=messages, stream=False, max_tokens=4096
    )
    print('===== Below is reasoning_content in Thinking Mode ======')
    print(f'reasoning content: {response.choices[0].message.reasoning_content}')
    print('===== Below is response in Thinking Mode ======')
    print(f'response: {response.choices[0].message.content}')

    # To use Instant mode, pass {"thinking": {"type": "disabled"}}
    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        stream=False,
        max_tokens=4096,
        extra_body={'thinking': {'type': 'disabled'}},  # this is for the official API
        # extra_body={'chat_template_kwargs': {'thinking': False}},  # this is for vLLM/SGLang
    )
    print('===== Below is response in Instant Mode ======')
    print(f'response: {response.choices[0].message.content}')
```
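A minimal driver for the function above might look like the following; the base URL, API key, and model name are placeholders for your own deployment or the official API, not values taken from this card.

```python
import os
import openai

if __name__ == '__main__':
    # Placeholder endpoint and credentials: adjust for the official API or a vLLM/SGLang deployment.
    client = openai.OpenAI(
        base_url=os.environ.get('KIMI_BASE_URL', 'http://localhost:8000/v1'),
        api_key=os.environ.get('KIMI_API_KEY', 'EMPTY'),
    )
    simple_chat(client, os.environ.get('KIMI_MODEL_NAME', 'moonshotai/Kimi-K2.5'))
```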
Chat Completion with visual content
K2.5 supports Image and Video input.
The following example demonstrates how to call the K2.5 API with image input:
```python
import openai
import base64
import requests

def chat_with_image(client: openai.OpenAI, model_name: str):
    # Use the raw file URL (/resolve/) so the request returns image bytes rather than an HTML page.
    url = 'https://huggingface.co/moonshotai/Kimi-K2.5/resolve/main/figures/kimi-logo.png'
    image_base64 = base64.b64encode(requests.get(url).content).decode()
    messages = [
        {
            'role': 'user',
            'content': [
                {'type': 'text', 'text': 'Describe this image in detail.'},
                {
                    'type': 'image_url',
                    'image_url': {'url': f'data:image/png;base64,{image_base64}'},
                },
            ],
        }
    ]
    response = client.chat.completions.create(
        model=model_name, messages=messages, stream=False, max_tokens=8192
    )
    print('===== Below is reasoning_content in Thinking Mode ======')
    print(f'reasoning content: {response.choices[0].message.reasoning_content}')
    print('===== Below is response in Thinking Mode ======')
    print(f'response: {response.choices[0].message.content}')

    # Instant mode is also supported by passing {"thinking": {"type": "disabled"}}
    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        stream=False,
        max_tokens=4096,
        extra_body={'thinking': {'type': 'disabled'}},  # this is for the official API
        # extra_body={'chat_template_kwargs': {'thinking': False}},  # this is for vLLM/SGLang
    )
    print('===== Below is response in Instant Mode ======')
    print(f'response: {response.choices[0].message.content}')
    return response.choices[0].message.content
```
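If the image lives on disk rather than at a URL, the same data-URI pattern applies; the helper below is a small, generic variation on the example above (the file path and MIME handling are illustrative, not part of the official demo).

```python
import base64
import mimetypes

def local_image_content(path: str) -> dict:
    """Build an OpenAI-style image_url content part from a local image file."""
    mime = mimetypes.guess_type(path)[0] or 'image/png'
    with open(path, 'rb') as f:
        image_base64 = base64.b64encode(f.read()).decode()
    return {'type': 'image_url', 'image_url': {'url': f'data:{mime};base64,{image_base64}'}}

# Example: use this part in place of the URL-based image in the messages above.
# messages[0]['content'].append(local_image_content('my_screenshot.png'))
```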
Interleaved Thinking and Multi-Step Tool Call
K2.5 shares the same Interleaved Thinking and Multi-Step Tool Call design as K2 Thinking. For usage examples, please refer to the K2 Thinking documentation.
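As a rough, self-contained sketch of what a multi-step tool-call loop looks like against an OpenAI-compatible endpoint, the code below registers a single toy tool and feeds tool results back to the model until it stops requesting calls. The tool, model name, and endpoint are placeholders, and the officially supported pattern, including how `reasoning_content` is carried across turns, is the one in the K2 Thinking documentation.

```python
import json
import os
import openai

client = openai.OpenAI(base_url=os.environ.get('KIMI_BASE_URL', 'http://localhost:8000/v1'),
                       api_key=os.environ.get('KIMI_API_KEY', 'EMPTY'))
MODEL = os.environ.get('KIMI_MODEL_NAME', 'moonshotai/Kimi-K2.5')  # placeholder model name

# One toy tool; a real agent would expose search, code execution, and similar capabilities.
TOOLS = [{
    'type': 'function',
    'function': {
        'name': 'get_weather',
        'description': 'Get the current weather for a city.',
        'parameters': {
            'type': 'object',
            'properties': {'city': {'type': 'string'}},
            'required': ['city'],
        },
    },
}]

def get_weather(city: str) -> str:
    return f'Sunny and 24C in {city}'           # stubbed result for the sketch

messages = [{'role': 'user', 'content': "What's the weather in Paris right now?"}]

for _ in range(8):                              # bound the number of tool-call rounds
    response = client.chat.completions.create(
        model=MODEL, messages=messages, tools=TOOLS, max_tokens=2048
    )
    msg = response.choices[0].message
    messages.append(msg)                        # keep the assistant turn (and its tool calls) in history
    if not msg.tool_calls:                      # no further tool requests: final answer reached
        print(msg.content)
        break
    for call in msg.tool_calls:                 # run each requested tool and return its result
        args = json.loads(call.function.arguments)
        messages.append({
            'role': 'tool',
            'tool_call_id': call.id,
            'content': get_weather(**args),
        })
```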
Known Limitations
- The model is trained and optimized for the Hopper architecture; Blackwell support is a separate NVIDIA development effort
- The model ships with native INT4 weight-only quantization
- Video input is experimental
Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please make sure you have proper rights and permissions for all input image content; if image includes people, personal health information, or intellectual property, the image generated will not blur or maintain proportions of image subjects included.
Users are responsible for model inputs and outputs. Users are responsible for ensuring safe integration of this model, including implementing guardrails as well as other safety mechanisms, prior to deployment.
Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here.
