
Kimi-K2.6

Description

Kimi-K2.6 is an open-source native multimodal agentic model developed by Moonshot AI. Built on a Mixture-of-Experts (MoE) architecture with 1 trillion total parameters (32B active), it delivers long-horizon coding capabilities across Rust, Go, Python, frontend, and DevOps workflows. The model supports agentic task orchestration scaling to 300 sub-agents executing up to 4,000 coordinated steps, and accepts multimodal inputs including text, images, and video via the MoonViT (400M) vision encoder.

This model is ready for commercial/non-commercial use.

Third-Party Community Consideration:

This model is not owned or developed by NVIDIA. This model has been developed and built to a third-party's requirements for this application and use case; see link to Non-NVIDIA Kimi-K2.6 Model Card.

License and Terms of Use:

GOVERNING TERMS: This trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Open Model Agreement. Additional Information: Modified MIT License (Kimi K2.6).

Deployment Geography:

Global

Use Case:

Kimi-K2.6 is designed for developers and researchers requiring advanced multimodal agentic AI capabilities. Primary use cases include long-horizon coding workflows (frontend, backend, DevOps, performance optimization), autonomous agent orchestration with proactive background task execution, visual reasoning with image and video inputs, and complex multi-step problem-solving requiring hundreds of sequential tool invocations.

Release Date:

build.nvidia.com: April 29, 2026 via link
Hugging Face: April 29, 2026 via link

Model Architecture:

Architecture Type: Transformer
Network Architecture: Mixture-of-Experts (MoE)
Total Parameters: 1T
Active Parameters: 32B
Layers: 61 (including 1 dense layer)
Number of Experts: 384
Selected Experts per Token: 8
Shared Experts: 1
Attention Mechanism: MLA (Multi-head Latent Attention)
Attention Hidden Dimension: 7168
MoE Hidden Dimension per Expert: 2048
Attention Heads: 64
Vocabulary Size: 160K
Context Length: 256K
Activation Function: SwiGLU
Vision Encoder: MoonViT (400M parameters)
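
To make the routing numbers above concrete, here is a minimal PyTorch sketch of top-k MoE routing with the listed counts (384 routed experts, 8 selected per token, 1 always-on shared expert). It is illustrative only, not Moonshot AI's implementation, and the hidden sizes are scaled down from the listed 7168/2048 so the sketch runs anywhere.

```python
# Illustrative top-k MoE routing (not Moonshot AI's code). Expert count and
# top-k match the table above; hidden sizes are reduced for a runnable demo.
import torch

NUM_EXPERTS, TOP_K = 384, 8
HIDDEN, EXPERT_FFN = 64, 32  # card values: 7168 and 2048

def make_expert():
    return torch.nn.Sequential(
        torch.nn.Linear(HIDDEN, EXPERT_FFN), torch.nn.SiLU(),
        torch.nn.Linear(EXPERT_FFN, HIDDEN))

router = torch.nn.Linear(HIDDEN, NUM_EXPERTS)
experts = torch.nn.ModuleList(make_expert() for _ in range(NUM_EXPERTS))
shared = make_expert()  # the single shared expert runs for every token

def moe_layer(tokens):                                 # tokens: (n, HIDDEN)
    gates = router(tokens).softmax(dim=-1)             # routing probabilities
    weights, idx = gates.topk(TOP_K, dim=-1)           # pick 8 experts per token
    weights = weights / weights.sum(-1, keepdim=True)  # renormalize over top-8
    out = shared(tokens)
    for t in range(tokens.size(0)):                    # naive per-token dispatch
        for w, e in zip(weights[t], idx[t]):
            out[t] = out[t] + w * experts[int(e)](tokens[t])
    return out

print(moe_layer(torch.randn(4, HIDDEN)).shape)  # -> torch.Size([4, 64])
```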

Input:

Input Types: Text, Image, Video
Input Formats: String, Image (JPEG/PNG), Video frames
Input Parameters: Text: One-Dimensional (1D); Image: Two-Dimensional (2D); Video: Three-Dimensional (3D)
Other Input Properties: Text is tokenized with a 160K-vocabulary tokenizer. Images and video frames are encoded via MoonViT (400M). Supports multi-turn conversations with system prompts, user messages, tool definitions in JSON schema format, and native tool-use orchestration.
Input Context Length (ISL): 256K tokens
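
As a concrete illustration of the multi-turn, multimodal message format described above, the sketch below assumes an OpenAI-compatible chat endpoint (which the listed vLLM runtime can expose); the base URL, model identifier, and image URL are placeholders.

```python
# Hypothetical multimodal request shape over an OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="moonshotai/Kimi-K2.6",  # placeholder identifier
    messages=[
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": [
            {"type": "text", "text": "Turn this mockup into an HTML page."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/mockup.png"}},
        ]},
    ],
)
print(response.choices[0].message.content)
```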

Output:

Output Types: Text
Output Format: String
Output Parameters: One Dimensional (1D)
Other Output Properties: Generated text can include structured tool call requests, agent coordination directives, and coding artifacts. Supports JSON-structured outputs for agentic workflows.
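
The sketch below shows how such structured tool-call output is typically requested and parsed over an OpenAI-compatible API; the endpoint, model identifier, and get_weather tool are illustrative placeholders, not part of this model card.

```python
# Hedged sketch of tool-use round-tripping over an OpenAI-compatible API.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",                 # hypothetical tool
        "description": "Look up current weather for a city.",
        "parameters": {                        # JSON schema definition
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="moonshotai/Kimi-K2.6",  # placeholder identifier
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)

# Tool-call requests come back as JSON-string arguments to parse and dispatch.
for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```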

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

Runtime Engine(s): vLLM

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Blackwell
  • NVIDIA Hopper

Operating Systems: Linux
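
A minimal offline-inference sketch with the listed vLLM runtime is shown below; the model identifier and sampling settings are assumptions, and a real deployment of a 1T-parameter MoE would additionally require parallelism configuration not shown here.

```python
# Minimal vLLM offline-inference sketch; identifiers are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(model="moonshotai/Kimi-K2.6", trust_remote_code=True)
params = SamplingParams(temperature=0.6, max_tokens=512)

outputs = llm.generate(["Write a Go function that reverses a slice."], params)
print(outputs[0].outputs[0].text)
```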

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

Model Version(s)

Kimi-K2.6 (2026)

Training, Testing, and Evaluation Datasets:

Training Dataset

Data Modality: Text, Image, Video
Text Training Data Size: Undisclosed
Training Data Collection: Undisclosed
Training Labeling: Undisclosed
Training Properties: Undisclosed

Testing Dataset

Testing Data Collection: Undisclosed
Testing Labeling: Undisclosed
Testing Properties: Undisclosed

Evaluation Dataset

Evaluation Benchmark Score:

Agentic
  HLE-Full w/ tools (Pass@1): 54.0%
  BrowseComp (Pass@1): 83.2%
  BrowseComp Agent Swarm (Pass@1): 86.3%
  SWE-Bench Pro (Resolved): 58.6%
  SWE-Bench Verified (Resolved): 80.2%
  SWE-Bench Multilingual (Resolved): 76.7%
  Terminal-Bench 2.0 (Acc): 66.7%
  OSWorld-Verified (Acc): 73.1%

Coding
  LiveCodeBench v6 (Pass@1): 89.6%

Reasoning & Knowledge
  AIME 2026 (Pass@1): 96.4%
  HMMT 2026 Feb (Pass@1): 92.7%
  GPQA Diamond (Pass@1): 90.5%
  IMO-AnswerBench (Pass@1): 86.0%

Vision
  MMMU-Pro: 79.4%
  MathVision: 87.4%
  CharXiv Reasoning Questions: 80.4%

Evaluation Data Collection: Automated
Evaluation Labeling: Human
Evaluation Properties: Evaluated on agentic task completion, coding, mathematical reasoning, and vision benchmarks.

Inference

Acceleration Engine(s): vLLM
Test Hardware: GB200x4

Additional Details

Key Capabilities

1. Long-Horizon Coding
Supports production-level coding tasks in Rust, Go, Python, front-end frameworks, DevOps pipelines, and performance optimization. Transforms natural language prompts and visual mockups into production-ready code.

2. Agentic Orchestration
Scales to 300 parallel sub-agents executing up to 4,000 coordinated steps. Supports 24/7 background autonomous task execution with proactive orchestration; a client-side sketch of the fan-out pattern follows this list.

3. Multimodal Input
Native support for text, images, and video inputs via MoonViT (400M vision encoder). Enables visual-to-code workflows and image-grounded reasoning.

4. Open Orchestration
Compatible with open agent frameworks. Supports function/tool calling with structured JSON schema definitions.
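
As a client-side illustration of the fan-out pattern behind the sub-agent orchestration described in capability 2, the sketch below dispatches tasks to parallel sub-agent calls through an OpenAI-compatible endpoint. It is a hypothetical pattern sketch, not Moonshot AI's orchestration stack; the endpoint, model name, and concurrency limit are placeholders.

```python
# Hypothetical client-side fan-out to parallel sub-agents.
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
limiter = asyncio.Semaphore(32)  # cap concurrent sub-agent calls

async def sub_agent(task: str) -> str:
    async with limiter:
        resp = await client.chat.completions.create(
            model="moonshotai/Kimi-K2.6",  # placeholder identifier
            messages=[{"role": "user", "content": task}],
        )
        return resp.choices[0].message.content

async def orchestrate(tasks: list[str]) -> list[str]:
    # Each task becomes one sub-agent turn; a real agent loop would iterate
    # over tool calls and feed results back for further coordinated steps.
    return await asyncio.gather(*(sub_agent(t) for t in tasks))

results = asyncio.run(orchestrate(["Audit module A", "Audit module B"]))
```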

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Users are responsible for model inputs and outputs. Users are responsible for ensuring safe integration of this model, including implementing guardrails as well as other safety mechanisms, prior to deployment.

Please make sure you have proper rights and permissions for all input image and video content; if an image or video includes people, personal health information, or intellectual property, the model does not blur, redact, or otherwise obscure that content when processing it.

Please report model quality, risk, security vulnerabilities, or NVIDIA AI Concerns here.
