stepfun-ai / step-3-5-flash

Step 3.5 Flash

Description

Step 3.5 Flash is a sparse Mixture-of-Experts (MoE) large language model developed by StepFun, engineered to deliver frontier reasoning and agentic capabilities with exceptional efficiency. Built on 196.81B total parameters with only ~11B active per token, it achieves the reasoning depth of top-tier models while maintaining real-time responsiveness with 100-300 tok/s throughput (peaking at 350 tok/s for coding tasks).

This model is ready for commercial/non-commercial use.

Third-Party Community Consideration:

This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the link to the Non-NVIDIA Step 3.5 Flash Model Card.

License and Terms of Use:

GOVERNING TERMS: This trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Open Model License Agreement. Additional Information: Apache License, Version 2.0.

Deployment Geography:

Global

Use Case:

Developers and enterprises seeking a high-performance, open-weight LLM for coding assistants, deep-research agents, GUI automation, and complex multi-step reasoning tasks. The model is optimized for DGX Spark deployment with fast inference speeds and is particularly strong at tool calling and agentic applications.

Key Features:

  • Sparse MoE Efficiency: 196B parameters with only ~11B active per token, combining elite intelligence with 11B-class inference speed
  • MTP-3 Acceleration: 3-way Multi-Token Prediction enables 100-300 tok/s throughput, peaking at 350 tok/s for coding
  • Efficient Long Context: 256K context window using 3:1 Sliding Window Attention ratio for cost-efficient processing
  • Agentic Mastery: 74.4% on SWE-bench Verified, 51.0% on Terminal-Bench 2.0, 88.2 on τ²-Bench

Release Date:

Build.NVIDIA.com: 02/2026 via link

Hugging Face: 02/2026 via link

Reference(s):

Model Architecture:

Architecture Type: Transformer

Network Architecture: Mixture-of-Experts

Total Parameters: 196.81B (196B Backbone + 0.81B MTP Head)

Active Parameters: ~11B per token

Vocabulary Size: 128,896

Layers: 45

Hidden Size: 4,096

Experts: 288 routed experts + 1 shared expert (always active), Top-8 selection per token

Attention: 3:1 SWA ratio (three sliding-window layers per full-attention layer), window size 512
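The architecture numbers above can be sanity-checked with a short sketch. This is illustrative arithmetic, not the official implementation; in particular, the exact ordering of sliding-window vs. full-attention layers is an assumption based only on the stated 3:1 ratio.

```python
# Illustrative sketch: derive the attention-layer layout implied by a
# 3:1 SWA ratio over 45 layers, and the per-token expert count implied
# by Top-8 routing plus 1 always-active shared expert. The layer
# ordering is an assumption; only the ratio is stated in this card.
def attention_layout(num_layers=45, swa_per_full=3):
    # Repeat "three sliding-window layers, then one full-attention layer".
    return [
        "full" if (i + 1) % (swa_per_full + 1) == 0 else "swa"
        for i in range(num_layers)
    ]

layout = attention_layout()
experts_per_token = 8 + 1  # Top-8 routed experts + 1 shared expert

print(layout.count("swa"), layout.count("full"), experts_per_token)
# prints: 34 11 9
```

With 45 layers the 3:1 ratio cannot divide evenly, so one extra sliding-window layer remains after eleven full (3 SWA + 1 full) blocks.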

Input:

Input Types: Text

Input Formats: String

Input Parameters: One-Dimensional (1D)

Other Input Properties: Supports multi-turn conversations and tool-calling formats.

Input Context Length (ISL): 256,000

Output:

Output Types: Text

Output Format: String

Output Parameters: One-Dimensional (1D)

Other Output Properties: Generates coherent responses for coding, reasoning, and general text generation tasks.

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

Supported inference frameworks include vLLM, SGLang, llama.cpp, and Hugging Face Transformers.
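As one possible starting point, the model can be served through vLLM's OpenAI-compatible server. This is a command fragment under assumptions: the checkpoint id and flag set are not confirmed by this card, and flags vary by vLLM version.

```shell
# Hypothetical serving command; checkpoint id and flags are assumptions.
# --tensor-parallel-size 4 matches the 4-GPU test hardware listed below.
vllm serve stepfun-ai/step-3-5-flash --tensor-parallel-size 4
```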

Runtime Engines:

  • vLLM
  • SGLang
  • Transformers

Supported Hardware:

  • NVIDIA Ampere: A100, A10
  • NVIDIA Blackwell: B100, B200
  • NVIDIA Hopper: H100, H200

Preferred Operating Systems: Linux

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

Model Version(s)

Step 3.5 Flash v1.0

Training, Testing, and Evaluation Datasets:

Training Dataset

Data Modality: Text

Training Data Collection: Undisclosed

Training Labeling: Undisclosed

Training Properties: Undisclosed

Testing Dataset

Testing Data Collection: Undisclosed

Testing Labeling: Undisclosed

Testing Properties: Undisclosed

Evaluation Dataset

Evaluation Benchmark Score: Step 3.5 Flash achieves frontier-level performance across Agency, Reasoning, and Coding benchmarks. For more information see Detailed Benchmark Comparison Table below.

Evaluation Data Collection: Automated

Evaluation Labeling: Hybrid: Automated, Human

Evaluation Properties: Evaluated on industry-standard benchmarks for coding (SWE-bench Verified, LiveCodeBench-V6, Terminal-Bench 2.0), agentic capabilities (τ²-Bench, BrowseComp, GAIA, xbench-DeepSearch), and mathematical reasoning (AIME 2025, HMMT 2025, IMOAnswerBench).

Detailed Benchmark Comparison Table
| Benchmark | Step 3.5 Flash | DeepSeek V3.2 | Kimi K2 Thinking / K2.5 | GLM-4.7 | MiniMax M2.1 | MiMo-V2 Flash |
|---|---|---|---|---|---|---|
| # Activated Params | 11B | 37B | 32B | 32B | 10B | 15B |
| # Total Params (MoE) | 196B | 671B | 1T | 355B | 230B | 309B |
| Est. decoding cost (@ 128K context, Hopper GPU**) | 1.0x (100 tok/s, MTP-3, EP8) | 6.0x (33 tok/s, MTP-1, EP32) | 18.9x (33 tok/s, no MTP, EP32) | 18.9x (100 tok/s, MTP-3, EP8) | 3.9x (100 tok/s, MTP-3, EP8) | 1.2x (100 tok/s, MTP-3, EP8) |
| **Agency** | | | | | | |
| τ²-Bench | 88.2 | 80.3 | 74.3* / — | 87.4 | 80.2* | 80.3 |
| BrowseComp | 50.7 | 51.4 | 41.5* / 60.6 | 52.0 | 47.4 | 45.4 |
| BrowseComp (w/ Context Manager) | 69.0 | 67.6 | 60.2 / 74.9 | 67.5 | 62.0 | 58.3 |
| BrowseComp-ZH | 66.9 | 65.0 | 62.3 / 62.3* | 66.6 | 47.8* | 51.2* |
| BrowseComp-ZH (w/ Context Manager) | 73.7 | — | — / — | — | — | — |
| GAIA (no file) | 84.5 | 75.1* | 75.6 / 75.9 | 61.9* | 64.3* | 78.2* |
| xbench-DeepSearch (2025.05) | 83.7 | 78.0* | 76.0 / 76.7 | 72.0* | 68.7* | 69.3* |
| xbench-DeepSearch (2025.10) | 56.3 | 55.7* | — / 40+ | 52.3* | 43.0* | 44.0* |
| ResearchRubrics | 65.3 | 55.8* | 56.2 / 59.5 | 62.0* | 60.2* | 54.3* |
| **Reasoning** | | | | | | |
| AIME 2025 | 97.3 | 93.1 | 94.5 / 96.1 | 95.7 | 83.0 | 94.1 (95.1*) |
| HMMT 2025 (Feb.) | 98.4 | 92.5 | 89.4 / 95.4 | 97.1 | 71.0* | 84.4 (95.4*) |
| HMMT 2025 (Nov.) | 94.0 | 90.2 | 89.2* / — | 93.5 | 74.3* | 91.0* |
| IMOAnswerBench | 85.4 | 78.3 | 78.6 / 81.8 | 82.0 | 60.4* | 80.9* |
| **Coding** | | | | | | |
| LiveCodeBench-V6 | 86.4 | 83.3 | 83.1 / 85.0 | 84.9 | — | 80.6 (81.6*) |
| SWE-bench Verified | 74.4 | 73.1 | 71.3 / 76.8 | 73.8 | 74.0 | 73.4 |
| Terminal-Bench 2.0 | 51.0 | 46.4 | 35.7* / 50.8 | 41.0 | 47.9 | 38.5 |

Notes:

  1. "—" indicates the score is not publicly available or not tested.
  2. "*" indicates the original score was inaccessible or lower than our reproduced result, so we report our evaluation under the same test conditions as Step 3.5 Flash to ensure fair comparability.
  3. BrowseComp (with Context Manager): When the effective context length exceeds a predefined threshold, the agent resets the context and restarts the agent loop. By contrast, Kimi K2.5 and DeepSeek-V3.2 used a "discard-all" strategy.
  4. Decoding Cost: Estimates are based on a methodology similar to, but more accurate than, the approach described in arxiv.org/abs/2507.19427.

Inference

Acceleration Engine: vLLM

Test Hardware: 4x NVIDIA H100

Recommended Inference Settings:

  • Temperature: 1.0
  • Top-p: 0.95
  • Top-k: 40
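Through an OpenAI-compatible client (for example against a local vLLM endpoint), the recommended settings might be packaged as follows. The helper function is a hypothetical convenience, and passing `top_k` via an extra request body is a vLLM-specific extension, not part of the core OpenAI schema.

```python
# Hypothetical helper: bundle the recommended sampling settings for an
# OpenAI-compatible chat completion request.
def recommended_sampling():
    return {
        "temperature": 1.0,            # recommended temperature
        "top_p": 0.95,                 # recommended nucleus-sampling cutoff
        "extra_body": {"top_k": 40},   # vLLM-specific extension
    }

params = recommended_sampling()
```

With the official `openai` Python client, these keys can then be splatted into `client.chat.completions.create(model=..., messages=..., **params)`.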

Known Limitations

  1. Token Efficiency. Step 3.5 Flash achieves frontier-level agentic intelligence but currently relies on longer generation trajectories than Gemini 3.0 Pro to reach comparable quality.
  2. Efficient Universal Mastery. We aim to unify generalist versatility with deep domain expertise. To achieve this efficiently, we are advancing variants of on-policy distillation, allowing the model to internalize expert behaviors with higher sample efficiency.
  3. RL for More Agentic Tasks. While Step 3.5 Flash demonstrates competitive performance on academic agentic benchmarks, the next frontier of agentic AI necessitates the application of RL to intricate, expert-level tasks found in professional work, engineering, and research.
  4. Operational Scope and Constraints. Step 3.5 Flash is tailored for coding and work-centric tasks, but may experience reduced stability during distribution shifts. This typically occurs in highly specialized domains or long-horizon, multi-turn dialogues, where the model may exhibit repetitive reasoning, mixed-language outputs, or inconsistencies in time and identity awareness.

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report model quality, risk, security vulnerabilities, or NVIDIA AI Concerns here.
