mistralai / mistral-medium-3.5-128b

Mistral Medium 3.5 128B

Description

Mistral Medium 3.5 is Mistral AI's first flagship merged model: a dense 128B model with a 256k context window that handles instruction-following, reasoning, and coding in a single set of weights. Mistral Medium 3.5 replaces its predecessor Mistral Medium 3.1 and Magistral in Le Chat, and replaces Devstral 2 in our coding agent Vibe. Concretely, expect better performance on instruct, reasoning, and coding tasks from one unified model compared with our previously released models.

Reasoning effort is configurable per request, so the same model can produce a quick chat reply or work through a complex agentic run. Mistral AI trained the vision encoder from scratch to handle variable image sizes and aspect ratios.

This model is ready for commercial/non-commercial use.

Third-Party Community Consideration:

This model is not owned or developed by NVIDIA. This model has been developed and built to a third-party's requirements for this application and use case; see link to Non-NVIDIA Mistral Medium 3.5 128B Model Card.

License and Terms of Use:

GOVERNING TERMS: The trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the Modified MIT license.

Deployment Geography:

Global

Use Case:

Designed for advanced chat, coding assistance, reasoning-intensive tasks, multimodal image understanding, and agentic workflows that benefit from function calling, JSON output, and long-context processing.

Release Date:

Build.NVIDIA.com: 04/29/2026 via link

Hugging Face: 04/29/2026 via link

Reference(s):

Model Architecture:

Architecture Type: Transformer
Network Architecture: Mistral (dense 128B language model with vision encoder)
Total Parameters: 128B

Input:

Input Types: Text, Image
Input Formats: Text: String; Image: Red, Green, Blue (RGB)
Input Parameters: Text: One-Dimensional (1D); Image: Two-Dimensional (2D)
Other Input Properties: Supports multilingual text input in English, French, Spanish, German, Italian, Portuguese, Dutch, Chinese, Japanese, Korean, and Arabic, plus image input with variable image sizes and aspect ratios.
Input Context Length (ISL): 262,144 (256k)

Output:

Output Types: Text
Output Format: String
Output Parameters: One-Dimensional (1D)
Other Output Properties: Supports native function calling, JSON output, configurable reasoning effort for quick replies or deeper reasoning runs, and strong system prompt adherence.
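As a sketch of what "native function calling" and "JSON output" look like at the request level, the payload below follows the OpenAI-style `tools` convention that many serving stacks accept. The `get_weather` tool, the `response_format` field, and the model id are illustrative assumptions, not part of this model card.

```python
# Sketch: a function-calling request body in the OpenAI-style "tools" shape.
# All names below (tool schema, model id) are illustrative assumptions.

tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

payload = {
    "model": "mistralai/mistral-medium-3.5-128b",  # assumed model id
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": [tool],
    "tool_choice": "auto",
    # For structured JSON output instead of tool calls, many servers accept:
    # "response_format": {"type": "json_object"},
}
```

A server that supports native function calling would respond with a `tool_calls` entry naming `get_weather` and its arguments rather than plain text.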

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

Runtime Engines

  • vLLM
  • SGLang
  • Transformers
  • llama.cpp
  • LM Studio

Supported Hardware:

  • NVIDIA Ampere: A100
  • NVIDIA Blackwell: B100, B200, GB200
  • NVIDIA Hopper: H100, H200

Operating Systems: Linux

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

Model Version(s)

Mistral Medium 3.5 128B v3.5

Training, Testing, and Evaluation Datasets:

Training Dataset

Data Modality: Image, Text
Image Training Data Size: Undisclosed
Text Training Data Size: Undisclosed
Training Data Collection: Undisclosed
Training Labeling: Undisclosed
Training Properties: Undisclosed

Testing Dataset

Testing Data Collection: Undisclosed
Testing Labeling: Undisclosed
Testing Properties: Undisclosed

Evaluation Dataset

Evaluation Data Collection: Undisclosed
Evaluation Labeling: Undisclosed
Evaluation Properties: Undisclosed

Inference

Acceleration Engine: vLLM
Test Hardware: NVIDIA H100

Additional Details

Recommended Deployment Settings

Use reasoning_effort="high" for complex prompts and agentic coding tasks. Recommended temperature settings are 0.7 for reasoning_effort="high" and 0.0 to 0.7 for reasoning_effort="none" depending on the task.
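The two presets above can be captured in a small helper that builds the request body. This assumes an OpenAI-compatible Chat Completions endpoint; the `reasoning_effort` field name follows the text above and may differ in your serving stack, and the model id is an assumption.

```python
# Sketch: request payloads for the two recommended presets.
# Assumes an OpenAI-compatible endpoint and an assumed model id.

def build_request(prompt: str, effort: str = "none") -> dict:
    """Return a chat-completions payload using the recommended settings."""
    # High effort pairs with temperature 0.7; with effort "none",
    # 0.0-0.7 is recommended depending on the task (0.0 used here).
    temperature = 0.7 if effort == "high" else 0.0
    return {
        "model": "mistralai/mistral-medium-3.5-128b",  # assumed model id
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,
        "temperature": temperature,
    }

quick = build_request("Summarize this diff.", effort="none")
deep = build_request("Refactor the module and explain trade-offs.", effort="high")
```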

Deployment Options

Supported deployment options include vLLM, llama.cpp, LM Studio, SGLang, and Transformers.
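For the vLLM path, a minimal launch sketch is shown below. The model id and flag values are assumptions; check your vLLM version's documentation for the exact flag names and the parallelism your hardware requires.

```shell
# Sketch: serving via vLLM's OpenAI-compatible server (assumed model id).
# --max-model-len matches the 262,144-token context window above.
vllm serve mistralai/mistral-medium-3.5-128b \
  --max-model-len 262144 \
  --tensor-parallel-size 8
```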

Mistral Vibe Integration

Mistral Vibe support includes a local vLLM configuration path with a dedicated system prompt, a local model alias, and a configurable local server endpoint.

For more information, please refer to the Mistral Vibe README.

Vision and Agentic Capabilities

The model supports system prompts, multimodal image analysis, native function calling, JSON output, and agentic workflows. The vision encoder is designed to handle variable image sizes and aspect ratios.
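A multimodal request combines a text part and an image part in one user message. The sketch below uses the OpenAI-style `image_url` content part with a base64 data URI; the field names are a common convention, not confirmed by this card, and the image bytes are a placeholder.

```python
import base64

# Sketch: one chat message carrying text plus an image as a data URI.
# Field names follow the OpenAI-style content-parts convention (assumed).

fake_png = base64.b64encode(b"\x89PNG placeholder bytes").decode()

message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this chart."},
        {"type": "image_url",
         "image_url": {"url": f"data:image/png;base64,{fake_png}"}},
    ],
}
```

Because the vision encoder handles variable image sizes and aspect ratios, no client-side resizing or padding is required before sending the image.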

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please make sure you have proper rights and permissions for all input image content; if an image includes people, personal health information, or intellectual property, the model processes it as provided and will not blur, redact, or otherwise alter the image subjects.

Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here.
