mistralai / mistral-large-3-675b-instruct-2512

Mistral Large 3 675B Instruct 2512

Description

Mistral Large 3 675B Instruct 2512 is a state-of-the-art general-purpose multimodal granular Mixture-of-Experts model with 41B active parameters and 675B total parameters, trained from the ground up on 3,000 NVIDIA H200 GPUs. This FP8-precision, instruct post-trained version is fine-tuned for instruction following, making it ideal for chat, agentic, and instruction-based use cases. Designed for reliability and long-context comprehension, it is engineered for production-grade assistants, retrieval-augmented systems, scientific workloads, and complex enterprise workflows.

This model is ready for commercial/non-commercial use.

Third-Party Community Consideration:

This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the link to the Non-NVIDIA Mistral Large 3 675B Instruct 2512 Model Card.

License and Terms of Use:

GOVERNING TERMS: This trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Community Model License. Additional Information: Apache License Version 2.0.

Deployment Geography:

Global

Use Case:

Designed for enterprise-grade applications including long-document understanding, powerful daily-driver AI assistants, state-of-the-art agentic and tool-use capabilities, enterprise knowledge work, and general coding assistance. Engineered for production-grade assistants, retrieval-augmented systems, scientific workloads, and complex enterprise workflows with powerful long-context performance and stable cross-domain behavior.

Release Date:

Build.NVIDIA.com: 12/2025 via link
Hugging Face: 12/2025 via link

Reference(s):

Model Architecture:

Architecture Type: Transformer
Network Architecture: Granular Mixture-of-Experts (MoE) with Vision Encoder (673B Language Model + 2.5B Vision Encoder)
Total Parameters: 675B
Active Parameters: 41B (39B language model active parameters + 2.5B vision encoder)
Base Model: mistralai/Mistral-Large-3-675B-Base-2512

Input:

Input Types: Image, Text
Input Formats: Red, Green, Blue (RGB), String
Input Parameters: Two Dimensional (2D), One Dimensional (1D)
Other Input Properties: Supports multimodal input with vision capabilities for image analysis. Images should maintain an aspect ratio close to 1:1 (width-to-height) for optimal performance. Text inputs support multilingual content (English, French, Spanish, German, Italian, Portuguese, Dutch, Chinese, Japanese, Korean, Arabic). A recommended system prompt configuration is available in the repository. Supports tool/function calling; the tool set should be kept well-defined and limited.
Input Context Length (ISL): 262,144 (256k)

Output:

Output Types: Text
Output Format: String
Output Parameters: One Dimensional (1D)
Other Output Properties: Supports native function calling and JSON output formatting. Best results are achieved with a temperature below 0.1 for daily-driver and production environments. Strong system prompt adherence. Best-in-class agentic capabilities with tool use.
Output Context Length (OSL): Undisclosed

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

Runtime Engines:

  • vLLM: Latest version (recommended, install from nightly wheels)
  • Transformers: Not yet available (community contribution welcome)

Supported Hardware:

  • NVIDIA Ampere: A100 (single node, NVFP4)
  • NVIDIA Blackwell: B200 (single node, FP8)
  • NVIDIA Hopper: H100 (single node, NVFP4), H200 (single node, FP8)

Operating Systems: Linux

Additional Testing Statement:
The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

Model Version(s)

v1.0 (December 2025)

Training, Testing, and Evaluation Datasets:

Training Dataset

Data Modality: Undisclosed
Training Data Collection: Undisclosed
Training Labeling: Undisclosed
Training Properties: Undisclosed

Testing Dataset

Testing Data Collection: Undisclosed
Testing Labeling: Undisclosed
Testing Properties: Undisclosed

Evaluation Dataset

Evaluation Benchmark Score: Benchmark results are provided in the Mistral Large 3 675B Instruct 2512 Model Card, comparing Mistral Large 3 675B to similarly sized models across text and vision tasks. Specific scores vary by benchmark.

View Detailed Benchmark Results

Base Model Performance

| Metric | Medium 3 | ML3 (675B params) | Deepseek-3.1 (671B) | Kimi-K2 (1.2T params) |
|--------|----------|-------------------|---------------------|-----------------------|
| MMMLU (8-lang average) | 81.74 | **85.46** | 84.22 | 83.45 |
| GPQA-Diamond 5-shot (no CoT) | 39.39 | **43.94** | 41.9 | 35.6 |
| SimpleQA Exact match | 15.07 | 23.79 | 19.69 | **26.02** |
| AMC | 32.8 | 52 | 46.4 | **54.4** |
| LiveCodeBench (no CoT) | 29.29 | 34.41 | 35.63 | **40.19** |

Instruct Model Performance

| Metric | ML3 | DS3.1 | ML3 | Kimi-K2 |
|--------|-----|-------|-----|---------|
| General Prompts Surge | **55** | 47 | **55** | 45 |
| Multilingual Prompts Surge | **60** | 43 | **60** | 40 |

Note: Bold values indicate best performance for each metric. Benchmark results demonstrate competitive performance across language understanding, reasoning, coding, and multilingual tasks.

Evaluation Data Collection: Automated
Evaluation Labeling: Automated
Evaluation Properties: Standard industry benchmarks for text and vision tasks. Complete benchmark results available in the source model card linked above.

Inference

Acceleration Engine: Other (vLLM with mistral-common tokenizer)
Test Hardware: Deployable on-premises on a single node using FP8 quantization (NVIDIA B200, H200) or NVFP4 quantization (NVIDIA H100, A100). Full BF16 deployment requires multi-node configuration.

Additional Details

Known Limitations

  • Not a dedicated reasoning model: Dedicated reasoning models can outperform Mistral Large 3 in strict reasoning use cases.
  • Behind vision-first models in multimodal tasks: Mistral Large 3 can lag behind models optimized specifically for vision tasks and use cases.
  • Complex deployment: Due to its large size (675B parameters) and architecture, the model can be challenging to deploy efficiently with constrained resources or at scale.

Recommended Deployment Settings

Mistral recommends deploying Large 3 in a client-server configuration with the following best practices:

  • System Prompt: Define a clear environment and use case, including guidance on how to effectively leverage tools in agentic systems.
  • Sampling Parameters: Use a temperature below 0.1 for daily-driver and production environments. Higher temperatures may be explored for creative use cases - developers are encouraged to experiment with alternative settings.
  • Tools: Keep the set of tools well-defined and limit their number to the minimum required for the use case. Avoid overloading the model with an excessive number of tools.
  • Vision: When deploying with vision capabilities, maintain an aspect ratio close to 1:1 (width-to-height) for images. Avoid overly thin or wide images; crop them as needed to ensure optimal performance (a cropping sketch follows this list).
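
As a rough illustration of the cropping guidance above, the Pillow-based helper below center-crops overly thin or wide images toward 1:1. It is a minimal sketch; the 2:1 cutoff is an illustrative assumption, not a Mistral-specified threshold.

from PIL import Image

def crop_toward_square(img: Image.Image, max_ratio: float = 2.0) -> Image.Image:
    """Center-crop an image so its width-to-height ratio stays within max_ratio."""
    w, h = img.size
    if w / h > max_ratio:  # too wide: crop the width around the center
        new_w = int(h * max_ratio)
        left = (w - new_w) // 2
        return img.crop((left, 0, left + new_w, h))
    if h / w > max_ratio:  # too tall: crop the height around the center
        new_h = int(w * max_ratio)
        top = (h - new_h) // 2
        return img.crop((0, top, w, top + new_h))
    return img  # already close enough to 1:1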

Deployment Options

The model supports two quantization formats for single-node deployment:

  • FP8 Quantization: Optimized for NVIDIA B200 and H200 GPUs. Delivers maximum throughput for production workloads with full 256k context window support. The Mistral Large 3 Instruct FP8 format can be used on one 8xH200 node. We recommend using this format if you plan to fine-tune, as it can be more precise than NVFP4 in some situations.
  • NVFP4 Quantization: Enables deployment on NVIDIA H100 and A100 GPUs. Provides efficient inference with reduced memory footprint on more widely available architectures.

Usage

The model can be used with the following frameworks:

Note: We sadly didn't have enough time to add Mistral Large 3 to transformers, but we would be very happy to see a community contribution via a PR to huggingface/transformers.

Note 1: We recommend using a relatively low temperature, such as temperature=0.15.

Note 2: Make sure to add a system prompt to the model to best tailor it to your needs. If you want to use the model as a general assistant, we recommend using the one provided in the SYSTEM_PROMPT.txt file.

vLLM (recommended)

We recommend using this model with vLLM.

Installation

Make sure to install the most recent vLLM:

uv pip install -U vllm \
    --torch-backend=auto \
    --extra-index-url https://wheels.vllm.ai/nightly

Doing so should automatically install mistral_common >= 1.8.6.

To check:

python -c "import mistral_common; print(mistral_common.__version__)"

You can also use a ready-to-go Docker image, for example one from Docker Hub.
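
As a minimal sketch, assuming the official vllm/vllm-openai image and an 8-GPU node (adjust the image tag, GPU count, and cache mount to your environment):

docker run --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -p 8000:8000 \
    vllm/vllm-openai:latest \
    --model mistralai/Mistral-Large-3-675B-Instruct-2512 \
    --tensor-parallel-size 8 \
    --tokenizer-mode mistral --config-format mistral --load-format mistral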

Serve

We recommend that you use Mistral Large 3 675B in a server/client setting.

  1. Spin up a server (FP8 quantization for single-node B200/H200 deployment):

Simple

A simple launch command is:

vllm serve mistralai/Mistral-Large-3-675B-Instruct-2512 \
  --tensor-parallel-size 8 \
  --tokenizer-mode mistral --config-format mistral --load-format mistral \
  --enable-auto-tool-choice --tool-call-parser mistral
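
Once the server is up, you can sanity-check it by listing the served models through the OpenAI-compatible API:

curl http://localhost:8000/v1/models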

Key parameter notes:

  • enable-auto-tool-choice: Required when enabling tool usage.
  • tool-call-parser mistral: Required when enabling tool usage.

Additional flags:

  • You can set --max-model-len to save memory. By default it is 262,144 tokens, which is quite large and unnecessary for most scenarios.
  • You can set --max-num-batched-tokens to balance throughput and latency; higher values increase throughput at the cost of latency. An example launch combining these flags follows.
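
For example, a memory-conscious launch might look like the following; the 65,536-token context and 8,192-token batch limit are illustrative values, not official recommendations:

vllm serve mistralai/Mistral-Large-3-675B-Instruct-2512 \
  --tensor-parallel-size 8 \
  --tokenizer-mode mistral --config-format mistral --load-format mistral \
  --enable-auto-tool-choice --tool-call-parser mistral \
  --max-model-len 65536 \
  --max-num-batched-tokens 8192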

Accelerated with speculative decoding

For maximum performance we recommend serving the checkpoint with its customized draft model Mistral-Large-3-675B-Instruct-2512-Eagle:

vllm serve mistralai/Mistral-Large-3-675B-Instruct-2512 \
  --tensor-parallel-size 8 \
  --load-format mistral \
  --tokenizer-mode mistral \
  --config-format mistral \
  --enable-auto-tool-choice \
  --tool-call-parser mistral \
  --limit-mm-per-prompt '{"image": 10}' \
  --speculative_config '{
    "model": "mistralai/Mistral-Large-3-675B-Instruct-2512-Eagle",
    "num_speculative_tokens": 3,
    "method": "eagle",
    "max_model_len": "16384"
  }'

For more information on the draft model, please have a look at Mistral-Large-3-675B-Instruct-2512-Eagle.

Note: Mistral Large 3 675B is optimized for single-node deployment using FP8 quantization (NVIDIA B200, H200) or NVFP4 quantization (NVIDIA H100, A100). Full BF16 requires multi-node setup.

  2. To query the server, you can use a simple Python snippet. See the following examples.

Vision reasoning

Leverage the vision capabilities of Mistral Large 3 675B to analyze images and provide insights:

Python snippet
from datetime import datetime, timedelta

from openai import OpenAI
from huggingface_hub import hf_hub_download

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

TEMP = 0.15
MAX_TOK = 262144

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id


def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    today = datetime.today().strftime("%Y-%m-%d")
    yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d")
    model_name = repo_id.split("/")[-1]
    return system_prompt.format(name=model_name, today=today, yesterday=yesterday)


model_id = "mistralai/Mistral-Large-3-675B-Instruct-2512"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
image_url = "https://static.wikia.nocookie.net/essentialsdocs/images/7/70/Battle.png/revision/latest?cb=20220523172438"

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What action do you think I should take in this situation? List all the possible actions and explain why you think they are good or bad.",
            },
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    },
]


response = client.chat.completions.create(
    model=model,
    messages=messages,
    temperature=TEMP,
    max_tokens=MAX_TOK,
)

print(response.choices[0].message.content)

Function calling

Mistral Large 3 675B offers best-in-class agentic capabilities with native function calling:

Python snippet
import json
from openai import OpenAI
from huggingface_hub import hf_hub_download

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

TEMP = 0.15
MAX_TOK = 262144

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id


def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    return system_prompt


model_id = "mistralai/Mistral-Large-3-675B-Instruct-2512"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")

image_url = "https://math-coaching.com/img/fiche/46/expressions-mathematiques.jpg"


def my_calculator(expression: str) -> str:
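    # NOTE: eval() executes arbitrary Python; fine for this demo, but unsafe for untrusted input.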
    return str(eval(expression))


tools = [
    {
        "type": "function",
        "function": {
            "name": "my_calculator",
            "description": "A calculator that can evaluate a mathematical equation and compute its results.",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "The mathematical expression to evaluate.",
                    },
                },
                "required": ["expression"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "rewrite",
            "description": "Rewrite a given text for improved clarity",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {
                        "type": "string",
                        "description": "The input text to rewrite",
                    }
                },
            },
        },
    },
]

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Thanks to your calculator, compute the results for the equations that involve numbers displayed in the image.",
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": image_url,
                },
            },
        ],
    },
]

response = client.chat.completions.create(
    model=model,
    messages=messages,
    temperature=TEMP,
    max_tokens=MAX_TOK,
    tools=tools,
    tool_choice="auto",
)

tool_calls = response.choices[0].message.tool_calls

results = []
for tool_call in tool_calls:
    function_name = tool_call.function.name
    function_args = tool_call.function.arguments
    if function_name == "my_calculator":
        result = my_calculator(**json.loads(function_args))
        results.append(result)

messages.append({"role": "assistant", "tool_calls": tool_calls})
for tool_call, result in zip(tool_calls, results):
    messages.append(
        {
            "role": "tool",
            "tool_call_id": tool_call.id,
            "name": tool_call.function.name,
            "content": result,
        }
    )


response = client.chat.completions.create(
    model=model,
    messages=messages,
    temperature=TEMP,
    max_tokens=MAX_TOK,
)

print(response.choices[0].message.content)

Instruction following

Mistral Large 3 can follow your instructions down to the letter.

Python snippet
from openai import OpenAI
from huggingface_hub import hf_hub_download

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

TEMP = 0.15
MAX_TOK = 262144

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id


def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    return system_prompt


model_id = "mistralai/Mistral-Large-3-675B-Instruct-2512"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": "Write me a sentence where every word starts with the next letter in the alphabet - start with 'a' and end with 'z'.",
    },
]

response = client.chat.completions.create(
    model=model,
    messages=messages,
    temperature=TEMP,
    max_tokens=MAX_TOK,
)

assistant_message = response.choices[0].message.content
print(assistant_message)
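
JSON output

The model also supports JSON output formatting. The snippet below is a minimal sketch that reuses the client, model, TEMP, and SYSTEM_PROMPT set up above and requests a JSON object through the OpenAI-compatible response_format parameter, which vLLM's server supports:

Python snippet
response = client.chat.completions.create(
    model=model,
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {
            "role": "user",
            "content": "List three European capitals as a JSON object with a 'capitals' array of city/country entries.",
        },
    ],
    temperature=TEMP,
    response_format={"type": "json_object"},
)

# The response content is a JSON string that can be parsed with json.loads.
print(response.choices[0].message.content)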

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please make sure you have the proper rights and permissions for all input image content; if images include people, personal health information, or intellectual property, be aware that the model will not blur or otherwise anonymize the image subjects included.

Please report model quality, risk, security vulnerabilities, or NVIDIA AI concerns here.
