Mistral-Small-3.1-24B-Instruct-2503 Overview
Model Overview
Description
Mistral Small 3.1 (2503) builds on Mistral Small 3 (2501), adding state-of-the-art vision understanding and extending long-context capabilities to 128k tokens without compromising text performance. With 24 billion parameters, the model achieves top-tier capabilities in both text and vision tasks. Key features:
- Vision understanding alongside text
- Multilingual support
- Agent-centric design with native function calling and JSON output
- Advanced reasoning
- 128k-token context window
- Strong adherence to system prompts
- Tekken tokenizer with a 131k vocabulary size
This model is ready for commercial and non-commercial use.
Multilingual Capabilities: English, French, German, Japanese, Korean, Chinese, and more.
Third-Party Community Consideration
This model is not owned or developed by NVIDIA. It has been developed by Mistral AI and built to a third-party’s requirements. For more details, see the Mistral-Small-3.1-24B-Instruct-2503 Model Card.
License and Terms of Use
GOVERNING TERMS: This trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Community Model License. Additional Information: Apache 2.0.
Deployment Geography
- Global
Use Cases
- Fast-response conversational agents
- Low-latency function calling
- Subject matter expertise via fine-tuning
- Local inference for hobbyists and organizations handling sensitive data
- Programming and mathematical reasoning
- Long document understanding
- Visual understanding
Release Date
- March 2025
Model Architecture
- Architecture Type: Transformer-based Language Model
- Network Architecture: Instruction-tuned, multimodal, Transformer-based
- Base Model: Mistral-Small-3.1-24B-Base-2503
- Model Parameters: 24 billion
Input
- Types: Text, Image
- Formats: Text: String. Image: Red, Green, Blue (RGB)
- Parameters: Image: Two-Dimensional (2D). Text: One-Dimensional (1D)
- Additional Properties: Images require a minimum resolution and pre-processing; the context window is limited to 128k tokens.
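The exact image input format depends on the serving stack. A common convention, assumed here rather than specified by this card, is an OpenAI-style multimodal chat message carrying the image as a base64 data URI:

```python
import base64

def image_to_data_uri(image_bytes: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a data URI for an OpenAI-style
    multimodal chat message. Confirm the expected format against
    the serving framework you actually use."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"

# Hypothetical message layout following the OpenAI chat-completions style:
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this chart."},
        {"type": "image_url",
         "image_url": {"url": image_to_data_uri(b"\x89PNG...")}},
    ],
}
```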
Output
- Types: Text
- Formats: String, JSON (function calling)
- Parameters: One-Dimensional (1D)
- Additional Properties: Post-processing recommended (text formatting, JSON parsing)
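Since the card recommends JSON parsing as a post-processing step for function-calling output, a minimal defensive parser might look like the sketch below. The assumed output shape (a JSON list of `{"name", "arguments"}` objects, in the style Mistral models use for tool calls) should be verified against your serving framework:

```python
import json

def parse_tool_calls(raw: str):
    """Best-effort parse of a function-calling response.

    Assumes the model emitted a JSON list of {"name", "arguments"}
    objects; returns [] if the text is not valid JSON.
    """
    try:
        calls = json.loads(raw)
    except json.JSONDecodeError:
        return []
    if isinstance(calls, dict):  # a single call emitted as one object
        calls = [calls]
    return [c for c in calls if isinstance(c, dict) and "name" in c]

raw = '[{"name": "get_weather", "arguments": {"city": "Paris"}}]'
calls = parse_tool_calls(raw)
```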
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
Supported Hardware Microarchitecture Compatibility
- NVIDIA Ampere
- NVIDIA Ada Lovelace (e.g., RTX 4090)
Preferred/Supported Operating Systems
- Linux
- Windows
- macOS
Model Versions
- Mistral-Small-3.1-24B-Instruct-2503 v1.0
Training, Testing, and Evaluation Datasets
Training Dataset:
- Data Collection Method by dataset: Undisclosed
- Labeling Method by dataset: Undisclosed
- Properties: Undisclosed
Testing Dataset:
- Data Collection Method by dataset: Undisclosed
- Labeling Method by dataset: Undisclosed
- Properties: Undisclosed
Evaluation Benchmark Results
When available, we report numbers previously published by other model providers; otherwise, we re-evaluate them using our own evaluation harness.
Pretrain Evals
| Model | MMLU (5-shot) | MMLU Pro (5-shot CoT) | TriviaQA | GPQA Main (5-shot CoT) | MMMU |
|---|---|---|---|---|---|
| Small 3.1 24B Base | 81.01% | 56.03% | 80.50% | 37.50% | 59.27% |
| Gemma 3 27B PT | 78.60% | 52.20% | 81.30% | 24.30% | 56.10% |
Instruction Evals
Text
| Model | MMLU | MMLU Pro (5-shot CoT) | MATH | GPQA Main (5-shot CoT) | GPQA Diamond (5-shot CoT) | MBPP | HumanEval | SimpleQA (TotalAcc) |
|---|---|---|---|---|---|---|---|---|
| Small 3.1 24B Instruct | 80.62% | 66.76% | 69.30% | 44.42% | 45.96% | 74.71% | 88.41% | 10.43% |
| Gemma 3 27B IT | 76.90% | 67.50% | 89.00% | 36.83% | 42.40% | 74.40% | 87.80% | 10.00% |
| GPT4o Mini | 82.00% | 61.70% | 70.20% | 40.20% | 39.39% | 84.82% | 87.20% | 9.50% |
| Claude 3.5 Haiku | 77.60% | 65.00% | 69.20% | 37.05% | 41.60% | 85.60% | 88.10% | 8.02% |
| Cohere Aya-Vision 32B | 72.14% | 47.16% | 41.98% | 34.38% | 33.84% | 70.43% | 62.20% | 7.65% |
Vision
| Model | MMMU | MMMU PRO | Mathvista | ChartQA | DocVQA | AI2D | MM MT Bench |
|---|---|---|---|---|---|---|---|
| Small 3.1 24B Instruct | 64.00% | 49.25% | 68.91% | 86.24% | 94.08% | 93.72% | 7.3 |
| Gemma 3 27B IT | 64.90% | 48.38% | 67.60% | 76.00% | 86.60% | 84.50% | 7.0 |
| GPT4o Mini | 59.40% | 37.60% | 56.70% | 76.80% | 86.70% | 88.10% | 6.6 |
| Claude 3.5 Haiku | 60.50% | 45.03% | 61.60% | 87.20% | 90.00% | 92.10% | 6.5 |
| Cohere Aya-Vision 32B | 48.20% | 31.50% | 50.10% | 63.04% | 72.40% | 82.57% | 4.1 |
Multilingual Evals
| Model | Average | European | East Asian | Middle Eastern |
|---|---|---|---|---|
| Small 3.1 24B Instruct | 71.18% | 75.30% | 69.17% | 69.08% |
| Gemma 3 27B IT | 70.19% | 74.14% | 65.65% | 70.76% |
| GPT4o Mini | 70.36% | 74.21% | 65.96% | 70.90% |
| Claude 3.5 Haiku | 70.16% | 73.45% | 67.05% | 70.00% |
| Cohere Aya-Vision 32B | 62.15% | 64.70% | 57.61% | 64.12% |
Long Context Evals
| Model | LongBench v2 | RULER 32K | RULER 128K |
|---|---|---|---|
| Small 3.1 24B Instruct | 37.18% | 93.96% | 81.20% |
| Gemma 3 27B IT | 34.59% | 91.10% | 66.00% |
| GPT4o Mini | 29.30% | 90.20% | 65.80% |
| Claude 3.5 Haiku | 35.19% | 92.60% | 91.90% |
Basic Instruct Template (V7-Tekken)
<s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST]
<system prompt>, <user message>, and <assistant response> are placeholders.
Please use mistral-common as the source of truth for tokenization and chat templating.
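For illustration only, the template above can be rendered in plain Python. This is a sketch of the string layout, not a substitute for mistral-common, which handles real tokenization and edge cases:

```python
def render_v7_tekken(system_prompt, turns):
    """Render the V7-Tekken instruct template shown above.

    `turns` is a list of (user_message, assistant_response) pairs;
    the final pair may use None for a pending assistant response.
    """
    out = f"<s>[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT]"
    for user, assistant in turns:
        out += f"[INST]{user}[/INST]"
        if assistant is not None:
            out += f"{assistant}</s>"
    return out

prompt = render_v7_tekken("You are helpful.", [("Hi", "Hello!"), ("Bye", None)])
```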
Usage
The model can be used with the following frameworks:
- vLLM (recommended)
Note 1: We recommend using a relatively low temperature, such as temperature=0.15.
Note 2: Make sure to add a system prompt to the model to best tailor it to your needs. If you want to use the model as a general assistant, we recommend the following system prompt:
```python
system_prompt = """You are Mistral Small 3.1, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
You power an AI assistant called Le Chat.
Your knowledge base was last updated on 2023-10-01.
The current date is {today}.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. "What are some good restaurants around me?" => "Where are you?" or "When is the next flight to Tokyo" => "Where do you travel from?").
You are always very attentive to dates, in particular you try to resolve dates (e.g. "yesterday" is {yesterday}) and when asked about information at specific dates, you discard information that is at another date.
You follow these instructions in all languages, and always respond to the user in the language they use or request.
Next sections describe the capabilities that you have.
# WEB BROWSING INSTRUCTIONS
You cannot perform any web search or access internet to open URLs, links etc. If it seems like the user is expecting you to do so, you clarify the situation and ask the user to copy paste the text directly in the chat.
# MULTI-MODAL INSTRUCTIONS
You have the ability to read images, but you cannot generate images. You also cannot transcribe audio files or videos."""
```
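The {today} and {yesterday} placeholders in the system prompt are meant to be filled at request time. A minimal sketch, using an abbreviated stand-in for the prompt and an OpenAI-style chat-completions payload with the recommended low temperature (the model identifier and payload shape are illustrative assumptions, not values from this card):

```python
from datetime import date, timedelta

# Abbreviated stand-in for the full system prompt shown above.
system_prompt = "The current date is {today}. Yesterday was {yesterday}."

today = date.today()
filled = system_prompt.format(
    today=today.isoformat(),
    yesterday=(today - timedelta(days=1)).isoformat(),
)

# Illustrative OpenAI-style request payload using the recommended
# low temperature of 0.15.
payload = {
    "model": "mistralai/mistral-small-3.1-24b-instruct-2503",
    "temperature": 0.15,
    "messages": [
        {"role": "system", "content": filled},
        {"role": "user", "content": "Summarize this document."},
    ],
}
```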
Inference
Engine: vLLM (recommended)
Test Hardware:
- NVIDIA RTX 4090 or equivalent GPU (full bf16/fp16 precision requires roughly 55 GB of GPU RAM; quantization is needed to fit a 24 GB card)
- NVIDIA L40S
Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. Please report security vulnerabilities or NVIDIA AI Concerns here.