Model Overview
Description:
Mistral Small 3 (2501) sets a new benchmark in the "small" Large Language Models category below 70B, boasting 24B parameters and achieving state-of-the-art capabilities comparable to larger models.
Perfect for:
- Fast response conversational agents.
- Low latency function calling.
- Subject matter experts via fine-tuning.
- Local inference for hobbyists and organizations handling sensitive data.
This model is ready for commercial use.
Key features
- Multilingual: Supports dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish.
- Agent-Centric: Offers best-in-class agentic capabilities with native function calling and JSON outputting.
- Advanced Reasoning: State-of-the-art conversational and reasoning capabilities.
- Apache 2.0 License: Open license allowing usage and modification for both commercial and non-commercial purposes.
- Context Window: A 32k context window.
- System Prompt: Maintains strong adherence and support for system prompts.
- Tokenizer: Utilizes a Tekken tokenizer with a 131k vocabulary size.
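The native function calling and JSON output mentioned above are typically exercised through an OpenAI-compatible chat endpoint. The sketch below builds a minimal function-calling request body; the model identifier, tool name, and schema are illustrative assumptions, not part of any official API.

```python
import json

# Sketch of a function-calling request for an OpenAI-compatible endpoint.
# The model name and the get_weather tool are assumptions for illustration;
# adjust both for your actual deployment.
payload = {
    "model": "mistralai/mistral-small-24b-instruct",
    "messages": [
        {"role": "system", "content": "You are a weather assistant."},
        {"role": "user", "content": "What's the weather in Paris?"},
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

body = json.dumps(payload)
```

When the model elects to call the tool, the response carries a `tool_calls` entry with JSON arguments conforming to the declared schema, which the client executes and feeds back as a `tool` message.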
Third-Party Community Consideration:
This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the non-NVIDIA Mistral Small 3 release announcement.
License & Terms of use
GOVERNING TERMS: This trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Community Model License. Additional Information: Apache 2.0.
Reference(s):
Mistral Small 3 Blogpost
[Mistral-Small-24B-Instruct (HuggingFace)](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501)
Model Architecture:
Architecture Type: Transformer
Network Architecture: Mistral
Model Version: Small 3 (2501)
This transformer model has the following characteristics:
- Layers: 40
- Dim: 5,120
- Head dim: 128
- Hidden dim: 32,768
- Activation Function: SwiGLU
- Number of heads: 32
- Number of kv-heads: 8 (GQA)
- Rotary embeddings (theta = 1M)
- Vocabulary size: 131,072
- Context length: 32,768
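As a sanity check, the architecture numbers above can be turned into a rough parameter count. The sketch below ignores norm weights and assumes untied input/output embeddings; it is an estimate, not an official figure.

```python
# Rough parameter count from the architecture characteristics above
# (ignores norm weights; assumes untied input/output embeddings).
dim, hidden, layers = 5120, 32768, 40
n_heads, n_kv_heads, head_dim = 32, 8, 128
vocab = 131072  # Tekken tokenizer, ~131k vocabulary

attn = dim * (n_heads * head_dim)            # Wq
attn += dim * (n_kv_heads * head_dim) * 2    # Wk, Wv (GQA: 8 kv-heads)
attn += (n_heads * head_dim) * dim           # Wo
mlp = 3 * dim * hidden                       # SwiGLU: gate, up, down projections
per_layer = attn + mlp

total = layers * per_layer + 2 * vocab * dim  # + embedding and lm_head matrices
print(f"~{total / 1e9:.1f}B parameters")      # ~23.6B, consistent with the 24B figure
```

The estimate lands close to the advertised 24B; the small remainder comes from norm weights and rounding in the marketing figure.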
Input
- Input Type: Text
- Input Format: String
- Input Parameters: 1D
- Other Properties: Supports the sampling parameters max_tokens, temperature, top_p, stop, frequency_penalty, presence_penalty, and seed
Output
- Output Type: Text
- Output Format: String
- Output Parameters: 1D
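The sampling parameters listed under Input map directly onto fields of an OpenAI-compatible chat request body. A minimal sketch, assuming a deployment-specific model identifier; the particular values shown are illustrative defaults, not recommendations.

```python
import json

# Example request body exercising the documented sampling parameters
# (model name is a deployment-specific assumption).
request_body = {
    "model": "mistralai/mistral-small-24b-instruct",
    "messages": [{"role": "user", "content": "Summarize GQA in one sentence."}],
    "max_tokens": 256,        # cap on generated tokens
    "temperature": 0.15,      # low temperature for focused answers
    "top_p": 1.0,             # nucleus-sampling threshold
    "stop": ["</s>"],         # optional stop sequences
    "frequency_penalty": 0.0, # discourage token repetition when > 0
    "presence_penalty": 0.0,  # discourage reusing seen tokens when > 0
    "seed": 42,               # best-effort reproducibility
}

serialized = json.dumps(request_body, indent=2)
```

The serialized body is what a client would POST to the chat-completions endpoint of the serving stack.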
Benchmarks
Publicly accessible benchmarks
Reasoning & Knowledge
| Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|---|---|---|---|---|---|
| mmlu_pro_5shot_cot_instruct | 0.663 | 0.536 | 0.666 | 0.683 | 0.617 |
| gpqa_main_cot_5shot_instruct | 0.453 | 0.344 | 0.531 | 0.404 | 0.377 |
Math & Coding
| Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|---|---|---|---|---|---|
| humaneval_instruct_pass@1 | 0.848 | 0.732 | 0.854 | 0.909 | 0.890 |
| math_instruct | 0.706 | 0.535 | 0.743 | 0.819 | 0.761 |
Instruction following
| Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|---|---|---|---|---|---|
| mtbench_dev | 8.35 | 7.86 | 7.96 | 8.26 | 8.33 |
| wildbench | 52.27 | 48.21 | 50.04 | 52.73 | 56.13 |
| arena_hard | 0.873 | 0.788 | 0.840 | 0.860 | 0.897 |
| ifeval | 0.829 | 0.8065 | 0.8835 | 0.8401 | 0.8499 |
Note:
- Performance on all benchmarks was obtained through the same internal evaluation pipeline; as such, numbers may vary slightly from previously reported results (Qwen2.5-32B-Instruct, Llama-3.3-70B-Instruct, Gemma-2-27B-IT).
- Judge-based evals such as WildBench, Arena Hard, and MTBench used gpt-4o-2024-05-13 as the judge.
Software Integration:
Runtime Engine(s): TensorRT-LLM
Supported Hardware Microarchitecture Compatibility: NVIDIA Ampere, NVIDIA Blackwell, NVIDIA Jetson, NVIDIA Hopper, NVIDIA Lovelace, NVIDIA Pascal, NVIDIA Turing, and NVIDIA Volta architecture
Supported Operating System(s): Linux
Model Version(s):
mistral-small-24b-instruct v1.0
Training, Testing, and Evaluation Datasets:
Training Dataset:
Data Collection Method by dataset: Human, Unknown
Labeling Method by dataset: Human, Unknown
Testing Dataset:
Data Collection Method by dataset: Human, Unknown
Labeling Method by dataset: Human, Unknown
Evaluation Dataset:
Data Collection Method by dataset: Human, Unknown
Labeling Method by dataset: Human, Unknown
Inference:
Engine: TensorRT-LLM
Test Hardware: L40S
Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. Please report security vulnerabilities or NVIDIA AI Concerns here.