GPT OSS 120B Overview
Description:
OpenAI releases the gpt-oss family of open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases. The family consists of:
- gpt-oss-120b — for production, general-purpose, high-reasoning use cases; fits on a single H100 GPU (117B parameters with 5.1B active parameters)
- gpt-oss-20b — for lower-latency, local, or specialized use cases (21B parameters with 3.6B active parameters)
The gpt-oss-120b model is a Mixture-of-Experts (MoE) model featuring SwiGLU activations and learned attention sinks. It functions as a reasoning model, supporting chain-of-thought processing, adjustable reasoning effort levels, instruction following, and tool use. The model is text-only for both input and output modalities, enabling enterprises and governments to deploy it on-premises or in private cloud environments for enhanced data security and privacy.
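To make the activation choice concrete, below is a minimal sketch of a SwiGLU feed-forward block in PyTorch; the layer names and dimensions are illustrative assumptions, not the reference implementation.

```python
# Minimal SwiGLU feed-forward sketch (illustrative names and sizes,
# not the reference implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.w_gate = nn.Linear(d_model, d_hidden, bias=False)  # gating branch
        self.w_up = nn.Linear(d_model, d_hidden, bias=False)    # value branch
        self.w_down = nn.Linear(d_hidden, d_model, bias=False)  # project back to d_model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SwiGLU: SiLU-activated gate multiplied elementwise with the value branch.
        return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))

# Example: a (batch, seq, d_model) tensor through the block.
y = SwiGLU(d_model=512, d_hidden=1408)(torch.randn(2, 16, 512))
```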
Model Highlights:
- Permissive Apache 2.0 license: Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
- Configurable reasoning effort: Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs; a usage sketch follows this list.
- Full chain-of-thought: Gain complete access to the model's reasoning process, facilitating easier debugging and increased trust in outputs. It's not intended to be shown to end users.
- Fine-tunable: Fully customize models to your specific use case through parameter fine-tuning.
- Agentic capabilities: Use the models' native capabilities for function calling, web browsing, Python code execution, and structured outputs.
This model is ready for commercial/non-commercial use.
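As a usage sketch for the configurable reasoning effort highlighted above, the snippet below assumes the model is served behind an OpenAI-compatible endpoint; the base URL, API key, and model name are placeholders, and for gpt-oss the effort level is typically requested in the system prompt.

```python
# Sketch: requesting a reasoning effort level from a gpt-oss-120b deployment
# behind an OpenAI-compatible endpoint. Endpoint, key, and model name are
# placeholders; adjust to your serving stack.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="gpt-oss-120b",
    messages=[
        # For gpt-oss, the reasoning effort (low / medium / high) is typically
        # selected in the system prompt.
        {"role": "system", "content": "Reasoning: high"},
        {"role": "user", "content": "Explain in two sentences why the sky is blue."},
    ],
)
print(response.choices[0].message.content)
```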
Third-Party Community Consideration
This model is not owned or developed by NVIDIA. This model has been developed and built to a third-party’s requirements for this application and use case; see link to Non-NVIDIA gpt-oss-120b model card.
License and Terms of Use:
GOVERNING TERMS: This trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Community Model License. Additional Information: Apache License Version 2.0.
Deployment Geography:
Global
Use Case:
Intended for use as a reasoning model with features like chain-of-thought and adjustable reasoning effort levels. It supports instruction following and tool use, offering transparency, customization, and deployment flexibility for developers, researchers, and startups. Additionally, it enables enterprises and governments to deploy on-premises or in private clouds to ensure data security and privacy.
Release Date:
Build.NVIDIA.com - 08/05/2025 via link
Hugging Face - 08/05/2025 via link
Reference(s):
Model Architecture:
Architecture Type: Transformer
Network Architecture: Mixture-of-Experts (MoE)
Total Parameters: 117B
Active Parameters: 5.1B
Vocabulary Size: 201,088
Input:
Input Type(s): Text
Input Format(s): String
Input Parameters: One Dimensional (1D)
Other Properties Related to Input: Uses RoPE with a 128k context length, with attention layers alternating between full context and a sliding 128-token window. Includes a learned attention sink per head. Employs SwiGLU activations in the MoE layers, and the router performs a Top-K operation (K=4) followed by a sigmoid. GEMMs in the MoE include a per-expert bias. Utilizes tiktoken for tokenization. Input Context Length (ISL): 128,000
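The routing step described above (Top-K over router logits with K=4, then a sigmoid on the selected scores) can be sketched as follows; the function, tensor shapes, and bias handling are illustrative assumptions rather than the reference implementation.

```python
# Schematic of Top-K (K=4) routing followed by a sigmoid on the selected scores.
# Shapes and names are illustrative.
import torch

def route_tokens(x, router_weight, router_bias, experts, k=4):
    # x: (tokens, d_model); router_weight: (num_experts, d_model); router_bias: (num_experts,)
    logits = x @ router_weight.T + router_bias           # (tokens, num_experts)
    top_vals, top_idx = torch.topk(logits, k, dim=-1)    # pick the K best experts per token
    gates = torch.sigmoid(top_vals)                      # sigmoid on the selected scores
    out = torch.zeros_like(x)
    for slot in range(k):
        for e, expert in enumerate(experts):
            mask = top_idx[:, slot] == e
            if mask.any():
                out[mask] += gates[mask, slot, None] * expert(x[mask])
    return out

# Example with toy experts mapping d_model -> d_model:
experts = [torch.nn.Linear(64, 64) for _ in range(8)]
y = route_tokens(torch.randn(10, 64), torch.randn(8, 64), torch.zeros(8), experts)
```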
Output:
Output Type(s): Text
Output Format: String
Output Parameters: One Dimensional (1D)
Other Properties Related to Output: The model is designed to be compatible with the OpenAI Responses API and supports Structured Output.
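As a sketch of the output-side compatibility noted above, the call below uses the OpenAI Responses API shape against a placeholder local endpoint; whether Structured Output is honored in exactly this form depends on the serving stack, and the schema shown is illustrative.

```python
# Sketch: Responses-API-style call with a Structured Output schema.
# Endpoint, model name, and schema are placeholders; support depends on the server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.responses.create(
    model="gpt-oss-120b",
    input="List two trade-offs of Mixture-of-Experts models as JSON.",
    text={
        "format": {
            "type": "json_schema",
            "name": "trade_offs",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "trade_offs": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["trade_offs"],
                "additionalProperties": False,
            },
        }
    },
)
print(response.output_text)  # JSON text constrained by the schema above
```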
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
Software Integration:
Runtime Engine(s):
- NeMo Framework (based on 25.07)
Supported Hardware Microarchitecture Compatibility:
- NVIDIA Blackwell: B200, GB200
- NVIDIA Hopper: H200
Operating System(s): Linux
Model Version(s):
gpt-oss-120b
v1.0 (August 5, 2025)
Training, Testing, and Evaluation Datasets:
Training Dataset:
- Training Data Collection: Undisclosed
- Training Labeling: Undisclosed
- Training Properties: The model has approximately 117 billion parameters. Weights for all layers are in BF16, except for MoE projection weights, which are in MXFP4. The reference implementation currently upcasts all weights to BF16. Activations are expected to be in BF16 or FP8.
Testing Dataset:
- Testing Data Collection: Undisclosed
- Testing Labeling: Undisclosed
- Testing Properties: The model is tested against benchmarks such as MMLU, GPQA, LiveCodeBench, AIME 2024, and MATH-500, among others.
Evaluation Dataset:
- Evaluation Data Collection: Undisclosed
- Evaluation Labeling: Undisclosed
- Evaluation Benchmark Score:
Benchmark | gpt-oss-120b | gpt-oss-20b |
---|---|---|
AIME 2024 (no tools) | 95.8 | 92.1 |
AIME 2024 (with tools) | 96.6 | 96.0 |
AIME 2025 (no tools) | 92.5 | 91.7 |
AIME 2025 (with tools) | 97.9 | 98.7 |
GPQA Diamond (no tools) | 80.1 | 71.5 |
GPQA Diamond (with tools) | 80.9 | 74.2 |
HLE (no tools) | 14.9 | 10.9 |
HLE (with tools) | 19.0 | 17.3 |
MMLU | 90.0 | 85.3 |
SWE-Bench Verified | 62.4 | 60.7 |
Tau-Bench Retail | 67.8 | 54.4 |
Tau-Bench Airline | 49.2 | 38.0 |
Aider Polyglot | 44.4 | 34.2 |
MMMLU (Average) | 81.3 | 75.6 |
HealthBench | 57.6 | 42.5 |
HealthBench Hard | 30.0 | 10.8 |
HealthBench Consensus | 89.9 | 82.6 |
Codeforces (no tools) [elo] | 2463 | 2230 |
Codeforces (with tools) [elo] | 2622 | 2516 |
The above scores were measured at the high reasoning effort level.
Safety Results:
The following evaluations check that the model does not comply with requests for content that is
disallowed under OpenAI’s safety policies, including hateful content or illicit advice.
Category | gpt-oss-120b | gpt-oss-20b |
---|---|---|
hate (aggregate) | 0.996 | 0.996 |
self-harm/intent and self-harm/instructions | 0.995 | 0.984 |
personal data/semi restrictive | 0.967 | 0.947 |
sexual/exploitative | 1.000 | 0.980 |
sexual/minors | 1.000 | 0.971 |
illicit/non-violent | 1.000 | 0.983 |
illicit/violent | 1.000 | 1.000 |
personal data/restricted | 0.996 | 0.978 |
Inference:
Acceleration Engine: vLLM
Test Hardware: NVIDIA Blackwell: GB200
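A minimal offline-inference sketch with vLLM (the acceleration engine listed above) is shown below; the model identifier, parallelism, and sampling settings are illustrative and depend on your vLLM version and GPUs.

```python
# Sketch: offline inference with vLLM. Settings are illustrative; adjust
# tensor_parallel_size and sampling to your hardware and use case.
from vllm import LLM, SamplingParams

llm = LLM(model="openai/gpt-oss-120b", tensor_parallel_size=1)
params = SamplingParams(temperature=1.0, max_tokens=256)
outputs = llm.generate(["Summarize what a Mixture-of-Experts layer does."], params)
print(outputs[0].outputs[0].text)
```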
Additional Details:
The model is released with native quantization support. Specifically, MXFP4 is used for the linear projection weights in the MoE layers. Each MoE weight tensor is stored in two parts:
- `tensor.blocks` stores the actual FP4 values, with every two values packed into one `uint8` value.
- `tensor.scales` stores the block scales; block scaling is applied along the last dimension for all MXFP4 tensors.
All other tensors are stored in BF16. It is recommended to use BF16 as the activation precision for the model.
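For illustration of the storage layout above, here is a sketch of dequantizing such a tensor; the FP4 (E2M1) value table, 32-element block size, nibble order, and E8M0 scale encoding follow the OCP MX convention and are assumptions here rather than details confirmed by the checkpoint format.

```python
# Illustrative MXFP4 dequantization: unpack two FP4 codes per uint8 and apply one
# power-of-two scale per block along the last dimension. Block size, nibble order,
# and scale encoding are assumed (OCP MX convention), not confirmed by the source.
import numpy as np

FP4_E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0,
                     -0.0, -0.5, -1.0, -1.5, -2.0, -3.0, -4.0, -6.0], dtype=np.float32)

def dequantize_mxfp4(blocks: np.ndarray, scales: np.ndarray, block_size: int = 32) -> np.ndarray:
    # blocks: uint8, two FP4 codes packed per byte along the last dimension.
    # scales: uint8 (E8M0, bias 127), one scale per block of the last dimension.
    lo = blocks & 0x0F                                   # first code (assumed low nibble)
    hi = (blocks >> 4) & 0x0F                            # second code (assumed high nibble)
    codes = np.stack([lo, hi], axis=-1).reshape(*blocks.shape[:-1], -1)
    values = FP4_E2M1[codes]                             # map codes to FP4 values
    scale = np.exp2(scales.astype(np.float32) - 127.0)   # power-of-two block scales
    values = values.reshape(*values.shape[:-1], -1, block_size) * scale[..., None]
    return values.reshape(codes.shape)
```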
Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns here.