openai / gpt-oss-20b

GPT OSS 20B Overview

Description:

OpenAI releases the gpt-oss family of open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases. The family consists of:

  • gpt-oss-120b — for production, general-purpose, high-reasoning use cases; fits on a single H100 GPU (117B parameters, 5.1B active parameters).
  • gpt-oss-20b — for lower-latency, local, or specialized use cases (21B parameters, 3.6B active parameters).

The gpt-oss-20b is designed as a Mixture-of-Experts (MoE) model, structurally identical to the larger 117B variant, albeit with different hyperparameters. This model leverages SwiGLU activations and incorporates learned attention sinks within its architecture. Functionally, it serves as a robust reasoning model, supporting advanced capabilities such as chain-of-thought processing, adjustable reasoning effort levels, instruction following, and tool use. It operates strictly with text-only modalities for both input and output. A key strategic benefit is its suitability for enterprises and governments, facilitating on-premises or private cloud deployment to ensure enhanced data security and privacy.
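
As a rough illustration of the learned attention sinks mentioned above, the sketch below adds a per-head learned sink logit to the softmax normalization so a head can down-weight its entire output for some queries. This is a simplified approximation under stated assumptions (hypothetical tensor names, causal masking omitted), not the reference implementation.

```python
import torch

def attention_with_sink(q, k, v, sink_logit):
    """Scaled dot-product attention with a per-head learned "sink" logit.

    q, k, v: [heads, seq, head_dim]; sink_logit: [heads] (a learned parameter).
    The sink takes part in the softmax normalization but contributes no value,
    so a head can attenuate its whole output for queries that attend "nowhere".
    Causal masking is omitted for brevity.
    """
    head_dim = q.shape[-1]
    scores = q @ k.transpose(-1, -2) / head_dim ** 0.5      # [heads, q_len, k_len]
    sink = sink_logit.view(-1, 1, 1).expand(-1, scores.shape[1], 1)
    scores = torch.cat([scores, sink], dim=-1)              # append the sink column
    probs = torch.softmax(scores, dim=-1)[..., :-1]         # drop the sink weight
    return probs @ v
```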

Model Highlights:

  • Permissive Apache 2.0 license: Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
  • Configurable reasoning effort: Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs (see the usage sketch after this list).
  • Full chain-of-thought: Gain complete access to the model's reasoning process, facilitating easier debugging and increased trust in outputs. The chain-of-thought is not intended to be shown to end users.
  • Fine-tunable: Fully customize models to your specific use case through parameter fine-tuning.
  • Agentic capabilities: Use the models' native capabilities for function calling, web browsing, Python code execution, and structured outputs.
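
A minimal sketch of selecting the reasoning effort through an OpenAI-compatible endpoint. It assumes a server (for example, a local vLLM instance or a hosted endpoint) exposing gpt-oss-20b at a placeholder URL; with the gpt-oss chat format, the effort level is typically stated in the system prompt.

```python
from openai import OpenAI

# Placeholder endpoint and credentials; substitute the values for your deployment.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[
        # Reasoning effort is selected in the system prompt: low, medium, or high.
        {"role": "system", "content": "Reasoning: high"},
        {"role": "user", "content": "How many r's are in the word strawberry?"},
    ],
)
print(response.choices[0].message.content)
```

Lower effort levels trade reasoning depth for latency, matching the low/medium/high options listed above.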

This model is ready for commercial/non-commercial use.

Third-Party Community Consideration

This model is not owned or developed by NVIDIA. This model has been developed and built to a third-party’s requirements for this application and use case; see link to Non-NVIDIA gpt-oss-20b model card.

License and Terms of Use:

GOVERNING TERMS: This trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Community Model License. Additional Information: Apache License Version 2.0.

Deployment Geography:

Global

Use Case:

Intended for use as a reasoning model, offering features like chain-of-thought and adjustable reasoning effort levels. It provides comprehensive support for instruction following and tool use, fostering transparency, customization, and deployment flexibility for developers, researchers, and startups. Crucially, it enables enterprises and governments to deploy on-premises or in private clouds, ensuring stringent data security and privacy requirements are met.
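
To make the tool-use capability concrete, the sketch below registers a single hypothetical function using the OpenAI-style `tools` parameter against an OpenAI-compatible server; the endpoint, model id, and tool schema are placeholders.

```python
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

# A placeholder tool definition in the OpenAI function-calling schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)

# If the model decides to call the tool, the arguments arrive as a JSON string.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```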

Release Date:

Build.NVIDIA.com - 08/05/2025 via link

Hugging Face - 08/05/2025 via link

Reference(s):

Model Architecture:

Architecture Type: Transformer

Network Architecture: Mixture-of-Experts (MoE)

Total Parameters: 20B

Active Parameters: 4B

Vocabulary Size: 201,088 (Utilizes the standard tokenizer used by GPT-4o)

Input:

Input Type(s): Text

Input Format(s): String

Input Parameters: One Dimensional (1D)

Other Properties Related to Input: Uses RoPE with a 128k context length, with attention layers alternating between full context and a sliding 128-token window. Includes a learned attention sink per head. Employs SwiGLU activations in the MoE layers, and the router performs a Top-K operation (K=4) followed by a Sigmoid function. GEMMs in the MoE include a per-expert bias. Utilizes tiktoken for tokenization. Input Context Length (ISL): 128,000
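
The MoE routing described above can be sketched as follows. This is an illustrative simplification with hypothetical shapes and module names, using the Top-4 selection, sigmoid gate, biased GEMMs, and SwiGLU experts as stated here; the reference implementation may fuse projections and organize the computation differently.

```python
import torch
import torch.nn.functional as F

def moe_forward(x, router_w, router_b, experts, k=4):
    """Illustrative MoE routing for x: [tokens, hidden].

    `experts` is a list of per-expert SwiGLU MLPs; the router picks the top
    K=4 experts per token and gates their outputs as described above.
    """
    logits = x @ router_w.T + router_b                   # [tokens, n_experts]
    topk_vals, topk_idx = torch.topk(logits, k, dim=-1)  # Top-K expert selection
    gates = torch.sigmoid(topk_vals)                     # gate over selected logits

    out = torch.zeros_like(x)
    for e, expert in enumerate(experts):
        token_ids, slot = (topk_idx == e).nonzero(as_tuple=True)
        if token_ids.numel() == 0:
            continue
        out[token_ids] += gates[token_ids, slot, None] * expert(x[token_ids])
    return out


class SwiGLUExpert(torch.nn.Module):
    """One expert: a SwiGLU MLP whose projections carry per-expert biases."""
    def __init__(self, hidden, inner):
        super().__init__()
        self.gate = torch.nn.Linear(hidden, inner)   # bias=True: per-expert bias
        self.up = torch.nn.Linear(hidden, inner)
        self.down = torch.nn.Linear(inner, hidden)

    def forward(self, x):
        return self.down(F.silu(self.gate(x)) * self.up(x))
```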

Output:

Output Type(s): Text

Output Format: String

Output Parameters: One Dimensional (1D)

Other Properties Related to Output: The model is architected to be compatible with the OpenAI Responses API and supports Structured Output, aligning with key partner expectations for advanced response formatting.
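
Since structured output is highlighted above, here is a hedged sketch of requesting schema-conforming JSON through an OpenAI-compatible endpoint. The `response_format` shape follows the OpenAI JSON-schema convention; support and exact syntax may vary by serving stack, and the schema itself is a placeholder.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

# Hypothetical schema: constrain the reply to a small, well-typed JSON object.
schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "population": {"type": "integer"},
    },
    "required": ["city", "population"],
    "additionalProperties": False,
}

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[{"role": "user", "content": "Give me the most populous city in France."}],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "city_info", "schema": schema, "strict": True},
    },
)
print(response.choices[0].message.content)  # JSON string matching the schema
```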

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

Runtime Engine(s):

  • NeMo Framework (based on 25.07)

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Blackwell: B200, GB200
  • NVIDIA Hopper: H200

Operating System(s): Linux

Model Version(s):

gpt-oss-20b v1.0 (August 5, 2025)

Training, Testing, and Evaluation Datasets:

Training Dataset:

  • Training Data Collection: Undisclosed
  • Training Labeling: Undisclosed
  • Training Properties: The gpt-oss-20b model has approximately 20 billion total parameters, with approximately 4 billion active parameters per inference. The weights for all layers are in BF16, except for the MoE projection weights, which are in MXFP4. The reference implementation, for initial accuracy validation, currently upcasts all weights to BF16. Activations are expected to be in BF16 or FP8.

Testing Dataset:

  • Testing Data Collection: Undisclosed
  • Testing Labeling: Undisclosed
  • Testing Properties: The model's performance is tested against recognized benchmarks such as MMLU (Massive Multitask Language Understanding) and GPQA (Graduate-Level Google-Proof Q&A), alongside other benchmarks including LiveCodeBench, AIME 2024, and MATH-500.

Evaluation Dataset:

  • Evaluation Data Collection: Undisclosed
  • Evaluation Labeling: Undisclosed
  • Evaluation Benchmark Score:

| Benchmark | gpt-oss-120b | gpt-oss-20b |
|---|---|---|
| AIME 2024 (no tools) | 95.8 | 92.1 |
| AIME 2024 (with tools) | 96.6 | 96.0 |
| AIME 2025 (no tools) | 92.5 | 91.7 |
| AIME 2025 (with tools) | 97.9 | 98.7 |
| GPQA Diamond (no tools) | 80.1 | 71.5 |
| GPQA Diamond (with tools) | 80.9 | 74.2 |
| HLE (no tools) | 14.9 | 10.9 |
| HLE (with tools) | 19.0 | 17.3 |
| MMLU | 90.0 | 85.3 |
| SWE-Bench Verified | 62.4 | 60.7 |
| Tau-Bench Retail | 67.8 | 54.4 |
| Tau-Bench Airline | 49.2 | 38.0 |
| Aider Polyglot | 44.4 | 34.2 |
| MMMLU (Average) | 81.3 | 75.6 |
| HealthBench | 57.6 | 42.5 |
| HealthBench Hard | 30.0 | 10.8 |
| HealthBench Consensus | 89.9 | 82.6 |
| Codeforces (no tools) [Elo] | 2463 | 2230 |
| Codeforces (with tools) [Elo] | 2622 | 2516 |

The scores above were measured at the high reasoning effort level.

Safety Results:

The following evaluations check that the model does not comply with requests for content that is
disallowed under OpenAI’s safety policies, including hateful content or illicit advice.

| Category | gpt-oss-120b | gpt-oss-20b |
|---|---|---|
| hate (aggregate) | 0.996 | 0.996 |
| self-harm/intent and self-harm/instructions | 0.995 | 0.984 |
| personal data/semi restrictive | 0.967 | 0.947 |
| sexual/exploitative | 1.000 | 0.980 |
| sexual/minors | 1.000 | 0.971 |
| illicit/non-violent | 1.000 | 0.983 |
| illicit/violent | 1.000 | 1.000 |
| personal data/restricted | 0.996 | 0.978 |

Inference:

Acceleration Engine: vLLM

Test Hardware: NVIDIA Hopper (H200)
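
Because vLLM is listed as the acceleration engine, a minimal offline-inference sketch is included below. The model id and sampling settings are placeholders, and running gpt-oss-20b may require a sufficiently recent vLLM release with MXFP4 support.

```python
from vllm import LLM, SamplingParams

# Placeholder model id; adjust tensor_parallel_size / dtype for your hardware.
llm = LLM(model="openai/gpt-oss-20b")

params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.chat(
    [{"role": "user", "content": "Summarize mixture-of-experts routing in two sentences."}],
    sampling_params=params,
)
print(outputs[0].outputs[0].text)
```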

Additional Details

The model is released with native quantization support. Specifically, MXFP4 is used for the linear projection weights in the MoE layers. Each MoE tensor is stored in two parts:

  • tensor.blocks stores the actual fp4 values. Every two values are packed in one uint8 value.
  • tensor.scales stores the block scales. Block scaling is applied along the last dimension for all MXFP4 tensors.

All other tensors are stored in BF16. It is recommended to use BF16 as the activation precision for the model.
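
To make the storage layout concrete, the sketch below dequantizes an MXFP4 tensor from its tensor.blocks / tensor.scales pair. The FP4 (E2M1) value table and the power-of-two (E8M0, bias 127) interpretation of the scales follow the MX format convention, and the low-nibble-first packing order is an assumption not stated in this card.

```python
import numpy as np

# FP4 (E2M1) code points 0..15 mapped to real values (standard MX FP4 table).
FP4_VALUES = np.array(
    [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0,
     -0.0, -0.5, -1.0, -1.5, -2.0, -3.0, -4.0, -6.0],
    dtype=np.float32,
)

def dequantize_mxfp4(blocks: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Expand `blocks` (uint8, two FP4 values per byte) with per-block `scales`
    (uint8 exponents, assumed E8M0 with bias 127) along the last dimension.
    """
    lo = FP4_VALUES[blocks & 0x0F]          # low nibble first: an assumption
    hi = FP4_VALUES[blocks >> 4]
    vals = np.stack([lo, hi], axis=-1).reshape(*blocks.shape[:-1], -1)

    scale = np.exp2(scales.astype(np.float32) - 127.0)   # E8M0 -> power of two
    block = vals.shape[-1] // scales.shape[-1]            # elements per block
    return (vals.reshape(*scales.shape, block) * scale[..., None]).reshape(vals.shape)
```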

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns here.