marin / marin-8b-instruct

Marin 8B Instruct Overview

Description:

Marin 8B Instruct is a Transformer-style autoregressive language model fine-tuned from marin-8b-base to follow instructions and engage in dialogue. It is intended for tasks such as question answering, summarization, code generation, and dialogue.

  • Developed by: The Marin team at Stanford CRFM.
  • Model type: a Transformer-style autoregressive language model.
  • Knowledge Cutoff: ~July 2024
  • Language(s) (NLP): English
  • License: The code and model are released under Apache 2.0.
  • Contact: dlwh at stanford.edu

This model is ready for non-commercial/research use.

Third-Party Community Consideration

This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the Non-NVIDIA marin-8b-instruct Model Card.

License and Terms of use:

GOVERNING TERMS: This trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Community Model License. Additional Information: Apache 2.0.

Deployment Geography:

Global

Use Case:

The Marin 8B Instruct model is designed for tasks requiring instruction comprehension and generation, such as question answering, summarization, code generation, and dialogue. It is positioned as a research artifact or a foundational instruct model upon which others can build and implement their own safety protocols.

Release Date:

  • Build.nvidia.com: May 2025
  • Hugging Face: May 2025

Reference(s):

Model Architecture:

  • Architecture Type: Transformer (Autoregressive Language Model)
  • Network Architecture: Llama 3 8B
  • This model was developed based on marin-8b-base.
  • This model has 8.03 billion parameters, with the following dimensions (a configuration sketch follows this list):
    • Hidden Size: 4096
    • Feedforward Size: 14336
    • Number of Layers: 32
    • Number of Attention Heads: 32
    • Number of Key-Value (KV) Heads: 8 (Grouped-Query Attention)
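As a rough cross-check, these hyperparameters can be expressed as a Hugging Face LlamaConfig, since the network follows the Llama 3 8B architecture. The sketch below is illustrative only: vocab_size and rope_theta are assumptions carried over from Llama 3 8B and are not stated on this card.

```python
# Hedged sketch: the hyperparameters above expressed as a Hugging Face
# LlamaConfig. Values marked ASSUMPTION are taken from Llama 3 8B, not
# from this card, and may differ for Marin 8B.
from transformers import LlamaConfig

config = LlamaConfig(
    hidden_size=4096,               # Hidden Size (this card)
    intermediate_size=14336,        # Feedforward Size (this card)
    num_hidden_layers=32,           # Number of Layers (this card)
    num_attention_heads=32,         # Number of Attention Heads (this card)
    num_key_value_heads=8,          # 8 KV heads -> Grouped-Query Attention (this card)
    max_position_embeddings=4096,   # 4K context window (see Input section)
    vocab_size=128256,              # ASSUMPTION: Llama 3 tokenizer vocabulary
    rope_theta=500000.0,            # ASSUMPTION: Llama 3 RoPE base
)
print(config.num_attention_heads // config.num_key_value_heads, "query heads per KV head")
```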

Input:

  • Input Type(s): Text
  • Input Format(s): String
  • Input Parameters: 1D
  • Other Properties Related to Input: 4K Context Window Length

Output:

  • Output Type(s): Text
  • Output Format: String
  • Output Parameters: 1D
  • Other Properties Related to Output: Generates text based on input instructions. Knowledge cutoff is around July 2024.

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Supported Operating System(s):

  • Linux
  • Windows
  • macOS (via Hugging Face Transformers library compatibility; see the sketch below)
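Since the model is reachable through the Hugging Face Transformers library, a minimal generation sketch follows. The repository id marin-community/marin-8b-instruct and the presence of a bundled chat template are assumptions; verify both against the Hugging Face model card.

```python
# Minimal sketch: running Marin 8B Instruct via Hugging Face Transformers.
# ASSUMPTIONS: the checkpoint is published as "marin-community/marin-8b-instruct"
# and ships a chat template; confirm both on the Hugging Face model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "marin-community/marin-8b-instruct"  # ASSUMPTION: repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 fits the 8B model on a single modern GPU
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize grouped-query attention in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Keep prompt + generation within the 4K context window noted above.
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```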

Model Version(s):

marin-8b-instruct v1.0

Training, Testing, and Evaluation Datasets:

Training Dataset:

The Marin 8B Instruct model was adapted from marin-8b-base through supervised fine-tuning (SFT) for an additional 5.3 billion tokens.

  • Data Collection Method by dataset: Hybrid: Automated, Human
  • Labeling Method by dataset: Hybrid: Automated, Human
  • Properties: Undisclosed

Datasets used in Marin 8B Base

Nemotron-CC

A full report is available on our ReadTheDocs site.

Datasets used in Marin 8B Instruct

Marin 8B Instruct is currently an SFT-only model, trained on a mixture of supervised fine-tuning datasets; the complete list is given in the full report linked above.

Testing Dataset:

  • Data Collection Method: Undisclosed
  • Labeling Method: Undisclosed
  • Properties: Undisclosed

Evaluation Dataset:

  • Data Collection Method: Undisclosed
  • Labeling Method: Undisclosed
  • Properties: Undisclosed

Base Model Evaluation Results

We ran a suite of standard benchmarks to compare our model with Llama 3.1 8B and the open-source 7-8B models OLMo 2 7B and MAP NEO 7B.
For all benchmarks, we used LM Eval Harness with the default setup for each task. (These numbers may differ from previously reported results due to differences in setup; LM Eval Harness is usually somewhat stricter than other harnesses.) A sketch for reproducing this setup follows the table.

| Benchmark          | Marin 8B Base (Starling) | Llama 3.1 Base | OLMo 2 Base | MAP NEO 7B |
|--------------------|--------------------------|----------------|-------------|------------|
| Average            | 68.3                     | 67.0           | 66.7        | 62.2       |
| AGI Eval LSAT-AR   | 20.9                     | 20.4           | 17.4        | 23.0       |
| ARC Easy           | 86.5                     | 85.8           | 85.0        | 81.1       |
| ARC Challenge      | 63.1                     | 58.9           | 60.7        | 52.0       |
| BBH                | 50.6                     | 46.4           | 44.4        | 42.4       |
| BoolQ              | 85.9                     | 84.2           | 85.5        | 84.7       |
| CommonSense QA     | 79.1                     | 75.2           | 75.4        | 81.7       |
| COPA               | 92.0                     | 92.0           | 89.0        | 82.0       |
| GPQA               | 30.3                     | 32.3           | 26.8        | 27.8       |
| HellaSwag 0-shot   | 82.3                     | 79.4           | 80.5        | 72.5       |
| HellaSwag 10-shot  | 83.6                     | 81.9           | 81.7        | 73.3       |
| lambada_openai     | 74.7                     | 74.7           | 73.1        | 64.6       |
| MMLU 5-shot        | 67.6                     | 66.4           | 63.9        | 58.2       |
| MMLU 0-shot        | 65.9                     | 65.5           | 61.9        | 56.4       |
| MMLU Pro           | 36.5                     | 33.3           | 30.6        | TODO       |
| OpenBookQA         | 44.2                     | 45.8           | 46.2        | 39.4       |
| PIQA               | 84.4                     | 82.9           | 82.5        | 79.0       |
| WinoGrande         | 74.5                     | 74.4           | 74.3        | 66.1       |
| WSC                | 82.1                     | 83.5           | 86.1        | 73.3       |

Marin 8B Base fares well on most tasks.
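For readers who want to reproduce numbers in this style, LM Eval Harness can be driven from Python. The sketch below is an assumption-laden illustration: the simple_evaluate entry point reflects recent EleutherAI lm-evaluation-harness releases, the repository id marin-community/marin-8b-base is a guess, and task names and few-shot counts must be matched to the harness defaults used above.

```python
# Illustrative sketch: scoring a checkpoint with EleutherAI's LM Eval Harness
# (the tool used for the table above). The model id and task list are
# ASSUMPTIONS; align few-shot settings with the harness defaults to compare.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=marin-community/marin-8b-base,dtype=bfloat16",
    tasks=["arc_easy", "arc_challenge", "boolq", "piqa", "winogrande"],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```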

Inference:

  • Engine: TensorRT-LLM (see the sketch below)
  • Test Hardware: L40s
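As a hedged illustration of the engine listed above, TensorRT-LLM's high-level Python LLM API can serve Hugging Face checkpoints directly. The sketch mirrors the TensorRT-LLM quick-start pattern; the model id is an assumption, and engine-build details (precision, parallelism) appropriate for L40S hardware are not covered here.

```python
# Illustrative sketch: offline inference through TensorRT-LLM's Python LLM API,
# following the library's quick-start pattern. The model id is an ASSUMPTION;
# L40S-specific engine build options are out of scope for this sketch.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="marin-community/marin-8b-instruct")
params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=256)

for output in llm.generate(["Write a haiku about GPU inference."], params):
    print(output.outputs[0].text)
```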

Additional Information

  • Developed by: The Marin Project / Marin Community, closely associated with Stanford University's Center for Research on Foundation Models (CRFM).
  • Primary Contact: David Hall (dlwh at stanford.edu) is listed as the primary contact for the Marin 8B models on their Hugging Face model cards.
  • Training Framework: Developed using the stanford-crfm/levanter training framework, which uses JAX and Named Tensors.
  • Training Logs: Public Weights & Biases (W&B) logs are available for the Marin 8B training runs.
  • Tokenizer: stanford-crfm/marin-tokenizer (a variant of the Llama 3 tokenizer; see the sketch after this list).
  • Philosophy: The Marin Community operates as "an open lab for building foundation models collaboratively," emphasizing open sharing of source code, datasets, experimental methodologies, and mistakes.
  • Distinction: Marin Community (AI research project) is distinct from Marin Software (digital advertising company).
  • Training Checkpoints (for base model): Kestrel, Ocelot, Jellyfish, Phoenix, Starling, and deeper-starling (13.7T tokens).
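As a small sanity check on the tokenizer item above: the stanford-crfm/marin-tokenizer repository id comes from this card, while the expected vocabulary size is an assumption inherited from Llama 3.

```python
# Hedged sketch: inspecting the Marin tokenizer, a Llama 3 tokenizer variant.
# The repo id is from this card; the ~128K vocabulary figure is an ASSUMPTION.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("stanford-crfm/marin-tokenizer")
print(tok.vocab_size)                      # expected near Llama 3's 128,256
print(tok.tokenize("Marin 8B Instruct"))   # subword pieces for a sample string
```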

Bias, Risks, and Limitations

Like any base language model or fine-tuned model without safety filtering, these models can easily be prompted by users to generate harmful and sensitive content. Such content may also be produced unintentionally, especially in cases involving bias, so we recommend that users consider the risks when applying this technology. Additionally, statements from Marin, as from any LLM, are often inaccurate, so responses should be verified.

Marin 8B has not undergone any safety tuning or evaluation. We strongly recommend that users use this model with caution and consider the risks when applying this technology. In particular, this model is not intended for fully autonomous use.

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. Please report security vulnerabilities or NVIDIA AI Concerns here.