ibm / granite-3.0-8b-instruct

Granite-3.0-8B-Instruct

Model Summary

Granite-3.0-8B-Instruct is an 8B-parameter model finetuned from Granite-3.0-8B-Base using a combination of permissively licensed open-source instruction datasets and internally collected synthetic datasets. The model was developed with a diverse set of techniques and a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging.

Third-Party Community Consideration

This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the non-NVIDIA Granite-3.0-8B-Base model card.

License/Terms of Use:

GOVERNING TERMS: The trial service is governed by the NVIDIA API Trial Terms of Service; and the use of this model is governed by the NVIDIA AI Foundation Models Community License Agreement. ADDITIONAL INFORMATION: Apache 2.0 License.

Model Architecture:

Architecture Type: [Transformer]

Network Architecture: [Other - Dense]

Granite-3.0-8B-Instruct is based on a decoder-only dense transformer architecture. Core components of this architecture are: grouped-query attention (GQA), rotary position embeddings (RoPE), an MLP with SwiGLU activation, RMSNorm, and shared input/output embeddings.

| Model | 2B Dense | 8B Dense | 1B MoE | 3B MoE |
| --- | --- | --- | --- | --- |
| Embedding size | 2048 | 4096 | 1024 | 1536 |
| Number of layers | 40 | 40 | 24 | 32 |
| Attention head size | 64 | 128 | 64 | 64 |
| Number of attention heads | 32 | 32 | 16 | 24 |
| Number of KV heads | 8 | 8 | 8 | 8 |
| MLP hidden size | 8192 | 12800 | 512 | 512 |
| MLP activation | SwiGLU | SwiGLU | SwiGLU | SwiGLU |
| Number of experts | - | - | 32 | 40 |
| MoE TopK | - | - | 8 | 8 |
| Initialization std | 0.1 | 0.1 | 0.1 | 0.1 |
| Sequence length | 4096 | 4096 | 4096 | 4096 |
| Position embedding | RoPE | RoPE | RoPE | RoPE |
| # Parameters | 2.5B | 8.1B | 1.3B | 3.3B |
| # Active parameters | 2.5B | 8.1B | 400M | 800M |
| # Training tokens | 12T | 12T | 10T | 10T |
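
The SwiGLU activation listed above gates one linear projection of the hidden state with the SiLU of another. As a minimal, illustrative pure-Python sketch (toy vectors only; in the model, `gate` and `value` come from learned projections to the MLP hidden size, e.g. 12800 for 8B Dense):

```python
import math

def silu(x):
    # SiLU (a.k.a. swish): x * sigmoid(x)
    return x / (1.0 + math.exp(-x))

def swiglu(gate, value):
    # SwiGLU: gate the "value" projection elementwise with SiLU of the "gate" projection
    return [silu(g) * v for g, v in zip(gate, value)]

# Toy example with hand-picked numbers, not real activations
print(swiglu([1.0, -2.0, 0.0], [0.5, 3.0, 7.0]))
```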

Usage

Intended use

The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications.

Capabilities

  • Summarization
  • Text classification
  • Text extraction
  • Question-answering
  • Retrieval Augmented Generation (RAG)
  • Code related
  • Function-calling
  • Multilingual dialog use cases
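
For the function-calling capability, tools are typically described to the model as JSON-schema definitions passed through the chat template. The snippet below is a hedged sketch in the OpenAI-style format that `transformers` chat templates commonly accept via `tools=[...]`; the `get_weather` tool and its schema are hypothetical, and the exact schema Granite's template expects should be checked against its documentation:

```python
import json

# Hypothetical tool definition (JSON-schema style); not from the model card
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What is the weather in Boston?"}]

# With a loaded tokenizer, this could be rendered into a prompt as:
# prompt = tokenizer.apply_chat_template(
#     messages, tools=tools, tokenize=False, add_generation_prompt=True)
print(json.dumps(tools, indent=2))
```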

Input:

Input Type(s): Text

Input Format(s): String

Input Parameters: min_tokens, max_tokens, temperature, top_p, stop, frequency_penalty, presence_penalty

Other Properties Related to Input: Supported Languages include
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, Chinese (Simplified)
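
The input parameters listed above map naturally onto an OpenAI-style chat-completions request body. The payload below is a hedged sketch: the model name string, the stop string, and exact field support are assumptions to verify against the API reference of the service you call:

```python
import json

# Hypothetical request body for an OpenAI-compatible chat endpoint.
# Field names follow the input parameters listed above.
payload = {
    "model": "ibm/granite-3.0-8b-instruct",   # assumed model identifier
    "messages": [
        {"role": "user", "content": "Summarize the plot of Moby-Dick in two sentences."}
    ],
    "min_tokens": 1,
    "max_tokens": 256,
    "temperature": 0.2,
    "top_p": 0.9,
    "stop": ["</s>"],                         # stop string is illustrative
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
}
print(json.dumps(payload, indent=2))
```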

Output:

Output Type(s): Text

Output Format: String

Output Parameters: None

Other Properties Related to Output: [None]

Generation

This is a simple example of how to use the Granite-3.0-8B-Instruct model.

Install the following libraries:

pip install torch torchvision torchaudio
pip install accelerate
pip install transformers

Then, copy the snippet from the section that is relevant for your use case.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "ibm-granite/granite-3.0-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
model.eval()
# change input text as desired
chat = [
    { "role": "user", "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# tokenize the text and move it to the model's device
input_tokens = tokenizer(chat, return_tensors="pt").to(model.device)
# generate output tokens
output = model.generate(**input_tokens,
                        max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output)
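
By default, `generate` decodes greedily; the sampling parameters listed under Input can be enabled through keyword arguments. A hedged sketch of the extra kwargs, using the `transformers` generation argument names (the specific values here are illustrative, not recommended settings):

```python
# Sampling configuration for model.generate (transformers argument names)
gen_kwargs = {
    "max_new_tokens": 100,
    "do_sample": True,      # enable sampling; generate() is greedy by default
    "temperature": 0.7,     # illustrative value
    "top_p": 0.9,           # illustrative value
    "repetition_penalty": 1.05,
}
# With the model and inputs from the snippet above:
# output = model.generate(**input_tokens, **gen_kwargs)
print(gen_kwargs)
```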

Training Data

Granite Language Instruct models are trained on a selection of open-source instruction datasets with non-restrictive licenses, as well as a collection of synthetic datasets created by IBM. Together, these instruction datasets provide solid coverage of the following domains: English, multilingual, code, math, tools, and safety.

Infrastructure

We train the Granite Language models on IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models across thousands of GPUs.

Model Version(s):

Granite-Dense-3.0-instruct

Ethical Considerations and Limitations

Granite instruct models are primarily finetuned on instruction-response pairs, mostly in English but also in German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese (Simplified). Because the model has been exposed to multilingual data, it can handle multilingual dialog use cases, though with limited performance on non-English tasks. In such cases, introducing a small number of examples (few-shot prompting) can help the model generate more accurate outputs. The model also inherits the ethical considerations and limitations of its base model.

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications.  When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.  

Please report security vulnerabilities or NVIDIA AI Concerns here.