ai21labs / jamba-1.5-large-instruct

Model Overview

Description:

Jamba 1.5 Large is a state-of-the-art, hybrid SSM-Transformer instruction-following foundation model. It is a Mixture-of-Experts model with 398B total parameters and 94B active parameters.

The Jamba family of models is the most powerful and efficient long-context model family on the market, and the only one with an effective context window of 256K tokens. On long-context inputs, these models deliver up to 2.5X faster inference than leading models of comparable size.

Jamba supports function calling/tool use, structured output (JSON), and grounded generation with citation mode and documents API.
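As a minimal sketch of how these capabilities might be exercised, the snippet below requests a tool call through an OpenAI-compatible chat completions client. The endpoint URL, model slug, API-key handling, and the get_weather tool are assumptions made for illustration; they are not specified by this card.

```python
# Minimal sketch: tool use / function calling with an OpenAI-compatible client.
# The base URL, model slug, and get_weather tool are illustrative assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed API endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # assumed credential location
)

# Hypothetical tool definition; the model decides whether and how to call it.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

completion = client.chat.completions.create(
    model="ai21labs/jamba-1.5-large-instruct",  # assumed model slug
    messages=[{"role": "user", "content": "What's the weather in Tel Aviv right now?"}],
    tools=tools,
)

# If the model chose to call the tool, the structured call arrives here.
print(completion.choices[0].message.tool_calls)
```

Structured JSON output and grounded generation with documents follow the same request pattern via additional request fields; refer to AI21's documentation for the exact parameters.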

Jamba officially supports English, French, Spanish, Portuguese, German, Arabic and Hebrew, but can also work in many other languages.

Third-Party Community Consideration:

This model is not owned or developed by NVIDIA. This model has been developed and built to a third-party’s requirements for this application and use case. Jamba 1.5 Large is developed by AI21 Labs and is available under the Jamba Open Model License for research and non-commercial use. For commercial use requiring self-deployment, a Jamba Commercial License must be acquired by contacting AI21 Labs.

Terms of Use

GOVERNING TERMS: This trial service is governed by the NVIDIA API Trial Terms of Service, and the use of this model is governed by the Jamba Open Model License Agreement.

Reference(s):

Jamba 1.5 blogpost

Model Architecture:

Architecture Type: Jamba (Joint Attention and Mamba)

Network Architecture: Jamba

Model Version: 1.5

Input:

Input Type: Text

Input Format: String

Input Parameters: One Dimensional (1D); request controls include Max Tokens, Temperature, and Top P (see the request sketch after the Output section)

Max Input Tokens: 256,000

Output:

Output Type: Text

Output Format: String

Output Parameters: One Dimensional (1D)

Max Output Tokens: 256,000
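
As a rough illustration of how the parameters listed above map onto a request, the sketch below passes max tokens, temperature, and top p through an OpenAI-compatible client. The endpoint URL, model slug, and chosen parameter values are assumptions for illustration only.

```python
# Minimal sketch: passing the sampling controls listed above in a chat request.
# Endpoint, model slug, and the chosen parameter values are illustrative assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed API endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # assumed credential location
)

completion = client.chat.completions.create(
    model="ai21labs/jamba-1.5-large-instruct",  # assumed model slug
    messages=[{"role": "user",
               "content": "Summarize the Jamba hybrid SSM-Transformer design in two sentences."}],
    max_tokens=1024,   # bounded well below the 256,000-token ceiling listed above
    temperature=0.2,
    top_p=0.9,
)

print(completion.choices[0].message.content)
```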

Software Integration:

Supported Hardware Platform(s): NVIDIA Ampere, NVIDIA Hopper

Supported Operating System(s): Linux

Benchmarks:

| Category | Metric | Score |
|---|---|---|
| General | Arena Hard | 65.4 |
| General | MMLU (CoT) | 81.2 |
| General | MMLU Pro (CoT) | 53.5 |
| General | IFEval | 81.5 |
| General | BBH | 65.5 |
| General | WildBench | 48.4 |
| Reasoning | ARC-C | 93 |
| Reasoning | GPQA | 36.9 |
| Math, Code & Tool use | GSM8K | 87 |
| Math, Code & Tool use | HumanEval | 71.3 |
| Math, Code & Tool use | BFCL | 85.5 |

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns here.