Model Overview
Description
snowflake-arctic-embed is a suite of high-quality text embedding models optimized for retrieval performance. These models are ready for commercial use free of charge.
The snowflake-arctic-embed models achieve state-of-the-art performance on the MTEB/BEIR leaderboard for each of their size variants. As shown below, each model size class achieves SOTA retrieval accuracy compared to other top models.
The models are trained by leveraging existing open-source text representation models, such as bert-base-uncased, in a multi-stage pipeline that optimizes their retrieval performance. Following pretraining, the models are further optimized with extended training on a smaller dataset (about 1M samples) of (query, positive document, negative document) triplets derived from hard negative mining. The negative mining and data curation are crucial to retrieval accuracy.
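To illustrate why hard negative mining matters, here is a minimal, hypothetical sketch of a triplet margin loss over (query, positive, negative) embeddings. This is not the model's actual training objective or code, just a toy example: a "hard" negative that sits close to the query still produces a learning signal, while an easy negative contributes nothing.

```python
import numpy as np

def triplet_margin_loss(query, positive, negative, margin=0.5):
    """Toy triplet loss over cosine similarities (illustrative only).

    Pushes the query-positive similarity above the query-negative
    similarity by at least `margin`.
    """
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    return max(0.0, margin - cos(query, positive) + cos(query, negative))

# A mined "hard" negative is still similar to the query, so it incurs
# loss and drives learning; an easy negative does not.
q = np.array([1.0, 0.0])
pos = np.array([0.9, 0.1])
hard_neg = np.array([0.7, 0.7])   # similar to the query: informative
easy_neg = np.array([-1.0, 0.0])  # dissimilar: contributes no loss

print(triplet_margin_loss(q, pos, hard_neg) > 0)   # True
print(triplet_margin_loss(q, pos, easy_neg) == 0)  # True
```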
| Name | MTEB Retrieval Score (NDCG @ 10) | Parameters (Millions) | Embedding Dimension |
|---|---|---|---|
| snowflake-arctic-embed-xs | 50.15 | 22 | 384 |
| snowflake-arctic-embed-s | 51.98 | 33 | 384 |
| snowflake-arctic-embed-m | 54.90 | 110 | 768 |
| snowflake-arctic-embed-m-long | 54.83 | 137 | 768 |
| snowflake-arctic-embed-l | 55.98 | 335 | 1024 |
Based on the intfloat/e5-large-unsupervised model, the large model is a direct drop-in replacement for closed-source APIs and delivers the most accurate retrieval experience of the suite.
| Model Name | MTEB Retrieval Score (NDCG @ 10) |
|---|---|
| snowflake-arctic-embed-l | 55.98 |
| Google-gecko-text-embedding | 55.7 |
| text-embedding-3-large | 55.44 |
| Cohere-embed-english-v3.0 | 55.00 |
| UAE-Large-V1 | 54.66 |
| mxbai-embed-large-v1 | 54.39 |
| bge-large-en-v1.5 | 54.29 |
| e5-Large-v2 | 50.56 |
Terms of use
Arctic is licensed under the Apache-2.0 license.
Model Architecture
Architecture Type: Transformer
Network Architecture: Fine-tuned E5-Large-Unsupervised Retriever
Input
Input Type: Text
Input Format: List of strings
Output
Output Type: Floating point
Output Format: List of float arrays
Other Properties Related to Output: Each array contains the embeddings for the corresponding input string.
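Since each input string maps to one float array (e.g. 1024 dimensions for the large model), retrieval reduces to ranking documents by similarity between embedding vectors. The sketch below, with tiny stand-in vectors rather than real model outputs, shows cosine-similarity ranking, a common choice for normalized text embeddings.

```python
import numpy as np

def rank_by_cosine(query_emb, doc_embs):
    """Rank document embeddings by cosine similarity to a query
    embedding; returns document indices, best match first."""
    q = query_emb / np.linalg.norm(query_emb)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    scores = d @ q                 # cosine similarity per document
    return np.argsort(-scores)     # descending order of similarity

# Tiny stand-in vectors (real embeddings would come from the model).
query = np.array([0.2, 0.9, 0.1])
docs = np.array([
    [0.1, 0.8, 0.2],  # doc 0: close to the query
    [0.9, 0.1, 0.0],  # doc 1: unrelated
])
print(rank_by_cosine(query, docs))  # [0 1]
```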
Model Version
snowflake-arctic-embed-l
Supported Operating System(s):
- Linux
Training Dataset:
Properties (Quantity, Dataset Descriptions, Sensor(s)): Pretrained on large batches of query-document pairs with negatives derived in-batch; pretraining leverages approximately 400M samples drawn from a mix of public datasets and proprietary web search data.
Inference:
Engine: TensorRT-LLM with Triton
Test Hardware: L40