microsoft / phi-3-small-128k-instruct

Model Summary

Developer: Microsoft GenAI
Description: Phi-3-Small is a lightweight, state-of-the-art open model built upon datasets used for Phi-2 (synthetic data and filtered publicly available websites) with a focus on very high-quality, reasoning-dense data. The model belongs to the Phi-3 model family, and the small version comes in two variants, 8K and 128K, which denote the context length (in tokens) each can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. This model is ready for commercial and research use.
License: MIT
Third-Party Community Consideration: This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case.
Architecture: Phi-3 Small has 7B parameters and is a dense decoder-only Transformer model with the tiktoken tokenizer.
Inputs: Text
Context length: 128K tokens
GPUs: 1024 H100-80G
Training time: 18 days
Training data: 4.8T tokens
Outputs: Generated text in response to the input
Dates: The models were trained between February 2024 and April 2024
Status: This is a static model trained on an offline dataset with a cutoff date of October 2023 for publicly available data. Future versions of the tuned models may be released as the authors improve them.

Intended Use

Primary use cases: The model is intended for applications that require 1) memory/compute-constrained environments; 2) latency-bound scenarios; 3) strong reasoning (especially math and logic). The model is designed to accelerate research on language and multimodal models and to serve as a building block for generative-AI-powered features.
Out-of-scope use cases: The models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and should evaluate and mitigate for accuracy, safety, and fairness before using the model within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws and regulations (including privacy and trade compliance laws) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction of or modification to the license the model is released under.

Data Overview

Training datasets

The training data includes a wide variety of sources, totaling 4.8 trillion tokens (including 10% multilingual), and is a combination of 1) publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) newly created synthetic, "textbook-like" data for the purpose of teaching math, coding, common-sense reasoning, and general knowledge of the world (science, daily activities, theory of mind, etc.); 3) high-quality chat-format supervised data covering various topics to reflect human preferences on aspects such as instruction-following, truthfulness, honesty, and helpfulness.

Benchmark datasets

The authors evaluated the model across a breadth of public and internal benchmarks to understand its capabilities as comprehensively as possible under multiple tasks and conditions. More specifically:

  • Reasoning:

    • Winogrande: commonsense reasoning around pronoun resolution

    • PIQA: physical commonsense reasoning around everyday situations

    • ARC-easy, ARC-challenge: grade-school multiple choice science questions at easy and challenge level

    • CommonsenseQA: generic commonsense questions

    • MedQA: medical question answering

    • Social IQA: social commonsense intelligence

    • BoolQ: natural questions from context

    • TruthfulQA: grounded reasoning

  • Language understanding:

    • HellaSwag: commonsense natural language inference around everyday events

    • ANLI: adversarial natural language inference

    • LAMBADA: word prediction given a passage.

  • World knowledge:

    • Natural Questions: questions about Wikipedia knowledge

    • TriviaQA: trivia questions on general topics

  • Math:

    • GSM8K: grade-school math word problems

    • GSM8K Hard: grade-school math word problems with large values and some absurdity.

    • MATH: challenging competition math problems

  • Code:

    • HumanEval, MBPP: python coding tasks

    • Spider: SQL query tasks

  • Multilingual:

    • MGSM: multilingual grade-school math

    • MEGA: multilingual NLP tasks

  • Popular aggregated datasets: MMLU, BigBench-Hard, AGI Eval

  • Long context:

    • GovReport, QMSum, SQuALITY, SummScreenFD: long context summarization

    • Qasper: long context question answering

  • Multi-turn conversations:

    • Data generated by an in-house adversarial conversation simulation tool

  • Single-turn trustworthiness evaluation:

    • DecodingTrust: a collection of trustworthiness benchmarks across eight different perspectives

    • XSTest: exaggerated safety evaluation

    • Toxigen: adversarial and hate speech detection

  • Red Team:

    • Responses to prompts provided by the AI Red Team at Microsoft

Safety

Approach

The Phi-3 family of models has adopted a robust safety post-training approach. This approach leverages a variety of both open-source and in-house generated datasets. The overall safety alignment technique is a combination of SFT (Supervised Fine-Tuning) and a modified version of RLHF (Reinforcement Learning from Human Feedback), utilizing human-labeled and synthetic datasets, including publicly available datasets focusing on helpfulness and harmlessness as well as various questions and answers targeting multiple safety categories.

Safety Evaluation and Red-Teaming

Prior to release, the Phi-3 family of models followed a multi-faceted evaluation approach. Quantitative evaluation was conducted with multiple open-source safety benchmarks and in-house tools utilizing adversarial conversation simulation. For qualitative safety evaluation, the authors collaborated with the AI Red Team at Microsoft to assess safety risks posed by Phi-3-small in both average and adversarial user scenarios. The assessment covered eight predetermined risk categories, with automated scoring followed by thorough manual reviews of the model responses.

Please refer to the technical report for more details of the safety alignment.

Model Quality

To understand its capabilities, the authors compare Phi-3 Small with a set of models over a variety of benchmarks using the internal benchmark platform BabelBench (see Appendix A for benchmark methodology).

A high-level overview of the model quality on representative benchmarks:

| Category | Benchmark | Phi-3 Small-128K-Instruct | Gemma-7B | Mixtral-8x7B | Llama-3-8B-Instruct | GPT-3.5-Turbo-1106 | Gemini Pro | GPT-4-Turbo-1106 (Chat) |
|---|---|---|---|---|---|---|---|---|
| Popular aggregated benchmark | AGI Eval | 43.9 | 42.1 | 45.2 | 42 | 48.4 | 49.0 | 59.6 |
| | MMLU | 75.5 | 63.6 | 70.5 | 66.5 | 71.4 | 66.7 | 84.0 |
| | BigBench Hard | 77.6 | 59.6 | 69.7 | 51.5 | 68.3 | 75.6 | 87.7 |
| Language Understanding | ANLI | 55.8 | 48.7 | 55.2 | 57.3 | 58.1 | 64.2 | 71.7 |
| | HellaSwag | 79.6 | 49.8 | 70.4 | 71.1 | 78.8 | 76.2 | 88.3 |
| Reasoning | ARC Challenge | 90.8 | 78.3 | 87.3 | 82.8 | 87.4 | 88.3 | 95.6 |
| | ARC Easy | 97.3 | 91.4 | 95.6 | 93.4 | 96.3 | 96.1 | 98.8 |
| | BoolQ | 83.7 | 66 | 76.6 | 80.9 | 79.1 | 86.4 | 91.3 |
| | CommonsenseQA | 80.8 | 76.2 | 78.1 | 79 | 79.6 | 81.8 | 86.7 |
| | MedQA | 46.3 | 49.6 | 62.2 | 60.5 | 63.4 | 58.2 | 83.7 |
| | OpenBookQA | 87.8 | 78.6 | 85.8 | 82.6 | 86 | 86.4 | 93.4 |
| | PIQA | 88.1 | 78.1 | 86 | 75.7 | 86.6 | 86.2 | 90.1 |
| | Social IQA | 78.7 | 65.5 | 75.9 | 73.9 | 68.3 | 75.4 | 81.7 |
| | TruthfulQA (MC2) | 69.6 | 52.1 | 60.1 | 63.2 | 67.7 | 72.6 | 85.2 |
| | WinoGrande | 80.1 | 55.6 | 62 | 65 | 68.8 | 72.2 | 86.7 |
| Factual Knowledge | TriviaQA | 66.0 | 72.3 | 82.2 | 67.7 | 85.8 | 80.2 | 73.3 |
| Math | GSM8K Chain of Thought | 87.3 | 59.8 | 64.7 | 77.4 | 78.1 | 80.4 | 94.2 |
| | MATH | 29.4 | 24.5 | 28.5 | 23.3 | 42.6 | 30.8 | 56.9 |
| Code Generation | HumanEval | 59.1 | 34.1 | 37.8 | 60.4 | 62.2 | 64.4 | 79.9 |
| | MBPP | 70.3 | 51.5 | 60.2 | 67.7 | 77.8 | 73.2 | 86.7 |
| Average | | 72.4 | 59.9 | 67.7 | 67.1 | 72.7 | 73.2 | 83.8 |

The authors take a closer look at different categories across 80 public benchmark datasets in the table below:

| Category | Phi-3-Small-128K-Instruct | Gemma-7B | Mixtral 8x7B | Llama-3-8B-Instruct | GPT-3.5-Turbo-1106 | Gemini Pro | GPT-4-Turbo-1106 (Chat) |
|---|---|---|---|---|---|---|---|
| Popular aggregated benchmark | 70.6 | 59.4 | 66.2 | 59.9 | 67.0 | 67.5 | 80.5 |
| Reasoning | 80.3 | 69.1 | 77.0 | 75.7 | 78.3 | 80.4 | 89.3 |
| Language understanding | 67.4 | 58.4 | 64.9 | 65.4 | 70.4 | 75.3 | 81.6 |
| Code generation | 60.0 | 45.6 | 52.7 | 56.4 | 70.4 | 66.7 | 76.1 |
| Math | 48.1 | 35.8 | 40.3 | 41.1 | 52.8 | 50.9 | 67.1 |
| Factual knowledge | 41.7 | 46.7 | 58.6 | 43.1 | 63.4 | 54.6 | 45.9 |
| Multilingual | 62.6 | 63.2 | 63.4 | 65.0 | 69.1 | 76.5 | 82.0 |
| Robustness | 68.7 | 38.4 | 51.0 | 64.5 | 69.3 | 69.7 | 84.6 |

Overall, Phi-3 Small-128K-Instruct, with only 7B parameters, achieves a similar level of language understanding and math ability as much larger models. Moreover, it outperforms bigger models in reasoning capability, trailing only GPT-4-Turbo. However, it is still fundamentally limited by its size for certain tasks: the model simply does not have the capacity to store extensive world knowledge, as seen, for example, in its low performance on TriviaQA. The authors believe this weakness can be mitigated by augmenting Phi-3-Small with a search engine.

Long Context

Phi-3 Small-128K-Instruct supports a 128K context length, so the model is capable of several long context tasks, including long document/meeting summarization and long document question answering (QA). The authors find that, with just 7B parameters, Phi-3 Small outperforms models of the same parameter size and is competitive with much bigger models such as Mixtral 8x7B.

| Benchmark | Phi-3 Small-128K-Instruct | Mistral-7B | Mixtral 8x7B | LLaMA-3-8B-Instruct | Gemini Pro | GPT-4-Turbo-1106 (Chat) |
|---|---|---|---|---|---|---|
| GovReport | 23.2 | 4.9 | 20.3 | 10.3 | 25.1 | 26.2 |
| QMSum | 18.4 | 15.5 | 20.6 | 2.9 | 22.7 | 23.5 |
| Qasper | 19.7 | 23.5 | 26.6 | 8.1 | 41.4 | 42.3 |
| SQuALITY | 22.4 | 14.7 | 16.2 | 25 | 23 | 26.1 |
| SummScreenFD | 11.0 | 9.3 | 11.3 | 5.1 | 16.2 | 19 |
| Average | 18.9 | 13.6 | 19.0 | 10.3 | 25.7 | 27.4 |

Usage

Input formats

Given the nature of the training data, the Phi-3 Small-128K-Instruct model is best suited for prompts using the chat format as follows:

<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
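If prompts are built programmatically, the tokenizer's chat template can produce this format. Below is a minimal sketch using the Hugging Face transformers API; it assumes, as in the loading example below, that trust_remote_code=True is needed for this model's custom tokenizer:

from transformers import AutoTokenizer

# Build the chat-format prompt string from a list of messages.
tokenizer = AutoTokenizer.from_pretrained(
    "microsoft/Phi-3-small-128k-instruct", trust_remote_code=True
)
messages = [
    {"role": "user", "content": "How to explain Internet for a medieval knight?"}
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # should end with the <|assistant|> generation cue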

Loading the model locally

After obtaining the Phi-3 Small-128K-Instruct model checkpoints, users can use this sample code for inference.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

model_id = "microsoft/Phi-3-small-128k-instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
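The generation arguments above request deterministic greedy decoding (do_sample=False with temperature 0.0). For more varied output, a sampling configuration along these lines can be substituted; the values are illustrative, not tuned recommendations:

generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.7,  # illustrative: higher values increase randomness
    "top_p": 0.9,        # illustrative nucleus-sampling cutoff
    "do_sample": True,   # sample instead of greedy decoding
}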

Cross Platform Support

ONNX Runtime now supports Phi-3 Small models across platforms and hardware.

Optimized Phi-3 models are also published here in ONNX format. The ONNX models provided run with ONNX Runtime on GPU across server platforms. Support for DML (for Windows GPU), CPU, and mobile variants will be added later.

Here are some of the optimized configurations the authors have added:

  1. ONNX model for fp16 CUDA

  2. ONNX model for int4 CUDA: quantized to int4 via round-to-nearest (RTN) quantization
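For illustration, here is a rough sketch of running one of the published ONNX models with the onnxruntime-genai Python package. The API has changed between releases, so treat these calls as indicative rather than definitive (this follows the newer append_tokens style), and the model directory path is a placeholder:

import onnxruntime_genai as og

# Placeholder path to a downloaded ONNX model directory.
model = og.Model("./phi-3-small-128k-instruct-onnx")
tokenizer = og.Tokenizer(model)

prompt = "<|user|>\nHow to explain Internet for a medieval knight?<|end|>\n<|assistant|>\n"

params = og.GeneratorParams(model)
params.set_search_options(max_length=500)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode(prompt))

# Generate until an end-of-sequence token or the length limit is reached.
while not generator.is_done():
    generator.generate_next_token()

print(tokenizer.decode(generator.get_sequence(0)))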

Responsible AI Considerations

Like other language models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:

  • Quality of Service: The Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.

  • Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.

  • Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.

  • Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.

  • Limited Scope for Code: The majority of Phi-3 training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, the authors strongly recommend that users manually verify all API uses.

Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:

  • Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.

  • High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.

  • Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case-specific, contextual information, a technique known as Retrieval Augmented Generation (RAG); a minimal sketch follows this list.

  • Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.

  • Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
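To make the RAG suggestion above concrete, here is a minimal sketch. The retrieve function is hypothetical, standing in for whatever use-case-specific document store the application provides:

def retrieve(query: str) -> list[str]:
    """Hypothetical retriever: return passages relevant to the query
    from a use-case-specific document store."""
    raise NotImplementedError

def grounded_prompt(question: str) -> str:
    # Prepend retrieved context so the model answers from supplied,
    # up-to-date documents rather than from parametric memory alone.
    context = "\n".join(retrieve(question))
    return (
        "<|user|>\nAnswer using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}<|end|>\n<|assistant|>\n"
    )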

Appendix A: Benchmark Methodology

The authors include a brief word on methodology here, in particular how they think about optimizing prompts.

In an ideal world, the authors would never change any prompts in the benchmarks, to ensure an apples-to-apples comparison when evaluating different models. Indeed, this is the default approach, and it is the case for the vast majority of models the authors have run to date.

There are, however, some exceptions to this. In some cases, the authors see a model that performs worse than expected on a given evaluation due to a failure to respect the output format. For example:

  • A Claude model may refuse to answer questions (for no apparent reason), or in coding tasks models may prefix their response with “Sure, I can help with that. …” which may break the parser. In such cases, the authors have opted to try different system messages (e.g. “You must always respond to a question” or “Get to the point!”).

  • With LLaMA-1 models, the authors observed that few-shot examples actually hurt model performance. In this case, they did allow running the benchmarks with 0 shots for all cases.

  • The authors have tools to convert between chat and completions APIs. When converting a chat prompt to a completion prompt, some models expect different keywords, e.g. Human vs. User. In these cases, the authors do allow model-specific mappings for chat-to-completion prompts (a minimal sketch of such a mapping follows this list).
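A minimal sketch of such a role-keyword mapping; the style names and keywords here are illustrative, not the authors' actual tooling:

# Illustrative mapping from chat roles to completion-prompt keywords.
ROLE_KEYWORDS = {
    "claude-style": {"user": "Human", "assistant": "Assistant"},
    "default": {"user": "User", "assistant": "Assistant"},
}

def chat_to_completion(messages: list[dict], style: str = "default") -> str:
    # Render each chat message as "<Keyword>: <content>" and end with
    # the assistant keyword so the model continues as the assistant.
    roles = ROLE_KEYWORDS[style]
    lines = [f"{roles[m['role']]}: {m['content']}" for m in messages]
    return "\n\n".join(lines) + f"\n\n{roles['assistant']}:"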

However, the authors do not:

  • Pick different few-shot examples. Few-shot examples are always the same when comparing different models.

  • Change prompt format: e.g. if it is an A/B/C/D multiple choice, the authors do not tweak this to 1/2/3/4 multiple choice.