# Phi

[Phi](https://huggingface.co/papers/2306.11644) is a 1.3B parameter transformer model optimized for Python code generation. It focuses on "textbook-quality" training data of code examples, exercises and synthetic Python problems rather than scaling the model size or compute.

The Phi-1 model was proposed in [Textbooks Are All You Need](https://arxiv.org/abs/2306.11644) by Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee and Yuanzhi Li.

The Phi-1.5 model was proposed in [Textbooks Are All You Need II: phi-1.5 technical report](https://arxiv.org/abs/2309.05463) by Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar and Yin Tat Lee.

### Summary
In the Phi-1 and Phi-1.5 papers, the authors showed how important the quality of the training data is relative to the model size.
They selected high-quality "textbook" data alongside synthetically generated data for training their small Transformer-based
model Phi-1 with 1.3B parameters. Despite this small scale, Phi-1 attains 50.6% pass@1 accuracy on HumanEval and 55.5% on MBPP.
They followed the same strategy for Phi-1.5 and created another 1.3B parameter model with performance on natural language tasks comparable
to models 5x larger, surpassing most non-frontier LLMs. Phi-1.5 exhibits many of the traits of much larger LLMs, such as the ability
to “think step by step” or perform some rudimentary in-context learning.
With these two experiments, the authors demonstrated the significant impact of training data quality when training machine learning models.

The abstract from the Phi-1 paper is the following:

*We introduce phi-1, a new large language model for code, with significantly smaller size than
competing models: phi-1 is a Transformer-based model with 1.3B parameters, trained for 4 days on
8 A100s, using a selection of “textbook quality” data from the web (6B tokens) and synthetically
generated textbooks and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays surprising emergent
properties compared to phi-1-base, our model before our finetuning stage on a dataset of coding
exercises, and phi-1-small, a smaller model with 350M parameters trained with the same pipeline as
phi-1 that still achieves 45% on HumanEval.*

The abstract from the Phi-1.5 paper is the following:

*We continue the investigation into the power of smaller Transformer-based language models as
initiated by TinyStories – a 10 million parameter model that can produce coherent English – and
the follow-up work on phi-1, a 1.3 billion parameter model with Python coding performance close
to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to
generate “textbook quality” data as a way to enhance the learning process compared to traditional
web data. We follow the “Textbooks Are All You Need” approach, focusing this time on common
sense reasoning in natural language, and create a new 1.3 billion parameter model named phi-1.5,
with performance on natural language tasks comparable to models 5x larger, and surpassing most
non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic
coding. More generally, phi-1.5 exhibits many of the traits of much larger LLMs, both good –such
as the ability to “think step by step” or perform some rudimentary in-context learning– and bad,
including hallucinations and the potential for toxic and biased generations –encouragingly though, we
are seeing improvement on that front thanks to the absence of web data. We open-source phi-1.5 to
promote further research on these urgent topics.*

This model was contributed by [Susnato Dhar](https://huggingface.co/susnato).

The original code for Phi-1, Phi-1.5 and Phi-2 can be found [here](https://huggingface.co/microsoft/phi-1), [here](https://huggingface.co/microsoft/phi-1_5) and [here](https://huggingface.co/microsoft/phi-2), respectively.

You can find all the original Phi checkpoints under the [Phi-1](https://huggingface.co/collections/microsoft/phi-1-6626e29134744e94e222d572) collection.
## Usage tips
- This model is quite similar to `Llama`, with the main difference being [`PhiDecoderLayer`], which uses [`PhiAttention`] and [`PhiMLP`] layers in a parallel configuration.
- The tokenizer used for this model is identical to the [`CodeGenTokenizer`].
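- As a minimal sketch of both points (the config sizes below are illustrative, and the checkpoint is assumed to be `microsoft/phi-1_5`), you can instantiate a randomly initialized Phi model and inspect the tokenizer class:

  ```py
  from transformers import PhiConfig, PhiModel, AutoTokenizer

  # small, randomly initialized Phi model (illustrative sizes, not the released config)
  config = PhiConfig(num_hidden_layers=2, hidden_size=256, intermediate_size=512, num_attention_heads=8)
  model = PhiModel(config)

  # the Phi checkpoints reuse the CodeGen tokenizer
  tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")
  print(type(tokenizer).__name__)  # expected: a CodeGen tokenizer class
  ```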
## How to use Phi-2
<Tip warning={true}>

Phi-2 has been integrated in the development version (4.37.0.dev) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from source.

</Tip>

> [!TIP]
> Click on the Phi models in the right sidebar for more examples of how to apply Phi to different language tasks.

The example below demonstrates how to generate text with [`Pipeline`], [`AutoModel`] and from the command line.
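The [`Pipeline`] and [`AutoModel`] snippets below are minimal sketches rather than exact recipes: the `microsoft/phi-1_5` checkpoint, dtypes, prompts and generation settings are illustrative and can be swapped for any other Phi checkpoint.

<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import torch
from transformers import pipeline

# text-generation pipeline with the phi-1.5 checkpoint (illustrative settings;
# device_map="auto" requires the accelerate package)
generator = pipeline(
    "text-generation",
    model="microsoft/phi-1_5",
    torch_dtype=torch.float16,
    device_map="auto",
)
output = generator("def print_prime(n):", max_new_tokens=100)
print(output[0]["generated_text"])
```

</hfoption>
<hfoption id="AutoModel">

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# load the model and tokenizer (checkpoint and dtype are illustrative)
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-1_5",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

# example prompt (taken from the original Phi-2 snippet)
inputs = tokenizer(
    "Can you help me write a formal email to a potential business partner proposing a joint venture?",
    return_tensors="pt",
    return_attention_mask=False,
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```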
</hfoption>
<hfoption id="transformers-cli">
```bash
echo -e "'''def print_prime(n): """ Print all primes between 1 and n"""'''" | transformers-cli run --task text-generation --model microsoft/phi-1_5 --device 0
```
</hfoption>
</hfoptions>
First, make sure to install the latest version of Flash Attention 2 (`pip install -U flash-attn --no-build-isolation`). Also make sure that your hardware is compatible with Flash Attention 2 (see the official documentation of the [flash-attn](https://github.com/Dao-AILab/flash-attention) repository) and load your model in half-precision (e.g. `torch.float16`).

To load and run a model using Flash Attention 2, pass `attn_implementation="flash_attention_2"` to [`~AutoModel.from_pretrained`]. The snippet below is a minimal sketch using the `microsoft/phi-1` checkpoint in half-precision:
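```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# illustrative: load phi-1 in half-precision with Flash Attention 2 enabled
# (device_map="auto" requires the accelerate package)
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-1",
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1")

inputs = tokenizer("def print_prime(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```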
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the `microsoft/phi-1` checkpoint and the Flash Attention 2 version of the model, using a sequence length of 2048.

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.
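As a rough sketch (the checkpoint and settings below are illustrative), Phi-2 can be loaded in 4-bit with bitsandbytes:

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# illustrative 4-bit configuration (requires the bitsandbytes package)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=quantization_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
```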
- If you're using Transformers < 4.37.0.dev, set `trust_remote_code=True` in [`~AutoModel.from_pretrained`]. Otherwise, make sure you update Transformers to the latest stable version.
  ```py
  import torch
  from transformers import AutoTokenizer, AutoModelForCausalLM

  # on older versions, trust_remote_code=True is needed to load the custom Phi code
  model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1", torch_dtype=torch.float16, trust_remote_code=True)
  tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1", trust_remote_code=True)
  ```