Commit 93a9463

ricalanis authored and stevhliu committed
Update falcon model card (huggingface#37184)
* feat: updated model card for falcon
* fix:rewrite model description
* fix: add link to conversion script
* Update docs/source/en/model_doc/falcon.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/falcon.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/falcon.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/falcon.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/falcon.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/falcon.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/falcon.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/falcon.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* fix: Add suggested changes
* fix: typo in link for quantization
* Update docs/source/en/model_doc/falcon.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/falcon.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* fix: fix indent and close ticks
* fix: add indent

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
1 parent 4cba52c commit 93a9463

File tree

1 file changed: +94 -31 lines changed

docs/source/en/model_doc/falcon.md

Lines changed: 94 additions & 31 deletions
@@ -14,48 +14,113 @@ rendered properly in your Markdown viewer.
 
 -->
 
+<div style="float: right;">
+    <div class="flex flex-wrap space-x-1">
+        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
+        <img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
+        <img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
+    </div>
+</div>
+
 # Falcon
 
-<div class="flex flex-wrap space-x-1">
-<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
-<img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
-<img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
-</div>
+[Falcon](https://huggingface.co/papers/2311.16867) is a family of large language models, available in 7B, 40B, and 180B parameters, as pretrained and instruction tuned variants. This model focuses on scaling pretraining over three categories, performance, data, and hardware. Falcon uses multigroup attention to significantly reduce inference memory requirements and rotary positional embeddings (RoPE). These models are pretrained on [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality and deduplicated 5T token dataset.
+
+You can find all the original Falcon checkpoints under the [Falcon](https://huggingface.co/collections/tiiuae/falcon-64fb432660017eeec9837b5a) collection.
+
+> [!TIP]
+> Click on the Falcon models in the right sidebar for more examples of how to apply Falcon to different language tasks.
+
+The example below demonstrates how to generate text with [`Pipeline`], [`AutoModel`], and from the command line.
+
+<hfoptions id="usage">
+<hfoption id="Pipeline">
+
+```py
+import torch
+from transformers import pipeline
+
+pipeline = pipeline(
+    task="text-generation",
+    model="tiiuae/falcon-7b-instruct",
+    torch_dtype=torch.bfloat16,
+    device=0
+)
+pipeline(
+    "Write a short poem about coding",
+    max_length=100,
+    do_sample=True,
+    temperature=0.7
+)
+```
 
-## Overview
+</hfoption>
+<hfoption id="AutoModel">
 
-Falcon is a class of causal decoder-only models built by [TII](https://www.tii.ae/). The largest Falcon checkpoints
-have been trained on >=1T tokens of text, with a particular emphasis on the [RefinedWeb](https://arxiv.org/abs/2306.01116)
-corpus. They are made available under the Apache 2.0 license.
+```py
+import torch
+from transformers import AutoTokenizer, AutoModelForCausalLM
 
+tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct")
+model = AutoModelForCausalLM.from_pretrained(
+    "tiiuae/falcon-7b-instruct",
+    torch_dtype=torch.bfloat16,
+    device_map="auto",
+    attn_implementation="sdpa",
+)
 
-Falcon's architecture is modern and optimized for inference, with multi-query attention and support for efficient
-attention variants like `FlashAttention`. Both 'base' models trained only as causal language models as well as
-'instruct' models that have received further fine-tuning are available.
+input_ids = tokenizer("Write a short poem about coding", return_tensors="pt").to("cuda")
 
+output = model.generate(**input_ids)
+print(tokenizer.decode(output[0], skip_special_tokens=True))
+```
 
-Falcon models are (as of 2023) some of the largest and most powerful open-source language models,
-and consistently rank highly in the [OpenLLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
+</hfoption>
+<hfoption id="transformers-cli">
 
-## Converting custom checkpoints
+```bash
+# pip install -U flash-attn --no-build-isolation
+transformers-cli chat --model_name_or_path tiiuae/falcon-7b-instruct --torch_dtype auto --attn_implementation flash_attention_2 --device 0
+```
 
-<Tip>
+</hfoption>
+</hfoptions>
 
-Falcon models were initially added to the Hugging Face Hub as custom code checkpoints. However, Falcon is now fully
-supported in the Transformers library. If you fine-tuned a model from a custom code checkpoint, we recommend converting
-your checkpoint to the new in-library format, as this should give significant improvements to stability and
-performance, especially for generation, as well as removing the need to use `trust_remote_code=True`!
+Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.
 
-</Tip>
+The example below uses [bitsandbytes](../quantization/bitsandbytes) to only quantize the weights to 4-bits.
 
-You can convert custom code checkpoints to full Transformers checkpoints using the `convert_custom_code_checkpoint.py`
-script located in the
-[Falcon model directory](https://github.com/huggingface/transformers/tree/main/src/transformers/models/falcon)
-of the Transformers library. To use this script, simply call it with
-`python convert_custom_code_checkpoint.py --checkpoint_dir my_model`. This will convert your checkpoint in-place, and
-you can immediately load it from the directory afterwards with e.g. `from_pretrained()`. If your model hasn't been
-uploaded to the Hub, we recommend making a backup before attempting the conversion, just in case!
+```python
+import torch
+from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
 
+quantization_config = BitsAndBytesConfig(
+    load_in_4bit=True,
+    bnb_4bit_compute_dtype=torch.bfloat16,
+    bnb_4bit_quant_type="nf4",
+    bnb_4bit_use_double_quant=True,
+)
+
+tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
+model = AutoModelForCausalLM.from_pretrained(
+    "tiiuae/falcon-7b",
+    torch_dtype=torch.bfloat16,
+    device_map="auto",
+    quantization_config=quantization_config,
+)
+
+inputs = tokenizer("In quantum physics, entanglement means", return_tensors="pt").to("cuda")
+outputs = model.generate(**inputs, max_new_tokens=100)
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```
+
+## Notes
+
+- If you're upgrading from an older custom code checkpoint, remember to convert it to the official Transformers format for better stability and performance using the conversion script located in the [Falcon model directory](https://github.com/huggingface/transformers/tree/main/src/transformers/models/falcon).
+
+    ```bash
+    python convert_custom_code_checkpoint.py --checkpoint_dir my_model
+    ```
 
 ## FalconConfig
 
@@ -85,6 +150,4 @@ uploaded to the Hub, we recommend making a backup before attempting the conversi
 ## FalconForQuestionAnswering
 
 [[autodoc]] FalconForQuestionAnswering
-    - forward
-
-
+    - forward

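The new intro credits multigroup (multi-query/grouped) attention for Falcon's reduced inference memory use. As a rough back-of-the-envelope illustration that is not part of the diff above, the sketch below reads the relevant `FalconConfig` fields and estimates the per-token KV cache footprint; the attribute names (`multi_query`, `num_kv_heads`, `new_decoder_architecture`) and the head-count logic follow recent Transformers releases and may differ in older versions.

```py
from transformers import AutoConfig

config = AutoConfig.from_pretrained("tiiuae/falcon-7b")

# Falcon variants express the reduced key/value head count in different ways
# (assumed attribute names, per recent FalconConfig versions).
if config.new_decoder_architecture:      # Falcon-40B/180B style grouped ("multigroup") attention
    kv_heads = config.num_kv_heads
elif config.multi_query:                 # Falcon-7B style multi-query attention: one shared KV head
    kv_heads = 1
else:                                    # plain multi-head attention
    kv_heads = config.num_attention_heads

head_dim = config.hidden_size // config.num_attention_heads
bytes_per_scalar = 2  # bfloat16

# Keys and values (2x), per layer, per KV head, per token.
kv_cache_per_token = 2 * config.num_hidden_layers * kv_heads * head_dim * bytes_per_scalar
mha_cache_per_token = 2 * config.num_hidden_layers * config.num_attention_heads * head_dim * bytes_per_scalar

print(f"{kv_heads} KV head(s) vs {config.num_attention_heads} query heads")
print(f"~{kv_cache_per_token / 1024:.1f} KiB cached per token, vs ~{mha_cache_per_token / 1024:.1f} KiB with full multi-head attention")
```

For `tiiuae/falcon-7b` this should report a single KV head per layer, which is where the memory savings described in the intro come from.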