Commit e1f379b

Fixing the example in generation strategy doc (#37598)

Update generation_strategies.md: the prompt text shown in the example does not match the generated output. Since the generated output always includes the prompt, the correct prompt should be "Hugging Face is an open-source company".
1 parent 4f58fc9 commit e1f379b

File tree

1 file changed

+1
-1
lines changed


docs/source/en/generation_strategies.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -31,7 +31,7 @@ import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer

 tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
-inputs = tokenizer("I look forward to", return_tensors="pt").to("cuda")
+inputs = tokenizer("Hugging Face is an open-source company", return_tensors="pt").to("cuda")

 model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16).to("cuda")
 # explicitly set to default length because Llama2 generation length is 4096
```
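The rationale behind the fix is that for decoder-only models, `generate()` returns the prompt token ids followed by the newly sampled ids, so the decoded output always begins with the prompt text. A minimal sketch of that behavior, using a toy vocabulary instead of a real model and tokenizer (all names here are hypothetical, not the transformers API):

```python
# Toy illustration: decoder-only generation returns prompt ids + new ids,
# so decoding the full output reproduces the prompt at the start.

# Hypothetical tiny vocabulary for the sketch.
vocab = {0: "Hugging", 1: "Face", 2: "is", 3: "an",
         4: "open-source", 5: "company", 6: "that", 7: "builds", 8: "tools"}

def decode(ids):
    # Map token ids back to text, like tokenizer.decode() would.
    return " ".join(vocab[i] for i in ids)

def fake_generate(input_ids, new_ids):
    # Mimics the output shape of model.generate(): prompt ids are
    # kept as the prefix of the returned sequence.
    return input_ids + new_ids

prompt_ids = [0, 1, 2, 3, 4, 5]          # "Hugging Face is an open-source company"
output_ids = fake_generate(prompt_ids, [6, 7, 8])
text = decode(output_ids)
print(text)  # -> Hugging Face is an open-source company that builds tools
```

This is why the documentation's example prompt must literally match the prefix of the shown generated output; with a mismatched prompt like "I look forward to", the displayed output could not have been produced from it.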
