
Commit 3786335

Update README.md
1 parent b1cde79 commit 3786335

File tree: 1 file changed, +2 -2 lines changed

1 file changed

+2
-2
lines changed

README.md

Lines changed: 2 additions & 2 deletions
@@ -74,7 +74,7 @@ inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to("c
 outputs = model.generate(inputs)
 print(tokenizer.decode(outputs[0]))
 ```
-```python
+```bash
 >>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
 Memory footprint: 32251.33 MB
 ```
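For context on the hunk above: the block it re-fences from `python` to `bash` shows REPL output from transformers' `PreTrainedModel.get_memory_footprint()`, which reports the bytes occupied by the model's tensors. A hedged, dependency-free sketch of that arithmetic (the tensor shapes and dtype below are illustrative assumptions, not StarCoder2's actual parameters):

```python
# Sketch of what get_memory_footprint() computes: total bytes across all
# parameter tensors. Shapes and dtype here are hypothetical, not StarCoder2's.
param_shapes = {"wte.weight": (1024, 1024), "lm_head.bias": (1024,)}
bytes_per_elem = 4  # float32

def numel(shape):
    # Number of elements in a tensor of the given shape.
    n = 1
    for d in shape:
        n *= d
    return n

footprint = sum(numel(s) * bytes_per_elem for s in param_shapes.values())
# Same format string as the README's snippet: bytes scaled to megabytes.
print(f"Memory footprint: {footprint / 1e6:.2f} MB")
```

For the toy shapes above this prints `Memory footprint: 4.20 MB`; the 32251.33 MB figure in the diff is the real model's footprint reported by the same formula.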
@@ -173,4 +173,4 @@ accelerate launch finetune.py \
 If you want to fine-tune on other text datasets, you need to change `dataset_text_field` argument to the name of the column containing the code/text you want to train on.
 
 # Evaluation
-To evaluate StarCoder2 and its derivatives, you can use the [BigCode-Evaluation-Harness](https://github.com/bigcode-project/bigcode-evaluation-harness) for evaluating Code LLMs.
+To evaluate StarCoder2 and its derivatives, you can use the [BigCode-Evaluation-Harness](https://github.com/bigcode-project/bigcode-evaluation-harness) for evaluating Code LLMs.
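The context line about `dataset_text_field` in the hunk above refers to how TRL-style fine-tuning scripts select which dataset column supplies the training text. A minimal sketch of that selection step, with a made-up toy dataset (the column names and rows are illustrative, not from any real dataset):

```python
# Hedged sketch: dataset_text_field simply names the column the trainer
# reads text from. The toy rows below are hypothetical.
dataset = [
    {"content": "def add(a, b): return a + b", "license": "mit"},
    {"content": "print('hello')", "license": "apache-2.0"},
]

dataset_text_field = "content"  # point this at a different column to train on it
texts = [row[dataset_text_field] for row in dataset]
print(texts[0])
```

In the real script the same column name is passed through to the trainer rather than extracted by hand; this sketch only shows what the argument controls.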
