Commit 64b0c2e

Update README.md
1 parent 2b7c073 commit 64b0c2e

File tree

1 file changed: +3 -3 lines changed

README.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -70,10 +70,10 @@ We can distribute and run this function (e.g. on 2 machines x 2 GPUs) using **`torchrunx`**
 
 ```python
 import logging
-logging.basicConfig(level=logging.INFO)
-
 import torchrunx
 
+logging.basicConfig(level=logging.INFO)
+
 launcher = torchrunx.Launcher(
     hostnames = ["localhost", "second_machine"], # or IP addresses
     workers_per_host = 2 # e.g. number of GPUs per host
@@ -93,7 +93,7 @@ trained_model: nn.Module = results.rank(0)
 # or: results.index(hostname="localhost", local_rank=0)
 
 # and continue your script
-torch.save(trained_model.state_dict(), "output/model.pth")
+torch.save(trained_model.state_dict(), "outputs/model.pth")
 ```
 
 **See more examples where we fine-tune LLMs using:**
````
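
Net effect: in the README's quick-start example, `logging.basicConfig(...)` now runs after `import torchrunx` rather than before it, and the saved checkpoint path becomes `outputs/model.pth`. As a readability aid, here is a sketch of how the top of the example reads after this commit; it includes only the lines visible in the first hunk, and the closing parenthesis of the `Launcher(...)` call falls outside the hunk and is assumed here for completeness.

```python
import logging

import torchrunx

# Configure logging after importing torchrunx (the ordering this commit introduces).
logging.basicConfig(level=logging.INFO)

launcher = torchrunx.Launcher(
    hostnames = ["localhost", "second_machine"], # or IP addresses
    workers_per_host = 2 # e.g. number of GPUs per host
)  # closing parenthesis assumed; it lies outside the diff hunk
```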
