From a000453039374c32cca0b1518952155b9d186d1b Mon Sep 17 00:00:00 2001
From: Alvaro Moran
Date: Wed, 22 Jan 2025 16:02:28 +0000
Subject: [PATCH] fix(doc): correct typo in training tutorial

---
 docs/source/training_tutorials/sft_lora_finetune_llm.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/training_tutorials/sft_lora_finetune_llm.mdx b/docs/source/training_tutorials/sft_lora_finetune_llm.mdx
index f1ed6b039..0810f7169 100644
--- a/docs/source/training_tutorials/sft_lora_finetune_llm.mdx
+++ b/docs/source/training_tutorials/sft_lora_finetune_llm.mdx
@@ -124,7 +124,7 @@ If you want to know more about distributed training you can take a look at the [
-Here, we will use tensor parallelism in conjuction with LoRA.
+Here, we will use tensor parallelism in conjunction with LoRA.
 Our training code will look as follows:
 ```python