
Commit 0cd085b

Update training_methods.md - Change compute requirement suggestions (#1245)
1 parent b2b9812 commit 0cd085b

File tree: 1 file changed (+3 −3)

docs/user_guides/train/training_methods.md (+3 −3)

```diff
@@ -9,10 +9,10 @@ Here's a quick comparison:
 
 | Method | Use Case | Data Required | Compute | Key Features |
 |--------|----------|---------------|---------|--------------|
-| [Supervised Fine-Tuning (SFT)](#supervised-fine-tuning-sft) | Task adaptation | Input-output pairs | Moderate | Fine-tunes pre-trained models on specific tasks by providing labeled conversations. |
-| [Vision-Language SFT](#vision-language-sft) | Multimodal tasks | Image-text pairs | High | Extends SFT to handle both images and text, enabling image understanding problems. |
+| [Supervised Fine-Tuning (SFT)](#supervised-fine-tuning-sft) | Task adaptation | Input-output pairs | Low | Fine-tunes pre-trained models on specific tasks by providing labeled conversations. |
+| [Vision-Language SFT](#vision-language-sft) | Multimodal tasks | Image-text pairs | Moderate | Extends SFT to handle both images and text, enabling image understanding problems. |
 | [Pretraining](#pretraining) | Domain adaptation | Raw text | Very High | Trains a language model from scratch or adapts it to a new domain using large amounts of unlabeled text. |
-| [Direct Preference Optimization (DPO)](#direct-preference-optimization-dpo) | Preference learning | Preference pairs | Moderate | Trains a model to align with human preferences by providing pairs of preferred and rejected outputs. |
+| [Direct Preference Optimization (DPO)](#direct-preference-optimization-dpo) | Preference learning | Preference pairs | Low | Trains a model to align with human preferences by providing pairs of preferred and rejected outputs. |
 
 (supervised-fine-tuning-sft)=
 ## Supervised Fine-Tuning (SFT)
```
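For context on the DPO row above: "preference pairs" means each training example carries one preferred and one rejected response, and the loss rewards the policy for ranking the preferred one higher than a frozen reference model does. A minimal sketch of the standard DPO loss for a single pair, using plain floats rather than any particular training framework (the function name and argument names are illustrative, not from the docs being edited):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair.

    Each argument is the summed log-probability of a full response
    under the trained policy (logp_*) or the frozen reference
    model (ref_logp_*). beta scales how strongly the policy is
    pushed away from the reference.
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen response over the rejected one, relative to the reference.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)): near zero when the policy strongly
    # prefers the chosen response, large when it prefers the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy and reference agree exactly, the margin is zero and the loss is `log 2`; improving the policy's relative preference for the chosen response drives the loss toward zero. Compute stays low because only log-probabilities of the two responses are needed per example, not rollouts or a reward model.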

0 commit comments