Fix task guide formatting (huggingface#21409)
fix formatting
stevhliu authored and miyu386 committed Feb 9, 2023
1 parent af68270 commit 623c90b
Showing 9 changed files with 9 additions and 0 deletions.
1 change: 1 addition & 0 deletions docs/source/en/tasks/audio_classification.mdx
@@ -193,6 +193,7 @@ Your `compute_metrics` function is ready to go now, and you'll return to it when
 If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
 
 </Tip>
+
 You're ready to start training your model now! Load Wav2Vec2 with [`AutoModelForAudioClassification`] along with the number of expected labels, and the label mappings:
 
 ```py
1 change: 1 addition & 0 deletions docs/source/en/tasks/language_modeling.mdx
@@ -209,6 +209,7 @@ Use the end-of-sequence token as the padding token and set `mlm=False`. This wil
 If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the [basic tutorial](../training#train-with-pytorch-trainer)!
 
 </Tip>
+
 You're ready to start training your model now! Load DistilGPT2 with [`AutoModelForCausalLM`]:
 
 ```py
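The hunk above quotes a guide step that reuses the end-of-sequence token as the padding token before building a collator with `mlm=False`. As a toy illustration of that padding idea only (hypothetical token ids; in the guides the actual batching is done by `DataCollatorForLanguageModeling` from transformers), a minimal sketch:

```python
def pad_batch(sequences, eos_token_id):
    # Pad every sequence to the longest length in the batch, reusing the
    # end-of-sequence id as the pad id, as the guide's collator setup does.
    max_len = max(len(seq) for seq in sequences)
    return [seq + [eos_token_id] * (max_len - len(seq)) for seq in sequences]

padded = pad_batch([[11, 12, 13], [21]], eos_token_id=0)
```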
1 change: 1 addition & 0 deletions docs/source/en/tasks/masked_language_modeling.mdx
@@ -203,6 +203,7 @@ Use the end-of-sequence token as the padding token and specify `mlm_probability`
 If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
 
 </Tip>
+
 You're ready to start training your model now! Load DistilRoBERTa with [`AutoModelForMaskedLM`]:
 
 ```py
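This hunk's context line mentions specifying `mlm_probability` to randomly mask tokens. A toy sketch of that core idea (hypothetical token ids; the real `DataCollatorForLanguageModeling` also sometimes keeps or randomly substitutes the selected tokens, which this deliberately omits):

```python
import random

def mask_tokens(token_ids, mask_token_id, mlm_probability=0.15, seed=0):
    # Replace each token with the mask token with probability
    # `mlm_probability`; unmasked positions get label -100 so the
    # loss ignores them, masked positions keep the original id as label.
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in token_ids:
        if rng.random() < mlm_probability:
            masked.append(mask_token_id)
            labels.append(tok)
        else:
            masked.append(tok)
            labels.append(-100)
    return masked, labels

masked, labels = mask_tokens([5, 6, 7, 8], mask_token_id=103, mlm_probability=0.5, seed=1)
```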
1 change: 1 addition & 0 deletions docs/source/en/tasks/multiple_choice.mdx
@@ -241,6 +241,7 @@ Your `compute_metrics` function is ready to go now, and you'll return to it when
 If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
 
 </Tip>
+
 You're ready to start training your model now! Load BERT with [`AutoModelForMultipleChoice`]:
 
 ```py
1 change: 1 addition & 0 deletions docs/source/en/tasks/question_answering.mdx
@@ -196,6 +196,7 @@ Now create a batch of examples using [`DefaultDataCollator`]. Unlike other data
 If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
 
 </Tip>
+
 You're ready to start training your model now! Load DistilBERT with [`AutoModelForQuestionAnswering`]:
 
 ```py
1 change: 1 addition & 0 deletions docs/source/en/tasks/sequence_classification.mdx
@@ -155,6 +155,7 @@ Before you start training your model, create a map of the expected ids to their
 If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
 
 </Tip>
+
 You're ready to start training your model now! Load DistilBERT with [`AutoModelForSequenceClassification`] along with the number of expected labels, and the label mappings:
 
 ```py
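The context lines in this hunk describe building id-to-label maps and loading DistilBERT with the expected labels. A minimal sketch of that setup, assuming a hypothetical two-class sentiment task and the `distilbert-base-uncased` checkpoint (the checkpoint name and labels are illustrative assumptions, not quoted from the diff; the model load is kept behind a function since calling it downloads weights):

```python
def build_label_maps(labels):
    # Build the id2label/label2id mappings the guide passes to the model.
    id2label = dict(enumerate(labels))
    label2id = {label: i for i, label in id2label.items()}
    return id2label, label2id

# Hypothetical two-class sentiment labels, purely for illustration.
id2label, label2id = build_label_maps(["NEGATIVE", "POSITIVE"])

def load_model():
    # Downloads weights when called; the checkpoint name is an assumption
    # chosen to match the guide's DistilBERT example.
    from transformers import AutoModelForSequenceClassification
    return AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased",
        num_labels=len(id2label),
        id2label=id2label,
        label2id=label2id,
    )
```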
1 change: 1 addition & 0 deletions docs/source/en/tasks/summarization.mdx
@@ -176,6 +176,7 @@ Your `compute_metrics` function is ready to go now, and you'll return to it when
 If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
 
 </Tip>
+
 You're ready to start training your model now! Load T5 with [`AutoModelForSeq2SeqLM`]:
 
 ```py
1 change: 1 addition & 0 deletions docs/source/en/tasks/token_classification.mdx
@@ -261,6 +261,7 @@ Before you start training your model, create a map of the expected ids to their
 If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
 
 </Tip>
+
 You're ready to start training your model now! Load DistilBERT with [`AutoModelForTokenClassification`] along with the number of expected labels, and the label mappings:
 
 ```py
1 change: 1 addition & 0 deletions docs/source/en/tasks/translation.mdx
@@ -185,6 +185,7 @@ Your `compute_metrics` function is ready to go now, and you'll return to it when
 If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
 
 </Tip>
+
 You're ready to start training your model now! Load T5 with [`AutoModelForSeq2SeqLM`]:
 
 ```py
