
Commit

Merge pull request ChiaXinLiang#28 from weian312/dev
remove duplicate
ChiaXinLiang authored Oct 6, 2024
2 parents ae9e8a2 + 0e6f774 commit b6488be
Showing 1 changed file with 0 additions and 3 deletions.
3 changes: 0 additions & 3 deletions MLLM_latex/chapter4/chapter4.tex
@@ -86,9 +86,6 @@ \subsection{Multitask Fine-Tuning}

In some cases, models are fine-tuned on multiple tasks simultaneously, a technique known as \textbf{multitask learning}. This helps the model generalize better across various related tasks. For example, a model might be fine-tuned on both image captioning and visual question answering datasets simultaneously, allowing it to perform well in both scenarios.

\subsection{Cross-Modal Tasks}

Fine-tuning is essential for tasks that require the model to reason across modalities, such as cross-modal retrieval or referring expression comprehension (where the model must identify specific objects in an image based on a text description). The goal is to align the visual and textual representations effectively during this phase.

\subsection{Cross-Modal Tasks}

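As a rough illustration of the multitask fine-tuning paragraph shown in the diff above, here is a minimal PyTorch-style sketch that pushes one captioning-style batch and one VQA-style batch through a single shared model before taking an optimizer step. ToyMLLM and the random batches are hypothetical stand-ins for a pretrained vision-language model and its task datasets, not code from the chapter.

import torch
import torch.nn as nn

# Hypothetical stand-in for a multimodal backbone; a real setup would load a
# pretrained vision-language model and task-specific dataloaders instead.
class ToyMLLM(nn.Module):
    def __init__(self, dim=32, vocab=100):
        super().__init__()
        self.vision = nn.Linear(64, dim)      # toy image encoder
        self.text = nn.Embedding(vocab, dim)  # toy text encoder
        self.head = nn.Linear(dim, vocab)

    def forward(self, images, tokens):
        fused = self.vision(images) + self.text(tokens).mean(dim=1)
        return self.head(fused)

model = ToyMLLM()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Toy batches standing in for an image-captioning and a VQA dataset.
caption_batch = (torch.randn(8, 64), torch.randint(0, 100, (8, 5)), torch.randint(0, 100, (8,)))
vqa_batch = (torch.randn(8, 64), torch.randint(0, 100, (8, 5)), torch.randint(0, 100, (8,)))

optimizer.zero_grad()
for images, tokens, targets in (caption_batch, vqa_batch):
    # Each task contributes its own loss; gradients accumulate in the shared weights.
    loss = loss_fn(model(images, tokens), targets)
    loss.backward()
optimizer.step()  # one update driven jointly by both tasks

Equal weighting of the two task losses is assumed here; in practice the losses are often reweighted, or batches are sampled in proportion to dataset size.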

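For the representation alignment described in the Cross-Modal Tasks paragraph above, one common choice (an assumption here, not stated in the excerpt) is a symmetric contrastive objective over matched image-text pairs. The sketch below uses random, gradient-tracking tensors as hypothetical stand-ins for the pooled outputs of the model's vision and text encoders.

import torch
import torch.nn.functional as F

# Hypothetical pooled embeddings for a batch of N matching image/text pairs;
# in a real model these come from the vision and text encoders being fine-tuned.
N, dim = 8, 32
image_emb = F.normalize(torch.randn(N, dim, requires_grad=True), dim=-1)
text_emb = F.normalize(torch.randn(N, dim, requires_grad=True), dim=-1)

# Symmetric InfoNCE-style loss: matching pairs sit on the diagonal of the
# similarity matrix, and every other pair in the batch acts as a negative.
temperature = 0.07
logits = image_emb @ text_emb.t() / temperature
targets = torch.arange(N)
loss = (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
loss.backward()  # during fine-tuning this gradient flows into both encoders

The temperature and the equal weighting of the two directions are illustrative defaults rather than values taken from the chapter.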