
🌐 Community Tutorials #2411

Merged — 11 commits, Nov 29, 2024

Commit: respond to feedback - fix links and split table
burtenshaw committed Nov 29, 2024
commit 9a7db56816ecf01c3db627da69ed824d554731f9
23 changes: 15 additions & 8 deletions docs/source/community_notebooks.md
@@ -2,14 +2,21 @@

Community tutorials are made by active members of the Hugging Face community who want to share their knowledge and expertise with others. They are a great way to learn about the library and its features, and to get started with its core classes and modalities.

| Task | Class | Modality | Description | Author | Tutorial |
|------|--------|----------|-------------|---------|----------|
| Instruction tuning | SFTTrainer | Text | Fine-tuning Google Gemma LLMs using ChatML format with QLoRA | [Philipp Schmid](https://www.philschmid.de/fine-tune-google-gemma) | [Link](https://www.philschmid.de/fine-tune-google-gemma) |
| Structured Generation | SFTTrainer | Text | Fine-tuning Llama-2-7B to generate Persian product catalogs in JSON using QLoRA and PEFT | [Mohammadreza Esmaeilian](https://huggingface.co/learn/cookbook/en/fine_tuning_llm_to_generate_persian_product_catalogs_in_json_format) | [Link](https://huggingface.co/learn/cookbook/en/fine_tuning_llm_to_generate_persian_product_catalogs_in_json_format) |
| Preference Optimization | DPOTrainer | Text | Align Mistral-7b using Direct Preference Optimization for human preference alignment | [Maxime Labonne](https://mlabonne.github.io/blog/posts/Fine_tune_Mistral_7b_with_DPO.html) | [Link](https://mlabonne.github.io/blog/posts/Fine_tune_Mistral_7b_with_DPO.html) |
| Preference Optimization | ORPOTrainer | Text | Fine-tuning Llama 3 with ORPO combining instruction tuning and preference alignment | [Maxime Labonne](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) | [Link](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) |
| Visual QA | SFTTrainer | Vision-Text | Fine-tuning Qwen2-VL-7B for visual question answering on ChartQA dataset | [Sergio Paniego](https://huggingface.co/learn/cookbook/fine_tuning_vlm_trl) | [Link](https://huggingface.co/learn/cookbook/fine_tuning_vlm_trl) |
| SEO Description | SFTTrainer | Vision-Text | Fine-tuning Qwen2-VL-7B for generating SEO-friendly descriptions from images | [Philipp Schmid](https://www.philschmid.de/fine-tune-multimodal-llms-with-trl) | [Link](https://www.philschmid.de/fine-tune-multimodal-llms-with-trl) |
# Language Models

| Task | Class | Description | Author | Tutorial |
|------|--------|-------------|---------|----------|
| Instruction tuning | SFTTrainer | Fine-tuning Google Gemma LLMs using ChatML format with QLoRA | [Philipp Schmid](https://github.com/philschmid) | [Link](https://www.philschmid.de/fine-tune-google-gemma) |
| Structured Generation | SFTTrainer | Fine-tuning Llama-2-7B to generate Persian product catalogs in JSON using QLoRA and PEFT | [Mohammadreza Esmaeilian](https://github.com/Mrzesma) | [Link](https://huggingface.co/learn/cookbook/en/fine_tuning_llm_to_generate_persian_product_catalogs_in_json_format) |
| Preference Optimization | DPOTrainer | Align Mistral-7b using Direct Preference Optimization for human preference alignment | [Maxime Labonne](https://github.com/mlabonne) | [Link](https://mlabonne.github.io/blog/posts/Fine_tune_Mistral_7b_with_DPO.html) |
| Preference Optimization | ORPOTrainer | Fine-tuning Llama 3 with ORPO combining instruction tuning and preference alignment | [Maxime Labonne](https://github.com/mlabonne) | [Link](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) |
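
The DPOTrainer rows above optimize human preferences directly from chosen/rejected completion pairs. As a minimal sketch (not TRL's actual implementation), the per-example DPO loss can be computed from policy and reference log-probabilities:

```python
import math


def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss: -log sigmoid(beta * (chosen margin - rejected margin)).

    Each margin is the policy's log-prob advantage over the reference model
    for that completion; beta controls how sharply preferences are enforced.
    """
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))
```

When the policy matches the reference on both completions the logits are zero and the loss is log 2 ≈ 0.693; raising the chosen completion's likelihood relative to the rejected one drives the loss toward 0.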

# Vision Language Models

| Task | Class | Description | Author | Tutorial |
|------|--------|-------------|---------|----------|
| Visual QA | SFTTrainer | Fine-tuning Qwen2-VL-7B for visual question answering on ChartQA dataset | [Sergio Paniego](https://github.com/sergiopaniego) | [Link](https://huggingface.co/learn/cookbook/fine_tuning_vlm_trl) |
| SEO Description | SFTTrainer | Fine-tuning Qwen2-VL-7B for generating SEO-friendly descriptions from images | [Philipp Schmid](https://github.com/philschmid) | [Link](https://www.philschmid.de/fine-tune-multimodal-llms-with-trl) |
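
The vision rows fine-tune on image–question–answer triples. A minimal sketch of packing one ChartQA-style example into the conversational message format such trainers typically expect (the field layout is an assumption here — see the linked tutorials for the exact schema):

```python
def vqa_example(image: str, question: str, answer: str) -> dict:
    """Build one conversational vision-text training example.

    The user turn interleaves an image part and a text part; the assistant
    turn holds the target answer the model is trained to produce.
    """
    return {
        "messages": [
            {"role": "user", "content": [
                {"type": "image", "image": image},
                {"type": "text", "text": question},
            ]},
            {"role": "assistant", "content": [
                {"type": "text", "text": answer},
            ]},
        ]
    }
```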

## Contributing
