
🌐 Community Tutorials #2411

Merged
merged 11 commits into from
Nov 29, 2024
2 changes: 2 additions & 0 deletions docs/source/_toctree.yml
Original file line number Diff line number Diff line change
Expand Up @@ -67,6 +67,8 @@
title: Text Environments
title: API
- sections:
- local: community_notebooks
title: Community Tutorials
- local: example_overview
title: Example Overview
- local: sentiment_tuning
Expand Down
16 changes: 16 additions & 0 deletions docs/source/community_notebooks.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,16 @@
# Community Tutorials

Community tutorials are written by active members of the Hugging Face community who want to share their knowledge and expertise with others. They are a great way to learn about the library's features and to get started with its core classes and modalities.

| Task | Class | Modality | Description | Author | Tutorial |
|------|--------|----------|-------------|---------|----------|
| Instruction tuning | SFTTrainer | Text | Fine-tuning Google Gemma LLMs using ChatML format with QLoRA | [Philipp Schmid](https://www.philschmid.de/fine-tune-google-gemma) | [Link](https://www.philschmid.de/fine-tune-google-gemma) |
| Structured Generation | SFTTrainer | Text | Fine-tuning Llama-2-7B to generate Persian product catalogs in JSON using QLoRA and PEFT | [Mohammadreza Esmaeilian](https://huggingface.co/learn/cookbook/en/fine_tuning_llm_to_generate_persian_product_catalogs_in_json_format) | [Link](https://huggingface.co/learn/cookbook/en/fine_tuning_llm_to_generate_persian_product_catalogs_in_json_format) |
| Preference Optimization | DPOTrainer | Text | Align Mistral-7b using Direct Preference Optimization for human preference alignment | [Maxime Labonne](https://mlabonne.github.io/blog/posts/Fine_tune_Mistral_7b_with_DPO.html) | [Link](https://mlabonne.github.io/blog/posts/Fine_tune_Mistral_7b_with_DPO.html) |
| Preference Optimization | ORPOTrainer | Text | Fine-tuning Llama 3 with ORPO combining instruction tuning and preference alignment | [Maxime Labonne](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) | [Link](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) |
| Visual QA | SFTTrainer | Vision-Text | Fine-tuning Qwen2-VL-7B for visual question answering on ChartQA dataset | [Sergio Paniego](https://huggingface.co/learn/cookbook/fine_tuning_vlm_trl) | [Link](https://huggingface.co/learn/cookbook/fine_tuning_vlm_trl) |
| SEO Description | SFTTrainer | Vision-Text | Fine-tuning Qwen2-VL-7B for generating SEO-friendly descriptions from images | [Philipp Schmid](https://www.philschmid.de/fine-tune-multimodal-llms-with-trl) | [Link](https://www.philschmid.de/fine-tune-multimodal-llms-with-trl) |
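Several of the instruction-tuning tutorials above format their training data in ChatML before passing it to `SFTTrainer`. As a minimal sketch of what that formatting step looks like (the `format_chatml` helper below is hypothetical, not part of TRL):

```python
def format_chatml(messages):
    """Render a list of {"role": ..., "content": ...} dicts into ChatML markup.

    Each turn is wrapped in <|im_start|>/<|im_end|> tokens, as used in the
    ChatML-based fine-tuning tutorials linked above.
    """
    parts = []
    for message in messages:
        parts.append(f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>")
    return "\n".join(parts)


sample = [
    {"role": "user", "content": "What is TRL?"},
    {"role": "assistant", "content": "A library for post-training transformer language models."},
]
print(format_chatml(sample))
```

In practice, most of the linked tutorials apply a chat template via the model's tokenizer rather than building the string by hand; this sketch only illustrates the structure of the format.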

## Contributing

If you have a tutorial you would like added to this list, please open a PR. We will review it and merge it if it is relevant to the community.