This repository contains experiments on fine-tuning LLMs (Llama, Llama 3.1, and Gemma). It includes notebooks for model tuning, data preprocessing, and hyperparameter optimization to improve model performance.
python adapter transformer llama lora sft gemma fine-tuning colab-notebook dpo huggingface llm qlora peft-fine-tuning-llm unsloth
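For orientation, below is a minimal sketch of the kind of LoRA fine-tuning these notebooks cover, using Hugging Face `transformers` and `peft`. The base model name, dataset, and hyperparameters are illustrative assumptions, not values taken from the notebooks in this repository.

```python
# Minimal LoRA fine-tuning sketch (illustrative; model, dataset, and hyperparameters are assumptions).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "meta-llama/Llama-3.1-8B"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Wrap the base model with LoRA adapters so only a small set of extra weights is trained.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Toy dataset and tokenization; the actual notebooks use their own preprocessing.
dataset = load_dataset("yelp_review_full", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-out",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out/adapter")  # saves only the LoRA adapter weights
```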
Updated Apr 10, 2025 · Jupyter Notebook