[ICCVW 25] LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning
Updated Aug 8, 2025 · Python
Fine-tuning of the Gemma 2 model for a Google competition using a dataset of Chinese poetry. The goal is to adapt the model to generate Chinese poetry in a classical style by training it on a subset of poems. The fine-tuning process uses LoRA (Low-Rank Adaptation) for efficient model adaptation, as sketched below.
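A minimal sketch of what LoRA fine-tuning of Gemma 2 typically looks like with the Hugging Face `transformers` and `peft` libraries. The checkpoint name, target modules, and hyperparameters here are illustrative assumptions, not values taken from the repository.

```python
# Minimal LoRA setup sketch (assumed checkpoint and hyperparameters).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "google/gemma-2-9b-it"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# LoRA trains small low-rank adapter matrices instead of the full weights.
lora_cfg = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only a small fraction of weights are trainable
```

The wrapped model can then be fine-tuned on the poetry subset with a standard causal-LM training loop or `Trainer`.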
Variance-stable routing for 2-bit quantized MoE models. Features dynamic phase correction (Armen Guard), a syntactic stabilization layer, and recursive residual quantization for efficient inference.
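The repository's specific mechanisms (Armen Guard phase correction, the syntactic stabilization layer) are not documented here. As a general illustration only, the sketch below shows plain two-stage residual quantization, where the error left by a first low-bit pass is itself quantized; this is an assumption about what "recursive residual quantization" refers to, not the repository's implementation.

```python
import numpy as np

def quantize(x, bits=2):
    """Uniform quantization of x to 2**bits levels; returns codes, scale, zero point."""
    levels = 2 ** bits
    scale = (x.max() - x.min()) / (levels - 1) if x.max() > x.min() else 1.0
    codes = np.round((x - x.min()) / scale).astype(np.int8)
    return codes, scale, x.min()

def dequantize(codes, scale, zero):
    return codes * scale + zero

def residual_quantize(x, bits=2, stages=2):
    """Recursively quantize the residual left by each previous stage."""
    out, residual = [], x.astype(np.float64)
    for _ in range(stages):
        codes, scale, zero = quantize(residual, bits)
        out.append((codes, scale, zero))
        residual = residual - dequantize(codes, scale, zero)
    return out

def residual_dequantize(stages_out):
    return sum(dequantize(c, s, z) for c, s, z in stages_out)

x = np.random.randn(8)
approx = residual_dequantize(residual_quantize(x, bits=2, stages=2))
print(np.abs(x - approx).max())  # reconstruction error shrinks as stages are added
```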
⚗️ Gemma 2 9B instruct model repository