Hugging Face has a library called PEFT that provides various methods for parameter-efficient fine-tuning of foundation models. One of those methods is LoRA (Low-Rank Adaptation). During fine-tuning, small low-rank weight matrices are trained instead of the full weight matrices in the base model, while the rest of the model stays frozen. The low-rank updates are added to the frozen base weights and can be merged into them after training. This greatly reduces the number of trainable parameters, and with it the memory and compute needed during fine-tuning.
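A minimal sketch of the idea (shapes and names are arbitrary, chosen only to illustrate): just the two small factors are trained, and their product is added to the frozen weight.

```python
# Illustrative sketch of the LoRA update; not library code.
import numpy as np

d, k, r = 1024, 1024, 8           # full dimensions vs. low rank r << min(d, k)
W = np.random.randn(d, k)         # frozen base weight matrix
A = np.random.randn(r, k) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))              # trainable low-rank factor (initialized to zero)

W_effective = W + B @ A           # weight used in the forward pass
# Only A and B are updated: d*r + r*k = 16,384 params vs. d*k = 1,048,576
```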
The goal of this issue is to successfully fine-tune NLLB using the LoRA support in PEFT. This would greatly reduce the resources needed to fine-tune NLLB and may let us try fine-tuning larger models. A rough sketch of the setup is below.
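A sketch of what wrapping NLLB with PEFT might look like; the checkpoint name and the LoRA hyperparameters (`r`, `lora_alpha`, `target_modules`) here are illustrative assumptions, not settled choices for this issue:

```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model, TaskType

# Assumed checkpoint for experimentation; any NLLB variant could be substituted.
base_model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor for the LoRA update
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # reports trainable vs. total parameter counts
```

The wrapped model can then be passed to a normal training loop or `Trainer`; the trainable-parameter report should confirm that only a small fraction of the model is being updated.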