Description
Issue encountered
We need better documentation for evaluating locally trained LLMs. I am experimenting with fine-tunes and LoRA adapters, but I am having a hard time working out the required values for the YAML model config.
Solution/Feature
Provide working example YAML configurations for adapter models (see the sketch below).
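For illustration, this is roughly the shape of config I am trying to get right. It is only a sketch, not a verified working example: the `merged_weights`, `adapter_weights`, and `base_model` fields are my best guess from the example configs shipped with the repo, and the model paths are placeholders.

```yaml
# Hypothetical adapter (PEFT/LoRA) model config -- a sketch, not a
# verified working example; field names may differ across versions.
model:
  type: "base"
  base_params:
    # Placeholder path to the locally trained adapter checkpoint
    model_args: "pretrained=./my-lora-checkpoint"
    dtype: "bfloat16"
  merged_weights:
    delta_weights: false
    adapter_weights: true                     # checkpoint holds LoRA adapter weights
    base_model: "mistralai/Mistral-7B-v0.1"   # placeholder base model to attach the adapter to
```

Documenting which of these fields are required, and what happens when `base_model` is omitted, would cover most of my confusion.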
Possible alternatives
Make the library more coherent: `--override-batch-size` no longer works, but it is still exposed under `lighteval accelerate`.
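To make the mismatch concrete, the flag can still be passed on the command line even though it has no effect; `<model_args>` and `<tasks>` below are placeholders rather than literal syntax:

```
lighteval accelerate --override-batch-size 1 <model_args> <tasks>
```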