Issues: foundation-model-stack/fms-acceleration
Closed issues:
#83: Distributed Training Problems for QLoRA models with Transformers pre-release 4.45, by achew010, closed Oct 17, 2024
#75: When HF Memory Metrics Disabled, the Benchmark CSV is Corrupted, by fabianlim, closed Oct 11, 2024
#52: Introduce a Better Dequantization Fix on Triton Function for FOAK Plugin's GPTQ Fused Operations, by achew010, closed Oct 11, 2024
#12: Memory Consumption for GPTQ-LoRA is higher than QLoRA in Distributed Finetuning, by achew010, closed May 23, 2024
#3: Failure in FSDP Benchmark Experiment using QLoRA with Custom Fused Modules, by achew010, closed Jun 24, 2024