
Conversation

@behroozazarkhalili
Collaborator

Summary

This PR adds a PEFT subsection to the "Reducing Memory Usage" documentation, addressing issue #4383.

Changes

  • New PEFT Section: Added a comprehensive section explaining how PEFT methods such as LoRA reduce memory usage by training only adapter parameters instead of all model weights
  • Code Example: Included a minimal LoRA configuration example with SFTTrainer (a sketch of this pattern appears after this list)
  • Integration Links: Added references to the comprehensive PEFT Integration guide for detailed information
  • Quantization Note: Mentioned compatibility with quantization techniques for additional memory savings
  • Logical Positioning: Placed between Packing and Liger sections for natural document flow
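
The PR's actual code example is not quoted in this thread. As a rough sketch of the pattern it describes (a minimal LoRA configuration passed to SFTTrainer), assuming a recent TRL release; the model and dataset names below are placeholders, not taken from the PR:

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer

# Placeholder dataset; the PR's example may use a different one.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",   # placeholder model name
    train_dataset=dataset,
    peft_config=LoraConfig(),    # only the LoRA adapter weights are trained
)
trainer.train()
```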

Implementation Details

  • Follows existing documentation patterns with brief explanation + code example + link to detailed docs
  • Code example verified against official TRL PEFT examples
  • Positioned logically in the memory reduction techniques hierarchy
  • Consistent with document's writing style and structure

Testing

  • Verified link to peft_integration.md exists and is accessible
  • Code pattern matches verified examples from docs/source/peft_integration.md and docs/source/speeding_up_training.md
  • Reviewed placement for logical flow within existing content

Resolves #4383

Resolves huggingface#4382

- Add Flash Attention 2 section with minimal example and link to reducing_memory_usage
- Add PEFT integration section with LoRA example and link to peft_integration
- Add Liger Kernel section with example and link to liger_kernel_integration
- Add Gradient Checkpointing section with example and link to Transformers guide
- Add Mixed Precision Training section with bf16/fp16 examples (a sketch of these settings follows this list)
- Update introduction to reflect comprehensive coverage
- All examples verified against TRL source code and official examples
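
The sections listed above are not reproduced in this thread. A minimal sketch of what the mixed precision and gradient checkpointing examples likely look like, assuming they set the corresponding SFTConfig flags (inherited from transformers' TrainingArguments; use_liger_kernel additionally requires a recent transformers release):

```python
from trl import SFTConfig

training_args = SFTConfig(
    output_dir="Qwen2.5-0.5B-SFT",  # placeholder output directory
    bf16=True,                      # mixed precision; use fp16=True instead on GPUs without bfloat16 support
    gradient_checkpointing=True,    # recompute activations during backprop to save memory
    use_liger_kernel=True,          # enable the Liger Kernel integration
)
```
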
Resolves huggingface#4383

- Add new PEFT section explaining parameter-efficient fine-tuning for memory reduction
- Include minimal LoRA example with SFTTrainer
- Link to comprehensive PEFT integration guide for detailed information
- Note compatibility with quantization for additional memory savings (see the sketch after this list)
- Position section between Packing and Liger for logical flow
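
The quantization note presumably refers to combining LoRA adapters with a quantized base model, as in the QLoRA recipe. A minimal sketch assuming bitsandbytes 4-bit quantization; the model and dataset names are placeholders:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from trl import SFTTrainer

# Load the base model in 4-bit; the LoRA adapters are trained in higher precision on top.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B",  # placeholder model name
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
)

trainer = SFTTrainer(
    model=model,
    train_dataset=load_dataset("trl-lib/Capybara", split="train"),  # placeholder dataset
    peft_config=LoraConfig(),
)
trainer.train()
```
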
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@behroozazarkhalili
Collaborator Author

Closing this PR as the branch has been pushed directly to the upstream repository. Please see #4430 for the same changes from the upstream branch.

@behroozazarkhalili behroozazarkhalili deleted the docs/add-peft-to-reducing-memory branch November 2, 2025 21:20
