
Conversation

BishmoyPaul

Changes Made

This PR moves the max_seq_length, packing, and dataset_kwargs arguments from SFTTrainer to SFTConfig in finetune_sft_peft.ipynb, matching the updated TRL implementation (reference: https://github.com/huggingface/trl/blob/main/trl/trainer/sft_config.py).

Related: unslothai/unsloth#1264 (comment)

Notebooks Modified:

- finetune_sft_peft.ipynb
Note: This PR does NOT address the tokenizer to processing_class change, which is already covered in PR #214. The two changes are independent but complementary; to run the notebook successfully, both should be applied.
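A minimal sketch of the resulting pattern, for reviewers. The variable names (model, tokenizer, train_dataset) and the argument values are illustrative placeholders, assumed to be defined earlier in the notebook, not the notebook's exact code:

```python
# Sketch only: model, tokenizer, and train_dataset are assumed to be
# defined earlier in the notebook.
from trl import SFTConfig, SFTTrainer

# Before this PR, max_seq_length, packing, and dataset_kwargs were passed
# directly to SFTTrainer. In current TRL they belong on SFTConfig, which
# subclasses transformers.TrainingArguments.
training_args = SFTConfig(
    output_dir="outputs",
    max_seq_length=2048,                             # illustrative value
    packing=True,                                    # illustrative value
    dataset_kwargs={"skip_prepare_dataset": False},  # illustrative value
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # renamed to processing_class by PR #214
)
trainer.train()
```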

BishmoyPaul changed the title from "Fix : Move max_seq_length, packing, and dataset_kwargs to SFTConfig from SFTTrainer" to "Fix : Move max_seq_length, packing, and dataset_kwargs to SFTConfig from SFTTrainer in finetune_sft_peft notebook" on Apr 18, 2025.