[Finetune] Integrate Chat template #178
Conversation
Thanks for the work! To summarize the changes to make, as we discussed offline:
- Remove the added `is_base_model` parameter from the finetuning YAML file.
- Allow users to configure `chat_template` in the YAML file; in most cases people won't set it. Priority order: user-configured `chat_template` > the model's own `chat_template` > our default template (see the sketch after this list).
- Write the default template by following other models' templates (such as Llama 2 chat), i.e., check the roles in the messages, etc.
- Convert the original data format to chat format first, before applying the chat template.
- Add unit tests that check the result of applying the chat template, covering all use cases.
- Support chat format as a finetuning dataset format, following OpenAI's format. We can support this in a separate PR.
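A minimal sketch of that priority order and the chat-format conversion, assuming a Hugging Face `transformers` tokenizer (`apply_chat_template` requires transformers >= 4.34). `DEFAULT_CHAT_TEMPLATE`, `resolve_chat_template`, and `to_chat_format` are hypothetical names for illustration, not this PR's actual code:

```python
from transformers import AutoTokenizer

# Hypothetical default Jinja template, written in the style of other
# models' templates (e.g. Llama 2 chat): iterate over the messages and
# branch on each message's role.
DEFAULT_CHAT_TEMPLATE = (
    "{% for message in messages %}"
    "{% if message['role'] == 'user' %}"
    "### Instruction: {{ message['content'] }}\n"
    "{% elif message['role'] == 'assistant' %}"
    "### Response: {{ message['content'] }}\n"
    "{% endif %}"
    "{% endfor %}"
)

def resolve_chat_template(tokenizer, user_template=None):
    # Priority: user-configured template > model's own template > default.
    if user_template is not None:
        tokenizer.chat_template = user_template
    elif tokenizer.chat_template is None:
        tokenizer.chat_template = DEFAULT_CHAT_TEMPLATE
    return tokenizer

def to_chat_format(example):
    # Convert an instruction-style record to OpenAI-style chat messages
    # before the chat template is applied.
    return [
        {"role": "user", "content": example["instruction"]},
        {"role": "assistant", "content": example["response"]},
    ]

# mpt-7b is the base model mentioned in this PR's commits.
tokenizer = resolve_chat_template(AutoTokenizer.from_pretrained("mosaicml/mpt-7b"))
messages = to_chat_format(
    {"instruction": "What is Ray?", "response": "A distributed compute framework."}
)
text = tokenizer.apply_chat_template(messages, tokenize=False)
```

A unit test can then assert that `text` matches the expected rendering for each of the three resolution cases (user template set, model template present, neither).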
Force-pushed from 3e6ccac to 6a0bf63.
@@ -15,6 +15,7 @@ The following are the parameters supported in the finetuning workflow.
 |lora_config|task_type: CAUSAL_LM<br>r: 8<br>lora_alpha: 32<br>lora_dropout: 0.1|Will be passed to the LoraConfig `__init__()` method, then it'll be used as config to build Peft model object.|
 |deltatuner_config|"algo": "lora"<br>"denas": True<br>"best_model_structure": "/path/to/best_structure_of_deltatuner_model"|Will be passed to the DeltaTunerArguments `__init__()` method, then it'll be used as config to build [Deltatuner model](https://github.com/intel/e2eAIOK/tree/main/e2eAIOK/deltatuner) object.|
 |enable_gradient_checkpointing|False|enable gradient checkpointing to save GPU memory, but will cost more compute runtime|
+|chat_template|None|User-defined chat template.|
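For illustration, a hypothetical excerpt of a finetuning YAML using the new parameter; only `chat_template` comes from the table above, and the surrounding keys and layout are assumed, not taken from this repo:

```yaml
General:
  base_model: mosaicml/mpt-7b
  # Optional user-defined Jinja chat template. When left unset (None), the
  # model's own chat_template is used, falling back to the default template.
  chat_template: "{% for message in messages %}{{ message['role'] }}: {{ message['content'] }}\n{% endfor %}"
```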
Have you compared the impact of different templates on fine-tuning performance?
Not yet.
Force-pushed from 42825d3 to 4fa89cc.
Commits (sign-off trailers consolidated):
- Modify chat template. Signed-off-by: minmingzhu <minming.zhu@intel.com>
- Add unit test. Signed-off-by: minmingzhu <minming.zhu@intel.com>
- Update: fix blocking; fix setup and getting started; add dependencies for tests and update pyproject.toml; update dependencies and test workflow; fix torch_dist.py; update OpenAI SDK installation and start Ray cluster. Signed-off-by: Wu, Xiaochang <xiaochang.wu@intel.com>
- Single test; fix hang error.
- Use base model mpt-7b instead of mpt-7b-chat; manually specify the tokenizer; update doc/finetune_parameters.md. Signed-off-by: minmingzhu <minming.zhu@intel.com>
No description provided.