[Trainer] Support skip data intervals #8989
Conversation
Thanks for your contribution!
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@            Coverage Diff             @@
##           develop    #8989     +/-  ##
===========================================
- Coverage    53.26%   53.24%   -0.02%
===========================================
  Files          652      652
  Lines       105587   105639      +52
===========================================
+ Hits         56237    56252      +15
- Misses       49350    49387      +37

☔ View full report in Codecov by Sentry.
paddlenlp/trainer/trainer.py
Outdated

if args.recompute:
    if not args.debug_data:
Hmm, does debug_data mean the model and everything else doesn't run at all?
Does this need to be exposed externally, or should it be deleted once development is done?
Right, debug_data only prints the data without loading the model, and no training happens either. The intent here is to add it as a general-purpose feature.
If it's just a debug mode for our internal use, I don't think adding it is very valuable.
paddlenlp/trainer/trainer.py
Outdated

# Skip data
if should_skip_data(self.state.global_step, self.args.skip_data_intervals):
    logger.warning(f"Skip data at global step {self.state.global_step+1}, sub step {step_control}")
    logger.warning(f"{self.tokenizer.batch_decode(inputs['input_ids'], skip_special_tokens=True)}")
Let's not add this one.
logger.warning(f"{self.tokenizer.batch_decode(inputs['input_ids'], skip_special_tokens=True)}") |
This warning is for printing the skipped data. Removing it is fine too; the main point is to let users see exactly what data was skipped.
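For context, here is a minimal sketch of what a helper like should_skip_data could look like, assuming skip_data_intervals is a list of inclusive [start, end] pairs of 1-based global steps; the exact format in the merged PR may differ:

def should_skip_data(global_step, skip_data_intervals):
    """Return True if the upcoming step falls inside any skip interval.

    Assumes intervals are inclusive [start, end] pairs of 1-based global
    steps; `global_step` counts steps already completed, so the step
    about to run is `global_step + 1`.
    """
    if not skip_data_intervals:
        return False
    for start, end in skip_data_intervals:
        if start <= global_step + 1 <= end:
            return True
    return False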
self.state.global_step += 1
self.state.epoch = epoch + (step + 1) / steps_in_epoch
self.control = self.callback_handler.on_step_end(args, self.state, self.control)
self._maybe_log_save_evaluate(tr_loss, model, epoch, ignore_keys_for_eval, inputs=inputs)
This shouldn't be needed either, right?
_maybe_log_save_evaluate is called here in order to go through:
1. The reset of tr_loss:
PaddleNLP/paddlenlp/trainer/trainer.py, line 1308 in 48820cb:
tr_loss.subtract_(tr_loss)
2. The update of _globalstep_last_logged:
PaddleNLP/paddlenlp/trainer/trainer.py, line 1346 in 48820cb:
self._globalstep_last_logged = self.state.global_step
3. The normal eval flow. Otherwise the consumed_samples computation in the final eval goes wrong: https://github.com/PaddlePaddle/PaddleNLP/blob/48820cbc1fe986004f817c0517886735675732d2/paddlenlp/trainer/trainer.py#L2792C6-L2797C18
My main worry is whether running into eval, save, and all the other callbacks while skipping data could cause problems.
Or alternatively, could we only consume the data here and trigger nothing else? Updates to things like the step counter would of course still be applied.
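To make the option being discussed concrete, here is a rough sketch of the skip branch assembled from the diff hunks in this thread; the surrounding loop variables (epoch, step, steps_in_epoch, step_control) are assumed from the Trainer's training loop, and this is not the exact merged code:

# Inside the training loop, before the forward/backward pass.
if should_skip_data(self.state.global_step, self.args.skip_data_intervals):
    # Consume the batch but run no forward/backward and no optimizer step.
    self.state.global_step += 1
    self.state.epoch = epoch + (step + 1) / steps_in_epoch
    # Still fire on_step_end and the log/save/eval hook so that tr_loss is
    # reset, _globalstep_last_logged stays in sync, and eval/save decisions
    # stay consistent with the advanced step counter.
    self.control = self.callback_handler.on_step_end(args, self.state, self.control)
    self._maybe_log_save_evaluate(tr_loss, model, epoch, ignore_keys_for_eval, inputs=inputs)
    step_control = 0  # assumed: reset the sub-step counter as after a real step
    continue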
step_control += 1
if self.control.should_epoch_stop or self.control.should_training_stop:
    break
self.timers and self.timers("read-data").start()
I feel like you may not need a lot of this. Without any computation, I'm not sure whether triggering some of the callbacks is problematic.
This is here to make a few decisions, such as whether to run eval, save, or stop training. In my testing, executing the callbacks directly without a forward/backward pass raised no errors, though there may indeed be latent risks the tests didn't cover.
https://github.com/PaddlePaddle/PaddleNLP/blob/48820cbc1fe986004f817c0517886735675732d2/paddlenlp/trainer/trainer_callback.py#L432C1-L460C23
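For reference, the linked range implements the step-end flow control. A simplified paraphrase (not the exact PaddleNLP code; field and strategy names are assumptions) shows that it only inspects counters and configured strategies, which is why calling it without a forward/backward pass did not error:

# Paraphrased sketch of a default flow callback's on_step_end.
def on_step_end(self, args, state, control):
    if args.logging_steps and state.global_step % args.logging_steps == 0:
        control.should_log = True
    if args.eval_steps and state.global_step % args.eval_steps == 0:
        control.should_evaluate = True
    if args.save_steps and state.global_step % args.save_steps == 0:
        control.should_save = True
    if state.global_step >= state.max_steps:
        control.should_training_stop = True
    return control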
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
LGTM
* support skip data intervals
* add debug_data arg
* fix loss compute
* remove callback while skip data
* remove debug data
* add callback_handler
* remove debug_data
* fix conflict
PR types
New Feature
PR changes
Support skip data intervals
Description
New training arg skip_data_intervals specifies the data intervals to skip; during training, the data from the start global step to the end global step of each interval is passed over without being trained on.
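A hypothetical usage sketch, assuming the intervals are passed as a list of [start, end] global-step pairs; the exact value format for skip_data_intervals is an assumption based on the description above, not confirmed against the merged code:

from paddlenlp.trainer import TrainingArguments

# Hypothetical: skip the batches for global steps 101-200 and 501-600.
args = TrainingArguments(
    output_dir="./checkpoints",
    skip_data_intervals=[[101, 200], [501, 600]],
)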