[fix] create parallel_state before debug_rollout_only early return #642

guapisolo wants to merge 1 commit into radixark:main
Conversation
debug_rollout_only mode calls train() which needs parallel_state for rollout data preprocessing and logging. Previously parallel_state was only created after model initialization, which is skipped in debug_rollout_only mode. Move it before the early return with model=None. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Code Review
This pull request aims to fix a crash in debug_rollout_only mode by ensuring parallel_state is created before an early return. While the intent is correct, the current implementation introduces a critical issue: it initializes parallel_state with model=None for all execution paths. This will cause a crash when virtual pipeline parallelism (VPP) is used, as the model object is required in that scenario. My review provides comments and a suggestion to correctly scope the fix to the debug mode while preserving the necessary logic for the standard execution path.
miles/backends/megatron_utils/actor.py (83-87)
While the intent to initialize parallel_state for debug_rollout_only is correct, creating it here unconditionally with model=None will cause a crash if virtual pipeline parallelism (VPP) is used, as create_megatron_parallel_state requires a model object. This initialization should only happen inside the debug_rollout_only block.
```python
if self.args.debug_rollout_only:
    # debug_rollout_only still calls train() and needs parallel metadata for
    # rollout data preprocessing and logging.
    self.parallel_state = create_megatron_parallel_state(model=None)
    return 0
```
miles/backends/megatron_utils/actor.py (100)
This line should not be removed. For the normal execution path (when debug_rollout_only is false), parallel_state must be initialized here after self.model is created. This is necessary for features like virtual pipeline parallelism (VPP) which depend on the model object.
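The two comments above describe an ordering constraint: the debug path needs `parallel_state` before the early return (with `model=None`), while the normal path must build it after the model exists so VPP-dependent logic sees a real model object. A minimal sketch of that corrected flow, using a stand-in for `create_megatron_parallel_state` and a hypothetical `Actor` class (the real helper and class live in `miles/backends/megatron_utils/actor.py` and depend on Megatron state not reproduced here):

```python
def create_megatron_parallel_state(model=None):
    # Stand-in: the real helper derives parallel metadata from Megatron
    # globals; with virtual pipeline parallelism it also needs the model.
    return {"model": model, "initialized": True}


class Actor:
    """Hypothetical simplification of the actor's initialization order."""

    def __init__(self, debug_rollout_only: bool):
        self.debug_rollout_only = debug_rollout_only
        self.model = None
        self.parallel_state = None

    def init(self) -> int:
        if self.debug_rollout_only:
            # debug_rollout_only skips model construction but still calls
            # train(), which needs parallel metadata, so create it here
            # with model=None before returning early.
            self.parallel_state = create_megatron_parallel_state(model=None)
            return 0

        # Normal path: build the model first, then derive parallel state
        # from it so VPP-dependent code receives a real model object.
        self.model = object()
        self.parallel_state = create_megatron_parallel_state(model=self.model)
        return 0
```

The point of the review is that the `model=None` call must stay inside the `debug_rollout_only` branch rather than replacing the post-model-construction call shared by both paths.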
debug_rollout_only will crash if we do not construct parallel_state.