PATCH: add back n-dim device-mesh + fix tp trainer saving #39693

Merged: 18 commits merged into main on Jul 28, 2025
Conversation

@S1ro1 (Member) commented Jul 26, 2025

  1. Fixes the ndim check on device_mesh. This was previously merged in "Allow device_mesh to have multiple dims" #38949 but was reverted by mistake in "Add ep" #39501; we need it for the upcoming accelerate/axolotl release (see the mesh sketch below).
  2. Makes sure we save on the correct distributed ranks when TP is allowed in the trainer.

cc @ArthurZucker @SunMarc
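For context, here is a minimal sketch (not part of the PR) of the kind of multi-dimensional device mesh this re-enables, with a named "tp" dimension that downstream code can slice out; launch under torchrun with 4 processes:

# Minimal sketch (not the PR's code): a 2-D data-parallel x tensor-parallel mesh.
# Naming the dims lets downstream code select the 1-D "tp" sub-mesh.
from torch.distributed.device_mesh import init_device_mesh

mesh_2d = init_device_mesh("cuda", (2, 2), mesh_dim_names=("dp", "tp"))
tp_mesh = mesh_2d["tp"]              # 1-D sub-mesh used for tensor parallelism
print(mesh_2d.ndim, tp_mesh.size())  # 2, 2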

@S1ro1 added the `for patch` label (Tag issues / labels that should be included in the next patch) on Jul 26, 2025
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@S1ro1 changed the title from "PATCH: add back n-dim device-mesh" to "PATCH: add back n-dim device-mesh + fix tp hook registration" on Jul 26, 2025

[For maintainers] Suggested jobs to run (before merge)

run-slow: llama

@S1ro1 changed the title from "PATCH: add back n-dim device-mesh + fix tp hook registration" to "PATCH: add back n-dim device-mesh + fix tp trainer saving" on Jul 28, 2025
Comment on lines 4620 to 4624
if "tp" not in device_mesh.mesh_dim_names:
raise ValueError(
"When using `tp_plan`, the `device_mesh` must contain a 'tp' dimension. "
"Please provide a valid `device_mesh`."
)
Collaborator:
I don't think we should enforce 'tp' in the device mesh! for inference we never use that!~
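For reference, the inference path mentioned above does not hand a device_mesh to from_pretrained at all; a minimal hedged sketch (the model id is a placeholder, run under torchrun):

# Sketch of TP inference without a user-provided device_mesh (placeholder model id).
# transformers creates its own 1-D mesh internally on this path, so there is no
# multi-dim mesh whose dim names could be checked for "tp".
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B",  # placeholder; any model with a TP plan
    tp_plan="auto",             # shard weights across the TP group
    torch_dtype=torch.bfloat16,
)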

Comment on lines 4625 to 4627
device_mesh = device_mesh["tp"]
tp_size = device_mesh["tp"].size()
device_map = torch.device(f"{device_mesh.device_type}:{int(os.environ['LOCAL_RANK'])}")
Collaborator:
only do this if tp exists in it!
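As a small standalone illustration of the device selection in the quoted lines (assumes a CUDA mesh and a torchrun launch, which sets LOCAL_RANK per process):

# Standalone sketch (not the transformers code): map the current process to its GPU
# once the 1-D "tp" sub-mesh has been selected.
import os
import torch

local_rank = int(os.environ.get("LOCAL_RANK", "0"))
device_map = torch.device(f"cuda:{local_rank}")  # assumes device_mesh.device_type == "cuda"
print(device_map)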

@@ -3953,6 +3953,13 @@ def save_model(self, output_dir: Optional[str] = None, _internal_call: bool = Fa
if IS_SAGEMAKER_MP_POST_1_10:
    # 'user_content.pt' indicates model state_dict saved with smp >= 1.10
    Path(os.path.join(output_dir, "user_content.pt")).touch()
# We are in N-D parallelism if we have parallelism_config set, so we check accelerate if we're on a to_save rank
elif (getattr(self.accelerator, "parallelism_config")) is not None:
Collaborator:
okay!
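A standalone sketch of the guard this hunk adds, with a stand-in accelerator object (the real decision of which ranks save is delegated to accelerate):

# Stand-in objects only; not the Trainer's code. If a parallelism_config is present
# we are in N-D parallelism, so checkpoint writing is gated on the ranks accelerate
# designates rather than on global rank 0 alone.
class FakeAccelerator:
    parallelism_config = object()  # stand-in for accelerate's parallelism config

accelerator = FakeAccelerator()
if getattr(accelerator, "parallelism_config", None) is not None:
    print("N-D parallelism: ask accelerate whether this rank should save")
else:
    print("no parallelism_config: fall back to the usual rank-0 save path")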

@ArthurZucker (Collaborator) left a comment
Thanks!

Comment on lines 4621 to 4626
if "tp" not in device_mesh.mesh_dim_names:
raise ValueError(
"When using `tp_plan`, the `device_mesh` must contain a 'tp' dimension. "
"Please provide a valid `device_mesh`."
)
device_mesh = device_mesh["tp"]
Collaborator:
can it be ndim > 1 but not mesh dim names?

@S1ro1 (Member Author):
Nope, it can't IMO. How do you think we should take the correct submesh then?

Collaborator:
No no I just want to be sure as the basic initialization we do is without providing a mesh name!

@S1ro1 (Member Author) commented Jul 28, 2025
Oh, that works: we check for "tp" only if the mesh is user-provided and mesh.ndim > 1, so the basic initialization still works. If that check passes we select the "tp" submesh and use it as a 1-D mesh afterwards, as if we had created it ourselves in initialize_tensor_parallelism.
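To make the agreed behaviour concrete, a minimal sketch with a hypothetical helper name (not the exact transformers code):

# Hypothetical helper mirroring the logic described above: a 1-D mesh (e.g. the one
# initialize_tensor_parallelism builds) is used directly, while a user-provided
# multi-dim mesh must expose a "tp" dimension to slice on.
from torch.distributed.device_mesh import DeviceMesh


def select_tp_submesh(device_mesh: DeviceMesh) -> DeviceMesh:
    if device_mesh.ndim > 1:
        if "tp" not in (device_mesh.mesh_dim_names or ()):
            raise ValueError("A multi-dimensional `device_mesh` must contain a 'tp' dimension.")
        return device_mesh["tp"]  # 1-D sub-mesh, used as if we had created it ourselves
    return device_mesh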

@S1ro1 (Member Author) commented Jul 28, 2025

Failures are unrelated, merging.

@S1ro1 S1ro1 enabled auto-merge (squash) July 28, 2025 12:17
@S1ro1 S1ro1 merged commit 4c7da9f into main Jul 28, 2025
26 checks passed
@S1ro1 S1ro1 deleted the fsdp2-tp branch July 28, 2025 12:29
winglian pushed a commit to winglian/transformers that referenced this pull request Jul 28, 2025
…e#39693)

* Feat: something

* Feat: initial changes

* tmp changes to unblock

* Refactor

* remove todo

* Feat: docstring

* Fix: saving of distributed model in trainer

* Fix: distributed saving with trainer

* Feat: add pure tp saving

* Only require tp dim if ndim > 1

* Fix: default to None

* Fix: better comments/errors

* Fix: properly check tp_size attribute

* Fix: properly check for None in tp_size

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
ArthurZucker pushed a commit that referenced this pull request Jul 29, 2025