Bug fix for the "Link bit16 and fp32 parameters in partition" #5681

Merged
merged 4 commits into microsoft:master
Jun 26, 2024

Conversation

U-rara (Contributor) commented Jun 18, 2024

In the function _link_all_hp_params:

def _link_all_hp_params(self):
    dp_world_size = dist.get_world_size(group=self.dp_process_group)
    if self.cpu_offload:
        self._get_offload_gradient_dict()

    for i, _ in enumerate(self.optimizer.param_groups):
        # Link bit16 and fp32 params in partition
        partition_id = dist.get_rank(group=self.real_dp_process_group[i])
        partition_size = self.bit16_groups_flat[i].numel() // dp_world_size
        flat_hp_partition = self.single_partition_of_fp32_groups[i]
        link_hp_params(lp_param_list=self.bit16_groups[i],
                       flat_hp_partition=flat_hp_partition,
                       gradient_dict=self.averaged_gradients,
                       offload_gradient_dict=self.offload_gradient_dict,
                       use_offload=self.cpu_offload,
                       param_group_index=i,
                       partition_start=partition_id * partition_size,
                       partition_size=partition_size,
                       dp_group=self.real_dp_process_group[i])

dp_world_size = dist.get_world_size(group=self.dp_process_group) always returns the global data-parallel world size.
However, for MoE parameter groups, the line partition_size = self.bit16_groups_flat[i].numel() // dp_world_size produces an incorrect partition_size when ep_size > 1 (i.e., when expert parallelism is enabled), because expert parameters are partitioned across the smaller expert-data-parallel group, not the global one.
As a result, only some of the MoE parameters are correctly processed by link_hp_params, while the remaining parameters have _hp_mapping set to None.
Consequently, some parameters are missing from self._param_slice_mappings = self._create_param_mapping(), which directly causes errors when storing the optimizer state file for MoE parameters.
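The arithmetic behind the bug can be shown with a minimal sketch. The numbers below are illustrative stand-ins, not values from the PR: assume a global data-parallel world size of 8 and ep_size = 2, so an expert parameter group is actually partitioned across only 8 // 2 = 4 ranks.

```python
# Hedged sketch: why the *global* DP world size yields the wrong
# partition size for expert (MoE) parameter groups when ep_size > 1.
# All numbers are illustrative assumptions, not taken from DeepSpeed.

global_dp_world_size = 8   # dist.get_world_size(group=self.dp_process_group)
ep_size = 2                # expert-parallelism degree
# Size of the group that actually partitions the expert parameters
# (corresponds to real_dp_process_group[i] for an MoE group):
expert_dp_world_size = global_dp_world_size // ep_size  # 4

numel = 1024               # numel of a flattened expert bit16 group

wrong_partition_size = numel // global_dp_world_size    # 1024 // 8 = 128
right_partition_size = numel // expert_dp_world_size    # 1024 // 4 = 256

# With the wrong size, link_hp_params only covers 128 of the 256
# elements this rank actually owns, so parameters in the uncovered
# tail never receive an _hp_mapping and are later dropped from
# _param_slice_mappings.
print(wrong_partition_size, right_partition_size)  # 128 256
```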

To fix this bug, we need to use the correct dp_world_size for each parameter group:

def _link_all_hp_params(self):
    if self.cpu_offload:
        self._get_offload_gradient_dict()

    for i, _ in enumerate(self.optimizer.param_groups):
        # Link bit16 and fp32 params in partition
        partition_id = dist.get_rank(group=self.real_dp_process_group[i])
        partition_size = self.bit16_groups_flat[i].numel() // dist.get_world_size(group=self.real_dp_process_group[i])  # <--
        flat_hp_partition = self.single_partition_of_fp32_groups[i]
        link_hp_params(lp_param_list=self.bit16_groups[i],
                       flat_hp_partition=flat_hp_partition,
                       gradient_dict=self.averaged_gradients,
                       offload_gradient_dict=self.offload_gradient_dict,
                       use_offload=self.cpu_offload,
                       param_group_index=i,
                       partition_start=partition_id * partition_size,
                       partition_size=partition_size,
                       dp_group=self.real_dp_process_group[i])
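A quick sanity check of the fix, as a hedged sketch with made-up sizes (the helper below is hypothetical, not a DeepSpeed API): using each group's own world size guarantees that the per-rank partitions tile the flattened group exactly, for dense and expert groups alike.

```python
# Illustrative check: per-group world size must tile the flat group.
# partition_bounds is a hypothetical helper mirroring the PR's
# partition_start / partition_size arithmetic.

def partition_bounds(numel, world_size):
    """Return (start, size) for each rank in a group, as in the PR:
    start = rank * size, size = numel // world_size."""
    size = numel // world_size
    return [(rank * size, size) for rank in range(world_size)]

# Dense group: partitioned over the full DP group (8 ranks, assumed).
dense = partition_bounds(numel=800, world_size=8)
# Expert group with ep_size = 2: partitioned over only 4 ranks.
expert = partition_bounds(numel=800, world_size=4)

# Every element of each flat group is covered exactly once.
assert sum(size for _, size in dense) == 800
assert sum(size for _, size in expert) == 800
# Using the *global* world size (8) for the expert group would
# cover only 800 // 8 * 4 = 400 of its 800 elements.
```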

@tjruwase tjruwase requested review from tohtana and removed request for mrwyattii June 18, 2024 14:01
@U-rara U-rara requested a review from tjruwase June 18, 2024 16:11
@U-rara
Contributor Author

U-rara commented Jun 18, 2024

@tjruwase Thank you for your previous review. I have resubmitted a revision regarding the formatting. PTAL.

@tohtana tohtana added this pull request to the merge queue Jun 24, 2024
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Jun 24, 2024
@loadams loadams added this pull request to the merge queue Jun 26, 2024
Merged via the queue into microsoft:master with commit 224a05c Jun 26, 2024
15 checks passed