DeepSpeed MoE #1310

Merged
awan-10 merged 22 commits into master from staging-moe-zero-v3 on Aug 17, 2021

Conversation

@awan-10 (Contributor) commented on Aug 17, 2021

This PR introduces DeepSpeed Mixture of Experts (MoE) support. The code has been written in collaboration with many contributors at Microsoft, including the Z-code team.
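
For readers unfamiliar with the feature, below is a minimal sketch of the user-facing MoE layer that this PR adds, based on the DeepSpeed MoE documentation rather than on this PR's diff. The ExpertFFN module, the sizes, and the launch setup are illustrative assumptions, and exact signatures may have shifted since 2021.

# Minimal sketch (assumptions noted above); expects to be started with the
# deepspeed launcher so torch.distributed and the expert-parallel groups
# can be initialized.
import torch
import deepspeed
from deepspeed.moe.layer import MoE

class ExpertFFN(torch.nn.Module):
    """Plain feed-forward block; the MoE layer replicates it to build each expert."""
    def __init__(self, hidden_size):
        super().__init__()
        self.fc1 = torch.nn.Linear(hidden_size, 4 * hidden_size)
        self.fc2 = torch.nn.Linear(4 * hidden_size, hidden_size)

    def forward(self, x):
        return self.fc2(torch.nn.functional.relu(self.fc1(x)))

deepspeed.init_distributed()  # uses the environment set up by the launcher

hidden_size = 1024
moe_layer = MoE(
    hidden_size=hidden_size,
    expert=ExpertFFN(hidden_size),  # module to replicate as the experts
    num_experts=8,                  # total experts across the expert-parallel group
    k=1,                            # top-1 gating
)

# The layer routes tokens to experts and returns the combined output together
# with the auxiliary load-balancing loss (added to the training loss).
hidden_states = torch.randn(4, 16, hidden_size)
output, l_aux, _ = moe_layer(hidden_states)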

jeffra and others added 20 commits August 16, 2021 19:58
Co-authored-by: Alex Muzio <Alex.Muzio@microsoft.com>
Co-authored-by: Alex Muzio <alferre@microsoft.com>
Co-authored-by: Ammar Ahmad Awan <ammar.awan@microsoft.com>
Co-authored-by: Conglong Li <conglong.li@gmail.com>
Co-authored-by: Felipe Cruz Salinas <Andres.Cruz@microsoft.com>
Co-authored-by: Jeff Rasley <jerasley@microsoft.com>
Co-authored-by: Reza Yazdani <44502768+RezaYazdaniAminabadi@users.noreply.github.com>
Co-authored-by: Reza Yazdani <reyazda@microsoft.com>
Co-authored-by: Samyam Rajbhandari <samyamr@microsoft.com>
Co-authored-by: Shaden Smith <shaden.smith@microsoft.com>
Co-authored-by: Young Jin Kim <youki@microsoft.com>
Co-authored-by: alexandremuzio <ax.muzio@gmail.com>
Co-authored-by: bapatra <bapatra@microsoft.com>
@awan-10 enabled auto-merge (squash) on August 17, 2021 05:00
@awan-10 merged commit f284324 into master on Aug 17, 2021
@awan-10 deleted the staging-moe-zero-v3 branch on September 15, 2021 19:10

@@ -23,7 +23,8 @@ void Adam_Optimizer::Step(float* _params,
                           float* _exp_avg,
                           float* _exp_avg_sq,
                           size_t _param_size,
-                          __half* dev_params)
+                          __half* dev_params,
+                          bool half_precision)

@tjruwase (Contributor) commented on May 6, 2024

@awan-10, @jeffra, @RezaYazdaniAminabadi, sorry, I realize this is almost 3 years old, but I need to understand the introduction of half_precision in this PR. Who can I talk to? Thanks!

For context, this affects a PR currently under review: #5409

@RezaYazdaniAminabadi (Contributor) replied:

Hi @tjruwase,

I can chat with you about this. I think this was mostly added here to make sure the right AVX operation is selected for FP32 vs. FP16. However, as I can see, it is now templated in the new PR.

Thanks,
Reza

@tjruwase (Contributor) replied:

@RezaYazdaniAminabadi, thanks for the response. Your explanation was my guess as well. I think the entire code can be greatly simplified by using templates, together with the improved type support in torch. Can you please help review the new PR and engage in the conversation there?
