TE convert model with deferred initialization #3646

Status: Draft — wants to merge 1 commit into base branch `main`.
Conversation

@mayukh-stackav commented Jun 20, 2025

This PR adds a memory-efficient way to convert models with Transformer Engine via lazy weight initialization. Transformer Engine added deferred initialization in NVIDIA/TransformerEngine#596; this change pulls that capability into the convert_model function. Loading large models directly into memory causes OOMs, especially in FSDP training workflows; deferring initialization avoids fully materializing a model before it is passed to an FSDP wrapper.
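To illustrate the idea without depending on Transformer Engine or FSDP, here is a minimal, stdlib-only Python sketch of deferred initialization. The names `DeferredLinear`, `materialize`, and the toy `convert_model` are hypothetical; in the real libraries this role is played by meta-device tensors that are materialized (e.g. per shard) after the distributed wrapper has been applied.

```python
# Sketch of deferred (lazy) weight initialization. Hypothetical classes and
# function names for illustration only -- not the Transformer Engine API.
import random


class DeferredLinear:
    """A layer whose weights are described, but not allocated, at construction."""

    def __init__(self, in_features, out_features):
        self.in_features = in_features
        self.out_features = out_features
        self.weight = None  # no memory allocated yet (analogous to a "meta" tensor)

    @property
    def is_materialized(self):
        return self.weight is not None

    def materialize(self, seed=0):
        # Allocate and initialize the weights only now -- e.g. after a
        # sharding wrapper has decided which parameters this rank owns.
        rng = random.Random(seed)
        self.weight = [
            [rng.uniform(-0.1, 0.1) for _ in range(self.in_features)]
            for _ in range(self.out_features)
        ]


def convert_model(layer_shapes):
    # Toy convert_model-style pass: construct every layer without touching
    # real memory; materialization happens later, layer by layer.
    return [DeferredLinear(i, o) for i, o in layer_shapes]


model = convert_model([(1024, 1024), (1024, 4096)])
assert not any(layer.is_materialized for layer in model)  # nothing allocated yet

for layer in model:
    layer.materialize()
assert all(layer.is_materialized for layer in model)  # allocated on demand
```

The point of the pattern is that `convert_model` stays cheap regardless of model size: memory is only committed at `materialize` time, which in an FSDP workflow would happen after sharding, so no rank ever holds the full unsharded model.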

@mayukh-stackav force-pushed the transformer-engine-meta-device-loading branch from 753277a to c3bfab1 on June 23, 2025 at 13:16.
Contributor

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

1 participant