
[REQUEST] torch equivalent api model.no_sync() #1902

Open

Description

Hi, I'm adding DeepSpeed to my distributed model training framework.

When using PyTorch's native APIs, everything works fine. For distributed training, I would originally wrap the model in nn.parallel.DistributedDataParallel and use the model.no_sync() API to skip unnecessary gradient synchronization (e.g. during gradient accumulation). The sketch below shows the pattern I mean.
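For context, here is a minimal sketch of that DDP pattern; `model`, `dataloader`, `optimizer`, `loss_fn`, and `accum_steps` are placeholders, and the process group is assumed to be initialized already:

```python
import contextlib
from torch.nn.parallel import DistributedDataParallel as DDP

def train_with_grad_accum(model, dataloader, optimizer, loss_fn, accum_steps):
    # Assumes torch.distributed.init_process_group() has already been called.
    ddp_model = DDP(model)
    optimizer.zero_grad()
    for step, (inputs, targets) in enumerate(dataloader):
        is_boundary = (step + 1) % accum_steps == 0
        # no_sync() disables the gradient all-reduce for backward passes run
        # inside it; gradients still accumulate locally in param.grad.
        ctx = contextlib.nullcontext() if is_boundary else ddp_model.no_sync()
        with ctx:
            loss = loss_fn(ddp_model(inputs), targets) / accum_steps
            loss.backward()
        if is_boundary:
            # The all-reduce ran on this last backward, so step the optimizer now.
            optimizer.step()
            optimizer.zero_grad()
```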

I cannot find an equivalent API in DeepSpeed. Can you offer some help?

Labels: enhancement (New feature or request)
