
Fix per-layer clipping in distributed #347

Conversation

@alexandresablayrolles (Contributor) commented Feb 4, 2022

Problem

DDPPerLayerOptimizer was buggy and our test did not catch that.

Changes

Per-layer clipping uses the regular PyTorch DDP (Distributed Data Parallel) instead of our custom DPDDP (Differentially Private DDP), but our PrivacyEngine only checked for DPDDP, not for DDP.
I fixed PrivacyEngine to set distributed=True when the wrapper is DDP as well as DPDDP.

With that fix the test actually exercises the distributed path, and I modified DDPPerLayerOptimizer so it passes (calling _check_skip_next_step on step, and adding an if self._step_skip_queue and self._step_skip_queue[0] guard when adding per-parameter noise).
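
For illustration, a minimal sketch of the skip-queue behavior described above (the class structure and method bodies are simplified stand-ins, not the exact Opacus implementation):

import torch

class DDPPerLayerOptimizerSketch:
    def __init__(self, params, noise_multiplier):
        self.params = list(params)
        self.noise_multiplier = noise_multiplier
        # One flag per upcoming optimizer step; True means "skip this step"
        # (e.g. Poisson sampling produced an empty batch on this worker).
        self._step_skip_queue = []

    def signal_skip_step(self, do_skip=True):
        self._step_skip_queue.append(do_skip)

    def _check_skip_next_step(self, pop_next=True):
        if not self._step_skip_queue:
            return False
        return self._step_skip_queue.pop(0) if pop_next else self._step_skip_queue[0]

    def _add_noise_to_param(self, p):
        # Per-parameter noise addition (in Opacus this runs from a gradient hook).
        # The guard added in this PR: skip the noise when the upcoming step is skipped.
        if self._step_skip_queue and self._step_skip_queue[0]:
            return
        p.grad = p.grad + self.noise_multiplier * torch.randn_like(p.grad)

    def step(self):
        # The other change: consult (and pop) the skip queue when step() is called.
        if self._check_skip_next_step():
            return
        for p in self.params:
            if p.grad is not None:
                pass  # the real clipping / averaging / parameter update happens here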

@facebook-github-bot added the CLA Signed and fb-exported labels on Feb 4, 2022
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D34002861

@romovpa (Contributor) left a comment:


Added some comments.
Also, @alexandresablayrolles, could you describe in a few words the problem and the proposed solution? I haven't worked on DDP, so it's hard for me to validate the logic.

@@ -327,7 +328,7 @@ def make_private(
if noise_generator and self.secure_mode:
raise ValueError("Passing seed is prohibited in secure mode")

- distributed = type(module) is DPDDP
+ distributed = type(module) is DPDDP or type(module) is DDP
@romovpa (Contributor):

Actually, we should avoid type comparisons like this. Do this instead:

isinstance(module, (DPDDP, DDP))
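
For context, a self-contained sketch of that suggestion; the DPDDP import path is an assumption about where the Opacus alias lives, and in the actual privacy_engine.py both aliases are already available:

from torch.nn.parallel import DistributedDataParallel as DDP

from opacus.distributed import DifferentiallyPrivateDistributedDataParallel as DPDDP

def is_distributed(module) -> bool:
    # `module` is the (possibly wrapped) model passed to make_private().
    # isinstance also covers subclasses, unlike the type() comparison above.
    return isinstance(module, (DPDDP, DDP))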

Comment on lines 57 to 59
print(
f"Running {clipping}-clipping DDP {withdp} differential privacy example on rank {rank}."
)
@romovpa (Contributor):

We shouldn't use prints anywhere in the library, not even in tests; they may have unexpected side effects.
If we really need this output here, we should use logging:

import logging
logging.info(f"Running {clipping}-clipping DDP {withdp} differential privacy example on rank {rank}.")
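
A slightly more conventional variant of that suggestion, using a module-level logger with lazy %-formatting so the string is only built when the message is actually emitted (a sketch; clipping, withdp and rank come from the surrounding test function):

import logging

logger = logging.getLogger(__name__)

def log_example(clipping, withdp, rank):
    # Same message as the print above, routed through the logging framework.
    logger.info(
        "Running %s-clipping DDP %s differential privacy example on rank %s.",
        clipping,
        withdp,
        rank,
    )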

@@ -77,7 +80,7 @@ def demo_basic(rank, weight, world_size, dp, clipping):
if dp:
max_grad_norm = 1e8
@romovpa (Contributor):

Let's follow PEP 8 and name this constant in capital letters: MAX_GRAD_NORM. That will emphasize that it is predefined and hard-coded.

@alexandresablayrolles (Contributor, Author):

It's not a constant: if clipping is per-layer, it becomes a list (see line 80).
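
For illustration, the pattern under discussion looks roughly like this (a self-contained sketch; the real test builds the list from its own model's parameters):

import torch.nn as nn

model = nn.Linear(4, 2)   # stand-in for the test's model
clipping = "per_layer"

max_grad_norm = 1e8       # effectively disables clipping in this test
if clipping == "per_layer":
    # Per-layer clipping takes one norm bound per trainable parameter,
    # so the scalar is expanded into a list of matching length.
    max_grad_norm = [max_grad_norm] * len(list(model.parameters()))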

@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D34002861

alexandresablayrolles pushed a commit to alexandresablayrolles/opacus that referenced this pull request Feb 11, 2022
Summary:
Pull Request resolved: pytorch#347

PrivacyEngine now recognizes DDP as well (in the case of per layer clipping)
DDPPerLayerOptimizer is updated with pre_step

Reviewed By: pierrestock

Differential Revision: D34002861

fbshipit-source-id: e968c7c2890677e41a622373e956ae49f787a75d
Summary:
Pull Request resolved: pytorch#347

PrivacyEngine now recognizes DDP as well (in the case of per layer clipping)
DDPPerLayerOptimizer is updated with pre_step

Reviewed By: pierrestock

Differential Revision: D34002861

fbshipit-source-id: 467e9151b2018d68f35dcaf801dffc48d2c88297
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D34002861

Labels: CLA Signed, fb-exported

Successfully merging this pull request may close these issues.

DistributedPerLayerOptimizer does not work on v1.0.1