
Add scripts for running gcn with dp & fix tensor under different device #313

Merged · 3 commits merged into alibaba:master on Aug 16, 2022

Conversation

@rayrayraykk (Collaborator)

As the title says.

@rayrayraykk added the `bug` (Something isn't working) and `FedHPO` (FedHPO related) labels on Aug 11, 2022
@joneswong (Collaborator)

@DavdGao hi Dawei, we are unfamiliar with this algorithm. Could you explain to us what $(\epsilon, \delta)$ each single query (training round) satisfies when we specify constant=1 and eps in {50, 500, 5000}? Thanks!

joneswong previously approved these changes on Aug 11, 2022

@joneswong (Collaborator) left a comment

The shell script looks good.

@DavdGao (Collaborator) commented on Aug 12, 2022

> @DavdGao hi Dawei, we are unfamiliar with this algorithm. Could you explain to us what $(\epsilon, \delta)$ each single query (training round) satisfies when we specify constant=1 and eps in {50, 500, 5000}? Thanks!

The $(\epsilon, \delta)$-DP guarantee promises that, for any two neighboring datasets $D$ and $D'$ and any measurable set $S$, $P(M(D)\in S)\leq \exp(\epsilon)\, P(M(D')\in S) + \delta$, where $M$ is the training process in our setting.
In NbAFL, $\epsilon$ can be specified by the user, while $\delta$ is determined by the constant $c$.
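
To spell out how the constant pins the per-round $\delta$: under the classical Gaussian-mechanism calibration assumed by NbAFL, the noise multiplier satisfies $c = \sqrt{2\ln(1.25/\delta)}$, so $\delta$ follows from $c$ alone. The sketch below is illustrative only (function names are made up, and it is not the FederatedScope implementation):

```python
import math

def delta_from_constant(c):
    """Invert c = sqrt(2 * ln(1.25 / delta)) to recover the per-round delta.

    Assumes the classical Gaussian-mechanism calibration used by NbAFL.
    """
    return 1.25 * math.exp(-c ** 2 / 2.0)

# With constant=1, the per-round delta is about 1.25 * exp(-0.5) ~= 0.76,
# independent of eps; eps in {50, 500, 5000} only rescales the injected noise.
print(delta_from_constant(1.0))
```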

@DavdGao (Collaborator) left a comment


Please see the inline comments.

benchmark/FedHPOB/scripts/gcn/cora_dp.yaml (resolved)
benchmark/FedHPOB/scripts/gcn/cora_dp.yaml (outdated, resolved)
@rayrayraykk (Collaborator, Author)

> Please see the inline comments.

Updated accordingly.

@DavdGao (Collaborator) left a comment


LGTM

@@ -63,8 +63,12 @@ def init_nbafl_ctx(base_trainer):
    ctx.regularizer = get_regularizer(cfg.regularizer.type)

    # set noise scale during upload
    if cfg.trainer.type == 'nodefullbatch_trainer':
        num_train_data = sum(ctx.train_loader.dataset[0]['train_mask'])
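
For context, the hunk only shows how `num_train_data` is obtained for the node-level full-batch trainer (a single graph whose training nodes are flagged by a boolean `train_mask`). A plausible continuation, sketching how the count could feed the uplink noise scale mentioned in the comment above, is shown below; the `cfg.nbafl.*` field names and the formula layout are assumptions, not the verbatim FederatedScope code:

```python
def nbafl_upload_scale(cfg, ctx):
    """Sketch: derive the per-round uplink noise scale from the sample count.

    Field names (cfg.nbafl.constant, .w_clip, .epsilon) are assumed here.
    """
    if cfg.trainer.type == 'nodefullbatch_trainer':
        # A single graph: count the True entries of its boolean train_mask.
        num_train_data = int(ctx.train_loader.dataset[0]['train_mask'].sum())
    else:
        # Mini-batch trainers: the dataset length is the sample count.
        num_train_data = len(ctx.train_loader.dataset)
    # sigma_u = constant * 2 * w_clip / (num_train_data * epsilon)
    return cfg.nbafl.constant * 2 * cfg.nbafl.w_clip / \
        (num_train_data * cfg.nbafl.epsilon)
```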
Collaborator


Maybe we should wrap all the datasets/dataloaders with a unified `num_samples` attribute in the future.
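
A minimal sketch of that suggestion, assuming a hypothetical wrapper class (not part of the current FederatedScope API) that exposes a unified `num_samples` for both cases:

```python
class WrappedData:
    """Hypothetical wrapper exposing a unified `num_samples` attribute,
    so callers never branch on the trainer/dataset type."""

    def __init__(self, loader, num_samples):
        self.loader = loader
        self.num_samples = num_samples

    @classmethod
    def from_node_full_batch(cls, loader):
        # A single graph whose training nodes are flagged by a boolean mask.
        graph = loader.dataset[0]
        return cls(loader, int(graph['train_mask'].sum()))

    @classmethod
    def from_minibatch(cls, loader):
        # Standard datasets: one sample per item.
        return cls(loader, len(loader.dataset))
```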

Collaborator Author


Agree.

@rayrayraykk merged commit 86f3268 into alibaba:master on Aug 16, 2022
Schichael pushed a commit to Schichael/FederatedScope_thesis that referenced this pull request on Sep 7, 2022