
add new code about FedRep and Simple_tuning #564

Merged
Osier-Yi merged 12 commits into alibaba:master on Apr 3, 2023

Conversation

Alan-Qin (Contributor)

No description provided.

Osier-Yi requested review from yxdyc and Osier-Yi on March 31, 2023 at 08:55.
Osier-Yi (Collaborator) left a comment:

Please add the unit tests for FedRep & simple fine-tuning separately.

logger = logging.getLogger(__name__)


def wrap_FedRepTrainer(
        base_trainer: Type[GeneralTorchTrainer]) -> Type[GeneralTorchTrainer]:
    """
Collaborator comment:

Why delete this explanation?

@@ -63,6 +63,13 @@ def extend_fl_algo_cfg(cfg):
cfg.personalization.K = 5 # the local approximation steps for pFedMe
cfg.personalization.beta = 1.0 # the average moving parameter for pFedMe

# parameters for FedRep:
Collaborator comment:
Please add explanations for these new configs; please also add the explanation of these new configs to the file FederatedScope/federatedscope/core/configs/README.md.
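As an illustration of the inline documentation being requested, here is a hedged sketch of how new FedRep entries could be commented, in the style of the surrounding extend_fl_algo_cfg lines. The option names epoch_feature and epoch_linear are taken from the review discussion below, the default values are placeholders, and this is not the PR's actual diff.

from yacs.config import CfgNode as CN

# A hedged sketch only: option names follow the review discussion
# (cfg.personalization.epoch_feature / epoch_linear), and the defaults
# shown here are illustrative placeholders, not the PR's values.
cfg = CN()
cfg.personalization = CN()

# parameters for FedRep:
cfg.personalization.epoch_feature = 1  # local epochs for the shared feature extractor
cfg.personalization.epoch_linear = 1  # local epochs for the personalized local head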

    stdv = 1. / math.sqrt(param.size(-1))
    param.data.uniform_(-stdv, stdv)
else:
    param.data.uniform_(-stdv, stdv)
Collaborator comment:
This else branch implicitly assumes that the "weight" parameter is the first to appear; otherwise stdv will be used before it is assigned a value. It is therefore suggested to add an assertion here, or to switch to a more robust way of coding that pre-computes stdv.
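To make the suggestion concrete, below is a minimal sketch of the pre-computation idea, assuming the re-initialization targets a linear head: stdv is derived once from the weight shape before the loop, so no assumption about parameter ordering is needed. The helper name reinit_head is illustrative, not the PR's code.

import math

import torch.nn as nn


def reinit_head(head: nn.Linear) -> None:
    # Pre-compute stdv from the weight once, before touching any parameter,
    # so the bias can never be re-initialized with an undefined bound.
    stdv = 1. / math.sqrt(head.weight.size(-1))
    for param in head.parameters():
        param.data.uniform_(-stdv, stdv)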


def wrap_Simple_tuning_Trainer(
        base_trainer: Type[GeneralTorchTrainer]) -> Type[GeneralTorchTrainer]:

Collaborator comment:
Please add some explanation of the purpose of this wrapper function and of the meaning of Simple_tuning, as the name is too general.
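For illustration, one plausible reading of Simple_tuning, consistent with the head re-initialization snippet above, is sketched below: re-initialize the local (linear) head and fine-tune only it while the shared backbone stays frozen. This is a hedged interpretation rather than the PR's confirmed semantics; prepare_simple_tuning, head_name, and the freezing policy are illustrative assumptions.

import math

import torch.nn as nn


def prepare_simple_tuning(model: nn.Module, head_name: str = "fc") -> None:
    # Hedged sketch: re-initialize the (assumed) linear head, then freeze
    # everything else so that local fine-tuning only updates the head.
    head = getattr(model, head_name)
    stdv = 1. / math.sqrt(head.weight.size(-1))
    for param in head.parameters():
        param.data.uniform_(-stdv, stdv)
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith(head_name)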

def wrap_FedRepTrainer(
        base_trainer: Type[GeneralTorchTrainer]) -> Type[GeneralTorchTrainer]:

    init_FedRep_ctx(base_trainer)
Collaborator comment:
Please add some comments to remind users that this method should be used in conjunction with cfg.personalization.local_param and cfg.finetune.local_param. The basic idea should also be introduced, i.e., train the cfg.personalization.local_param parameters in the first cfg.personalization.epoch_linear rounds, and then train the other remaining parameters in the last cfg.personalization.epoch_feature rounds.
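The schedule described above could be illustrated for users with a sketch like the following, which toggles requires_grad per local epoch: parameters matching cfg.personalization.local_param train for the first epoch_linear epochs, and the remaining shared parameters train afterwards. The helper name set_fedrep_stage and the substring-matching rule are assumptions for illustration, not the PR's implementation.

from typing import Sequence

import torch.nn as nn


def set_fedrep_stage(model: nn.Module, epoch: int, epoch_linear: int,
                     local_param: Sequence[str] = ("fc", )) -> None:
    # Hedged sketch of the two-stage schedule from the review: the first
    # `epoch_linear` local epochs train only the personalized parameters
    # (those matching cfg.personalization.local_param); afterwards they are
    # frozen and the remaining shared parameters are trained instead.
    train_local = epoch < epoch_linear
    for name, param in model.named_parameters():
        is_local = any(key in name for key in local_param)
        param.requires_grad = is_local if train_local else not is_local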

Osier-Yi previously approved these changes on Apr 3, 2023.
fix the import problem
Osier-Yi (Collaborator) left a comment:

LGTM

Osier-Yi merged commit dfe6393 into alibaba:master on Apr 3, 2023.