add new code about FedRep and Simple_tuning #564
Conversation
Please add the unit tests for FedRep & simple fine-tuning separately.
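For example, something along these lines could go into the tests directory (a minimal sketch; the import path and the skip logic are assumptions for illustration, not the PR's actual layout):

```python
# Minimal sketch of a separate unit test for the FedRep wrapper; the module
# path below is a guess -- adjust it to wherever this PR defines the wrapper.
import unittest


class FedRepTest(unittest.TestCase):
    def test_wrap_fedrep_trainer(self):
        try:
            # hypothetical import path
            from federatedscope.core.trainers import wrap_FedRepTrainer
        except ImportError:
            self.skipTest('FedRep wrapper not importable in this build')
        # Construct a small GeneralTorchTrainer on toy data, wrap it with
        # wrap_FedRepTrainer, and assert that the FedRep context/hooks exist.


if __name__ == '__main__':
    unittest.main()
```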
logger = logging.getLogger(__name__)


def wrap_FedRepTrainer(
        base_trainer: Type[GeneralTorchTrainer]) -> Type[GeneralTorchTrainer]:
    """
Why delete this explanation?
@@ -63,6 +63,13 @@ def extend_fl_algo_cfg(cfg):
    cfg.personalization.K = 5  # the local approximation steps for pFedMe
    cfg.personalization.beta = 1.0  # the average moving parameter for pFedMe

    # parameters for FedRep:
Please add explanations for those new configs; please also add the explanations of those new configs to the file FederatedScope/federatedscope/core/configs/README.md.
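For example, inline comments along these lines would help (a sketch only; the default values are illustrative assumptions, and epoch_linear/epoch_feature are the names mentioned later in this review):

```python
# Sketch of the requested explanations, written as inline comments inside
# extend_fl_algo_cfg; the default values here are illustrative assumptions.
def extend_fl_algo_cfg(cfg):
    cfg.personalization.local_param = []   # names of the parameters that stay
                                           # client-local (e.g. the linear head)
    cfg.personalization.epoch_linear = 1   # local epochs training only the
                                           # local_param (head) parameters
    cfg.personalization.epoch_feature = 1  # local epochs training the remaining
                                           # (shared representation) parameters
```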
    stdv = 1. / math.sqrt(param.size(-1))
    param.data.uniform_(-stdv, stdv)
else:
    param.data.uniform_(-stdv, stdv)
This else branch implicitly assumes that the "weight" parameter is the first to appear; otherwise stdv will be used before it is assigned a value. It is therefore suggested to add an assertion here, or to change this to a more robust way of coding that pre-computes stdv.
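For instance, deriving stdv from the weight before the loop removes the ordering assumption (a sketch; the helper name `reset_parameters` and the single-linear-module assumption are illustrative, not the PR's code):

```python
import math

import torch


def reset_parameters(module: torch.nn.Linear) -> None:
    # Sketch of a more robust variant: compute stdv from the weight up front,
    # so it is defined regardless of the order in which named_parameters()
    # yields the weight and the bias.
    stdv = 1. / math.sqrt(module.weight.size(-1))
    for _, param in module.named_parameters():
        param.data.uniform_(-stdv, stdv)
```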
def wrap_Simple_tuning_Trainer(
        base_trainer: Type[GeneralTorchTrainer]) -> Type[GeneralTorchTrainer]:
Add some explanations for the purpose of this wrapper function and for the meaning of Simple_tuning, as the name is too general.
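For example, a docstring along these lines (a sketch only; the description of what "simple tuning" does is an assumption inferred from this conversation, not taken from the PR):

```python
def wrap_Simple_tuning_Trainer(base_trainer):
    """Wrap a GeneralTorchTrainer with a plain fine-tuning baseline.

    "Simple tuning" here is assumed to mean: re-initialize the personalized
    parameters (e.g. the classifier head) and fine-tune them locally, without
    the alternating head/representation schedule used by FedRep.
    """
```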
def wrap_FedRepTrainer(
        base_trainer: Type[GeneralTorchTrainer]) -> Type[GeneralTorchTrainer]:

    init_FedRep_ctx(base_trainer)
Please add some comments to remind the users that this method should be used in conjunction with cfg.personalization.local_param and cfg.finetune.local_param. The basic idea should also be introduced, e.g., that the cfg.personalization.local_param parameters are trained in the first cfg.personalization.epoch_linear rounds, and then the other remaining parameters are trained in the last cfg.personalization.epoch_feature rounds.
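For example (a sketch of such a reminder in the docstring; the phrasing is assembled from the config names discussed in this review, not copied from the PR):

```python
def wrap_FedRepTrainer(base_trainer):
    """Wrap a trainer with the FedRep local update schedule.

    Note: this wrapper should be used in conjunction with
    ``cfg.personalization.local_param`` and ``cfg.finetune.local_param``.
    The basic idea is to train the ``cfg.personalization.local_param``
    parameters in the first ``cfg.personalization.epoch_linear`` rounds, and
    then train the other remaining parameters in the last
    ``cfg.personalization.epoch_feature`` rounds.
    """
```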
format change - (rerun-unitest)
fix the import problem
LGTM
No description provided.