
Extra loss terms before loss.backward() seem to have no effects #249

@kenziyuliu

Description


🐛 Bug

Extra loss terms added before loss.backward() do not seem to have any effect when privacy_engine is in use. One use case this blocks is regularizing model weights toward another set of weights (e.g. multi-task learning regularization), as well as other weight-based regularization techniques.

Please reproduce using our template Colab and post here the link

https://colab.research.google.com/drive/1TyZMh4IgkB8qTak1JqYpBFMrrE_x1Rbp?usp=sharing

  • 1st code cell: adds an extra loss term based on model weights (an L2 loss)
  • last 2 code cells: train models with and without privacy_engine, respectively

To Reproduce

  1. Run all cells in the notebook
  2. With privacy_engine attached, I would expect the extra loss term (added in the 1st code cell) to affect model learning
  3. Looking at the output of the last two cells, it appears that when privacy_engine is enabled, the extra loss term is not taken into account

Expected behavior

When we add loss terms before backprop, e.g.,

loss = criterion(y_pred, y_true)
loss += l2_loss(model)
loss += proximal_loss(model, another_model)   # e.g. encourage two models to have similar weights
loss.backward()

the extra loss terms should be reflected in training. However, when privacy_engine is in use they appear to have no effect. This is unexpected, since gradient clipping and noising should apply only to the per-sample gradients of the training examples, not eliminate weight-based loss terms entirely.
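A plausible mechanism for this behavior (an assumption about how a per-sample gradient engine might work, not confirmed Opacus internals): if the engine rebuilds each parameter's .grad purely from the clipped per-sample gradients of the data loss, then the gradient contributed by a weight-only term is silently discarded. A minimal numeric sketch with a single scalar weight:

```python
# Hypothetical sketch (NOT Opacus internals): shows how a weight-only
# regularizer's gradient can vanish if an engine recomputes the gradient
# from per-sample gradients of the data loss alone.

# Model: single scalar weight w; per-sample data loss: 0.5 * (w*x - y)^2
w = 1.0
data = [(1.0, 2.0), (2.0, 1.0)]  # (x, y) pairs
lam = 0.1                        # hypothetical L2 coefficient

# Per-sample gradients of the data loss w.r.t. w: (w*x - y) * x
per_sample = [(w * x - y) * x for x, y in data]

# Gradient of the extra weight-based term lam * w^2 is 2 * lam * w;
# it is the same for every sample and independent of the inputs.
reg_grad = 2 * lam * w

# Ordinary SGD: autograd accumulates both terms into w.grad.
full_grad = sum(per_sample) / len(per_sample) + reg_grad

# A DP-SGD engine that overwrites w.grad with the average of clipped
# per-sample data-loss gradients drops reg_grad entirely:
C = 10.0  # clipping norm, chosen large enough to be a no-op here
clipped = [g * min(1.0, C / abs(g)) for g in per_sample]
dp_grad = sum(clipped) / len(clipped)

# full_grad and dp_grad differ by exactly reg_grad
print(full_grad, dp_grad, full_grad - dp_grad)
```

If this is indeed what happens, a workaround would be to add the regularizer's gradient to each parameter's .grad after the engine has processed the per-sample gradients, though whether that is compatible with the privacy accounting is a separate question.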

Environment

The issue should be reproducible in the provided Colab notebook.
