
Add Conditional Gradient Optimizer #468

@pkan2

Description

System information

  • TensorFlow version (you are using): 2.0.0-beta1
  • TensorFlow Addons version:
  • Is it in the tf.contrib (if so, where): No
  • Are you willing to contribute it (yes/no): Yes
  • Are you willing to maintain it going forward? (yes/no): Yes

Describe the feature and the current behavior/state.
The implementation of this optimizer is based on the following paper:
https://arxiv.org/pdf/1803.06453.pdf
The current implementation enforces a Frobenius-norm constraint on the model's variables.
The variable update rule being implemented is:
variable -= (1 - learning_rate) * (variable + lambda * gradient / frobenius_norm(gradient))
where learning_rate and lambda are hyperparameters passed to the optimizer when it is constructed.
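
For concreteness, here is a minimal sketch of a single update step applied to one variable. The function name `conditional_gradient_step` is illustrative, not part of the proposed API; `tf.norm` with default arguments returns the Frobenius norm of a tensor.

```python
import tensorflow as tf

def conditional_gradient_step(variable, gradient, learning_rate, lam):
    # One step of the rule above:
    #   variable -= (1 - learning_rate) * (variable + lam * gradient / ||gradient||_F)
    # tf.norm with default arguments computes the Frobenius (L2) norm.
    grad_norm = tf.norm(gradient)
    update = (1.0 - learning_rate) * (variable + lam * gradient / grad_norm)
    variable.assign_sub(update)
```

Expanding the rule shows it is equivalent to variable = learning_rate * variable - (1 - learning_rate) * lambda * gradient / frobenius_norm(gradient), i.e. a convex combination of the current iterate and a norm-constrained step, which is the characteristic shape of a conditional gradient update.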

Will this change the current API? How?
It will not change the current API. It is implemented against the abstract interface of tf.keras.optimizers.Optimizer.
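
Because it subclasses tf.keras.optimizers.Optimizer, it would plug into the usual Keras workflow. A hedged usage sketch follows, assuming a hypothetical class name ConditionalGradient under tfa.optimizers; the module path and argument names are assumptions, not a final API.

```python
import tensorflow as tf
import tensorflow_addons as tfa  # assumes the optimizer ships in TF Addons

# Hypothetical constructor; argument names are illustrative only.
cg = tfa.optimizers.ConditionalGradient(learning_rate=0.99, lambda_=0.01)

model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])
model.compile(optimizer=cg, loss="sparse_categorical_crossentropy")
```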

Who will benefit from this feature?
We provide an API for an optimizer that can enforce hard constraints on neural networks. It is based on the conditional gradient descent algorithm. The community primarily benefiting from this feature would be machine learning researchers and scientists.

Any other info.
Co-contributor: Vishnu Lokhande
