Dump out external trainable variable for external optimizer #817
Conversation
This pull request includes the following changes:
- Format your code via black
@lululxvi I could not figure out why the tests did not pass. Is it because of formatting?
No. It seems to be an issue with the TensorFlow installation on the Travis system.
@lululxvi I could not comment on the file directly, but I have a comment about line 562 in model.py:

```python
...
self.train_state.set_data_train(*self.data.train_next_batch(self.batch_size))
self.train_state.set_data_test(*self.data.test())
self._test()
self.callbacks.on_train_begin()  # Line 562
...
```

Line 562 also repeats the last epoch of the previous optimizer if more than one optimizer is used. I think this case was not considered when it was implemented, and it should be skipped if at least two optimizers are used. Actually, line 562 is not needed at all, because it only reports the initial guess before optimization starts. Do we really need it?
Good point. I suggest keeping it, as the initial values are useful, and we may want to tune the initial values in some cases. Resolving the repeated output between two optimizers is future work.
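For illustration, here is a minimal, hypothetical sketch of the repeated output being discussed. The class and method names below only mimic the structure of model.py; this is not DeepXDE's actual code. When `train` is called once per optimizer, the initial evaluation in the second call re-reports the state that the first call already reported:

```python
# Hypothetical sketch of the repeated-output issue when chaining optimizers.
class Model:
    def __init__(self):
        self.step = 0

    def _test(self):
        # Evaluate and report the current losses.
        print(f"step {self.step}: loss = ...")

    def train(self, epochs):
        self._test()  # initial evaluation before optimization starts
        for _ in range(epochs):
            self.step += 1  # one optimizer step
        self._test()  # final evaluation


model = Model()
model.train(epochs=3)  # e.g. Adam: reports step 0, then step 3
model.train(epochs=2)  # e.g. L-BFGS: reports step 3 again (the repeat), then step 5
```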
I have another remark regarding the output of the loss values when an external optimizer is used. The fetches contain only the training loss, so the test loss is not included. I think we should fetch the test losses as well, right? It was like this before; I just realized it while plotting the losses.
Yes, you are right.
Done.
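For context, a minimal sketch of including the test loss in the fetches, using TensorFlow 1.x's `tf.contrib.opt.ScipyOptimizerInterface`, whose `loss_callback` receives the evaluated fetches at every step. The toy model and names below are illustrative, not DeepXDE's actual code:

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x API (tf.contrib)

# Toy problem (illustrative only): fit y = 2x and monitor a held-out loss.
x_train = np.linspace(0.0, 1.0, 16, dtype=np.float32).reshape(-1, 1)
x_test = np.linspace(0.0, 1.0, 8, dtype=np.float32).reshape(-1, 1)

w = tf.Variable(0.0)
train_loss = tf.reduce_mean((w * x_train - 2.0 * x_train) ** 2)
test_loss = tf.reduce_mean((w * x_test - 2.0 * x_test) ** 2)

optimizer = tf.contrib.opt.ScipyOptimizerInterface(train_loss, method="L-BFGS-B")


def loss_callback(train_val, test_val):
    # Everything listed in `fetches` is passed to this callback at each step,
    # so including the test loss makes it available for logging and plotting.
    print("train loss:", train_val, "test loss:", test_val)


with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    optimizer.minimize(
        sess,
        fetches=[train_loss, test_loss],  # fetch the test loss as well
        loss_callback=loss_callback,
    )
```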
Closes #805 and #395.