Dump out external trainable variable for external optimizer #817

Merged 11 commits into lululxvi:master on Oct 19, 2022

Conversation

@tsarikahin (Contributor)

Closes #805 and #395.

@tsarikahin (Contributor Author)

This merge request includes the following changes:

  • When an external optimizer (e.g., L-BFGS) is used, the external trainable variables were not being written out after each epoch or at the end of training; the end-of-training part had never been implemented. Now the external trainable variables can be output after each epoch and at the very end as well (a usage sketch follows below).
  • One example is tested to verify that the algorithm works as expected.
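
For context, a minimal usage sketch of the workflow this change targets, modeled on the DeepXDE Lorenz inverse example (the variable values, period, and filename are illustrative, not part of this PR):

    import deepxde as dde

    # External trainable variables: unknown physical parameters to identify
    C1 = dde.Variable(1.0)
    C2 = dde.Variable(1.0)
    C3 = dde.Variable(1.0)

    # ... define `data` and `net`, then: model = dde.Model(data, net)

    # VariableValue dumps the current variable values every `period` epochs
    variable = dde.callbacks.VariableValue(
        [C1, C2, C3], period=600, filename="variables.dat"
    )

    # With this change, the values are also written when an external
    # optimizer such as L-BFGS is used, including at the end of training
    model.compile("L-BFGS", external_trainable_variables=[C1, C2, C3])
    model.train(callbacks=[variable])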

@lululxvi (Owner)

Format your code via black

@tsarikahin (Contributor Author)

@lululxvi I could not figure out why the tests did not pass. Is it because of formatting?

@lululxvi (Owner)

> @lululxvi I could not figure out why the tests did not pass. Is it because of formatting?

No. It seems to be an issue with the TensorFlow installation on the Travis system.

Resolved review threads: examples/pinn_inverse/Lorenz_inverse.py, deepxde/callbacks.py, deepxde/model.py
@tsarikahin (Contributor Author) commented Oct 17, 2022

@lululxvi I could not comment on the file directly, but I have a remark about line 562 in model.py:

    ...
    self.train_state.set_data_train(*self.data.train_next_batch(self.batch_size))
    self.train_state.set_data_test(*self.data.test())
    self._test()
    self.callbacks.on_train_begin()  # Line 562
    ...

Line 562 also repeats the last epoch of the previous optimizer when more than one optimizer is used. I think this case was not considered when it was implemented, and the call should be skipped if at least two optimizers are used.

Actually, line 562 may not be needed at all, since it only reports the initial guess before optimization starts. Do we really need it?
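
For illustration, a minimal sketch of the two-optimizer workflow in which the repetition shows up (optimizer settings and epoch count are placeholders; `variable` is the callback defined above):

    # Phase 1: Adam
    model.compile("adam", lr=0.001, external_trainable_variables=[C1, C2, C3])
    model.train(epochs=20000, callbacks=[variable])

    # Phase 2: L-BFGS; on_train_begin (line 562) reports the same values
    # that the last epoch of the Adam phase just reported
    model.compile("L-BFGS", external_trainable_variables=[C1, C2, C3])
    model.train(callbacks=[variable])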

@lululxvi (Owner)

> Line 562 also repeats the last epoch of the previous optimizer when more than one optimizer is used. I think this case was not considered when it was implemented, and the call should be skipped if at least two optimizers are used.
>
> Actually, line 562 may not be needed at all, since it only reports the initial guess before optimization starts. Do we really need it?

Good point. I suggest keeping it, as the initial values are useful, and we may tune the initial values in some cases. Resolving the repeated output between two optimizers is future work.

Resolved review threads: deepxde/model.py
@tsarikahin (Contributor Author) commented Oct 18, 2022

I have another remark regarding the output of loss values when an external optimizer is used: the fetches contain only the training loss, so they do not include the test loss. I think we should fetch the test losses as well, right? It was like this before; I just noticed it while plotting the losses.
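
For reference, the symptom shows up when plotting the recorded loss history with the standard DeepXDE utilities, e.g.:

    # Before this fix, the test-loss entries recorded while an external
    # optimizer was running were incomplete, since only the training loss
    # was fetched
    losshistory, train_state = model.train(callbacks=[variable])
    dde.saveplot(losshistory, train_state, issave=True, isplot=True)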

Resolved review threads: deepxde/model.py
@lululxvi (Owner)

> I have another remark regarding the output of loss values when an external optimizer is used: the fetches contain only the training loss, so they do not include the test loss. I think we should fetch the test losses as well, right? It was like this before; I just noticed it while plotting the losses.

Yes, you are right.

@tsarikahin (Contributor Author)

> > I have another remark regarding the output of loss values when an external optimizer is used: the fetches contain only the training loss, so they do not include the test loss. I think we should fetch the test losses as well, right? It was like this before; I just noticed it while plotting the losses.
>
> Yes, you are right.

Done.

Resolved review thread: deepxde/model.py
@lululxvi merged commit 7ddaf6f into lululxvi:master on Oct 19, 2022