[rllib] Support batch norm layers #3369
Conversation
Test FAILed.
Ping @richardliaw |
|
Test FAILed. |
if self._prev_reward_input is not None and prev_reward_batch:
    builder.add_feed_dict({self._prev_reward_input: prev_reward_batch})
builder.add_feed_dict({self._is_training: is_training})
builder.add_feed_dict({self._is_training: False})
where in the code is _is_training True?
builder.add_feed_dict({self._is_training: True})
A few questions
Co-Authored-By: ericl <ekhliang@gmail.com>
Test FAILed.
What do these changes do?
Adds an is_training tensor to build_layers_v2, with a test in test_batch_norm.py. I don't think this will work for e.g. A3C, which applies gradient updates separately, but it should work fine in the other execution modes.
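The reason an is_training tensor is needed at all: batch normalization behaves differently in the two modes. A small NumPy sketch of that semantics (illustrative only, not RLlib or TensorFlow code; the batch_norm helper and its momentum default are assumptions):

```python
import numpy as np


def batch_norm(x, running_mean, running_var, is_training,
               momentum=0.9, eps=1e-5):
    """Normalize x. In training mode, use the batch's own statistics and
    update the running averages in place; in inference mode, use only the
    stored running averages."""
    if is_training:
        mean = x.mean(axis=0)
        var = x.var(axis=0)
        # Exponential moving average update, as batch norm layers do.
        running_mean[:] = momentum * running_mean + (1 - momentum) * mean
        running_var[:] = momentum * running_var + (1 - momentum) * var
    else:
        mean, var = running_mean, running_var
    return (x - mean) / np.sqrt(var + eps)


rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(256, 4))
rm, rv = np.zeros(4), np.ones(4)

y_train = batch_norm(x, rm, rv, is_training=True)   # batch statistics
y_eval = batch_norm(x, rm, rv, is_training=False)   # running averages only

print(np.abs(y_train.mean(axis=0)).max() < 1e-6)  # prints True
```

This is also why the PR feeds True only on the loss/gradient path: feeding True while serving actions would both normalize with the (possibly tiny) serving batch and pollute the running averages.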
Related issue number
Closes: #2023