
[RLlib] Fix Atari learning test regressions (2 bugs) and 1 minor attention net bug. #18306

Merged: 12 commits into ray-project:master from fix_atari_learning_regressions, Sep 3, 2021

Conversation

sven1977 (Contributor) commented on Sep 2, 2021

This PR fixes:

  • Built-in VisionNets: when a shared value branch is used, the value branch was attached to the action-logits output instead of to the feature output (the layer just before the action logits). See the sketch after the result tables below.
  • config.num_framestacks = "auto" would still use the old Atari framestacking logic. The old framestack=True flag is now soft-deprecated.
  • SampleBatch.get_single_step_input_dict had a bug affecting view requirements with batch_repeat_value > 1 (e.g. attention nets). A test on examples/attention_net.py confirmed that the fix makes learning considerably faster:
W/o fix (before this PR):
+----------------------------------+------------+-------+--------+------------------+--------+----------+----------------------+----------------------+--------------------+
| Trial name                       | status     | loc   |   iter |   total time (s) |     ts |   reward |   episode_reward_max |   episode_reward_min |   episode_len_mean |
|----------------------------------+------------+-------+--------+------------------+--------+----------+----------------------+----------------------+--------------------|
| PPO_RepeatAfterMeEnv_596c6_00000 | TERMINATED |       |     27 |         120.673  | 108000 |    81.44 |                   95 |                   69 |                 99 |
| PPO_RepeatAfterMeEnv_596c6_00001 | TERMINATED |       |     17 |          75.8876 |  68000 |    80.82 |                   93 |                   59 |                 99 |
| PPO_RepeatAfterMeEnv_596c6_00002 | TERMINATED |       |     16 |          71.3952 |  64000 |    81.46 |                   97 |                   69 |                 99 |
| PPO_RepeatAfterMeEnv_596c6_00003 | TERMINATED |       |     20 |          89.8036 |  80000 |    82.14 |                   97 |                   63 |                 99 |
| PPO_RepeatAfterMeEnv_596c6_00004 | TERMINATED |       |     20 |          89.413  |  80000 |    82.26 |                   95 |                   63 |                 99 |
| PPO_RepeatAfterMeEnv_596c6_00005 | TERMINATED |       |     15 |          64.1815 |  60000 |    81.6  |                   95 |                   67 |                 99 |
| PPO_RepeatAfterMeEnv_596c6_00006 | TERMINATED |       |     19 |          77.9282 |  76000 |    81.94 |                   93 |                   67 |                 99 |
| PPO_RepeatAfterMeEnv_596c6_00007 | TERMINATED |       |     17 |          67.5425 |  68000 |    80.26 |                   93 |                   57 |                 99 |
+----------------------------------+------------+-------+--------+------------------+--------+----------+----------------------+----------------------+--------------------+

W/ fix (after this PR). Note the fewer iterations and timesteps needed to reach a reward of 80+!
+----------------------------------+------------+-------+--------+------------------+-------+----------+----------------------+----------------------+--------------------+
| Trial name                       | status     | loc   |   iter |   total time (s) |    ts |   reward |   episode_reward_max |   episode_reward_min |   episode_len_mean |
|----------------------------------+------------+-------+--------+------------------+-------+----------+----------------------+----------------------+--------------------|
| PPO_RepeatAfterMeEnv_1a701_00000 | TERMINATED |       |     19 |          86.3746 | 76000 |    81    |                   93 |                   67 |                 99 |
| PPO_RepeatAfterMeEnv_1a701_00001 | TERMINATED |       |     14 |          62.1386 | 56000 |    81.82 |                   99 |                   65 |                 99 |
| PPO_RepeatAfterMeEnv_1a701_00002 | TERMINATED |       |     14 |          61.6121 | 56000 |    84.68 |                   97 |                   67 |                 99 |
| PPO_RepeatAfterMeEnv_1a701_00003 | TERMINATED |       |     13 |          57.6263 | 52000 |    80    |                   91 |                   61 |                 99 |
| PPO_RepeatAfterMeEnv_1a701_00004 | TERMINATED |       |     18 |          82.038  | 72000 |    80.2  |                   93 |                   69 |                 99 |
| PPO_RepeatAfterMeEnv_1a701_00005 | TERMINATED |       |     18 |          75.7248 | 72000 |    82.34 |                   93 |                   65 |                 99 |
| PPO_RepeatAfterMeEnv_1a701_00006 | TERMINATED |       |     17 |          70.7106 | 68000 |    81.28 |                   91 |                   69 |                 99 |
| PPO_RepeatAfterMeEnv_1a701_00007 | TERMINATED |       |     18 |          74.9671 | 72000 |    81.7  |                   97 |                   67 |                 99 |
+----------------------------------+------------+-------+--------+------------------+-------+----------+----------------------+----------------------+--------------------+
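
To illustrate the first bullet, below is a minimal, hypothetical PyTorch sketch of the intended wiring; it is not RLlib's VisionNet code, only the hookup the fix restores: the shared value branch should read from the feature layer (one layer before the action logits), not from the logits themselves.

import torch
from torch import nn

class SharedValueVisionNet(nn.Module):
    # Hypothetical stand-in for a shared-trunk vision model (names made up).
    def __init__(self, num_actions: int):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(256), nn.ReLU(),   # feature layer
        )
        self.logits = nn.Linear(256, num_actions)
        # Correct: the value branch reads the 256-d features.
        self.value = nn.Linear(256, 1)
        # The regression fixed here corresponds to hanging the value branch
        # off the action logits instead (i.e. nn.Linear(num_actions, 1)).

    def forward(self, obs: torch.Tensor):
        feats = self.trunk(obs)              # one layer before the logits
        return self.logits(feats), self.value(feats)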

Checks

  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
  • I've made sure the tests are passing. Note that there might be a few flaky tests; see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

if to_ == 0:
    to_ = None
input_dict[view_col] = np.array([
    np.concatenate(
sven1977 (Contributor, Author) commented:
The data has to come last in the concat; otherwise an attention net, for example, will not necessarily see the most recent observations. This explains the learning improvements on the RepeatAfterMe experiments vs. older versions.

A reviewer (Contributor) replied:
Makes sense.
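
As a toy illustration of the ordering issue discussed above (a minimal sketch with made-up shapes and values, not the actual SampleBatch code): only with the real data placed last in the concatenation does the newest observation end up in the most recent slot of the attention window.

import numpy as np

num_frames = 4                           # attention window size (example)
data = np.array([[1.0], [2.0], [3.0]])   # only 3 real observations so far
padding = np.zeros((num_frames - len(data), 1))

# Buggy order (data first, padding last): the "most recent" slot of the
# window holds a zero row instead of the newest observation.
buggy = np.concatenate([data, padding])  # rows: 1, 2, 3, 0

# Fixed order (padding first, data last): the newest observation sits in
# the last, i.e. most recent, slot.
fixed = np.concatenate([padding, data])  # rows: 0, 1, 2, 3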

@@ -486,15 +487,14 @@ def wrap(env):
clip_rewards = True

# Deprecated way of framestacking is used.
-framestack = model_config.get("framestack") is True
+use_old_framestack = model_config.get("framestack") is True
A reviewer (Contributor) commented:
Any way to say this is deprecated in the logs?
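
For context, a hedged sketch of the config side that this diff and the PR's second bullet refer to; the key names come from the PR description, and the env id is just a placeholder Atari environment:

config = {
    "env": "BreakoutNoFrameskip-v4",  # example Atari env
    "model": {
        # New-style Atari framestacking: "auto" lets RLlib pick the
        # trajectory-view based stacking; an int sets the frame count.
        "num_framestacks": "auto",
        # Old flag, now soft-deprecated; True would still fall back to the
        # legacy Atari framestacking path this diff renames/guards.
        # "framestack": True,
    },
}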

@sven1977 sven1977 merged commit 9a8ca6a into ray-project:master Sep 3, 2021
@sven1977 sven1977 deleted the fix_atari_learning_regressions branch June 2, 2023 20:15