
[Tune] status tables during training aren't showing values for tunable, nested config parameters #9307

@deanwampler

Description

What is the problem?

Ray 0.8.6. Python 3.7.7.

Running this script (actually run with ipython):

import ray
from ray import tune

ray.init()
tune.run(
    "PPO",
    stop={"episode_reward_mean": 400},
    config={
        "env": "CartPole-v1",
        "num_gpus": 0,
        "num_workers": 1,
        # "lr": tune.grid_search([0.01, 0.001, 0.0001]),
        "model": {
            'fcnet_hiddens': [tune.grid_search([20, 40, 60, 80]), tune.grid_search([20, 40, 60, 80])]
        },
        "eager": False,
    },
)

It appears that Tune drives the 16 different combinations for the two elements of fcnet_hiddens (the sizes of the two hidden NN layers) without a problem. All combinations eventually reach the reward of 400, but some do so in fewer iterations and/or less elapsed time, which is what I want to determine.

The bug is in the printed status table: it has two columns for the tuning parameters, model/fcnet_hiddens/0 and model/fcnet_hiddens/1, but they are empty. I can determine the actual values by looking at each trial's experiment_tag, e.g., experiment_tag: 8_fcnet_hiddens_0=20,fcnet_hiddens_1=60, or by reading them back programmatically, as in the sketch below.
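As a workaround, the chosen values can be read back from each trial object instead of from the status table. Here is a minimal sketch, assuming the object returned by tune.run in Ray 0.8.x exposes a .trials list whose elements carry the resolved .config dict and an .experiment_tag string (treat those attribute names as an assumption, not a confirmed API):

import ray
from ray import tune

ray.init()

# Hypothetical workaround sketch: read the resolved nested values back from
# each trial rather than from the (empty) status-table columns. Assumes the
# analysis object returned by tune.run exposes .trials, and that each trial
# carries the fully resolved .config dict plus an .experiment_tag string.
analysis = tune.run(
    "PPO",
    stop={"episode_reward_mean": 400},
    config={
        "env": "CartPole-v1",
        "num_gpus": 0,
        "num_workers": 1,
        "model": {
            "fcnet_hiddens": [tune.grid_search([20, 40, 60, 80]),
                              tune.grid_search([20, 40, 60, 80])],
        },
        "eager": False,
    },
)

for trial in analysis.trials:
    hiddens = trial.config["model"]["fcnet_hiddens"]  # fully resolved, e.g. [20, 60]
    print(trial.experiment_tag, "->", hiddens)

This only recovers the values after the fact; the status table printed during training still shows empty columns for the nested parameters.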

Metadata

Labels

bug: Something that is supposed to be working, but isn't
triage: Needs triage (eg: priority, bug/not-bug, and owning component)
tune: Tune-related issues
