
# Rename Generalization -> Environment Parameter Randomization #3646


**Merged**: 9 commits, merged on Mar 18, 2020
1 change: 1 addition & 0 deletions com.unity.ml-agents/CHANGELOG.md
@@ -10,6 +10,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.

### Minor Changes
- Format of console output has changed slightly and now matches the name of the model/summary directory. (#3630, #3616)
+ - Renamed 'Generalization' feature to 'Environment Parameter Randomization'.

## [0.15.0-preview] - 2020-03-18
### Major Changes
File renamed without changes.
9 changes: 4 additions & 5 deletions docs/ML-Agents-Overview.md
@@ -350,12 +350,11 @@ training process.
learn more about adding visual observations to an agent
[here](Learning-Environment-Design-Agents.md#multiple-visual-observations).

- - **Training with Reset Parameter Sampling** - To train agents to be adapt
- to changes in its environment (i.e., generalization), the agent should be exposed
- to several variations of the environment. Similar to Curriculum Learning,
+ - **Training with Environment Parameter Randomization** - If an agent is exposed to several variations of an environment, it will be more robust (i.e. generalize better) to
+ unseen variations of the environment. Similar to Curriculum Learning,
where environments become more difficult as the agent learns, the toolkit provides
- a way to randomly sample Reset Parameters of the environment during training. See
- [Training Generalized Reinforcement Learning Agents](Training-Generalized-Reinforcement-Learning-Agents.md)
+ a way to randomly sample parameters of the environment during training. See
+ [Training With Environment Parameter Randomization](Training-Environment-Parameter-Randomization.md)
to learn more about this feature.

- **Cloud Training on AWS** - To facilitate using the ML-Agents toolkit on
2 changes: 1 addition & 1 deletion docs/Readme.md
@@ -40,7 +40,7 @@
* [Training with Curriculum Learning](Training-Curriculum-Learning.md)
* [Training with Imitation Learning](Training-Imitation-Learning.md)
* [Training with LSTM](Feature-Memory.md)
- * [Training Generalized Reinforcement Learning Agents](Training-Generalized-Reinforcement-Learning-Agents.md)
+ * [Training with Environment Parameter Randomization](Training-Environment-Parameter-Randomization.md)

## Inference

4 changes: 2 additions & 2 deletions docs/Training-Curriculum-Learning.md
@@ -93,10 +93,10 @@ behavior has the following parameters:
measure by previous values.
* If `true`, weighting will be 0.75 (new) 0.25 (old).
* `parameters` (dictionary of key:string, value:float array) - Corresponds to
- Academy reset parameters to control. Length of each array should be one
+ Environment parameters to control. Length of each array should be one
greater than number of thresholds.
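
To make the length relationship above concrete, here is a hypothetical fragment (shown as YAML purely to illustrate the shape; the parameter name, values, and exact curriculum file syntax in your release may differ):

```yaml
# Two thresholds, so each parameter array carries three values:
# one per lesson, i.e. one more than the number of thresholds.
thresholds: [0.1, 0.3]
parameters:
    wall_height: [1.5, 2.5, 4.0]
```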

- Once our curriculum is defined, we have to use the reset parameters we defined
+ Once our curriculum is defined, we have to use the environment parameters we defined
and modify the environment from the Agent's `OnEpisodeBegin()` function. See
[WallJumpAgent.cs](https://github.com/Unity-Technologies/ml-agents/blob/master/Project/Assets/ML-Agents/Examples/WallJump/Scripts/WallJumpAgent.cs)
for an example.
docs/Training-Generalized-Reinforcement-Learning-Agents.md -> docs/Training-Environment-Parameter-Randomization.md (file renamed and modified)
@@ -1,49 +1,48 @@
- # Training Generalized Reinforcement Learning Agents
+ # Training With Environment Parameter Randomization

One of the challenges of training and testing agents on the same
environment is that the agents tend to overfit. The result is that the
agents are unable to generalize to any tweaks or variations in the environment.
This is analogous to a model being trained and tested on an identical dataset
in supervised learning. This becomes problematic in cases where environments
- are randomly instantiated with varying objects or properties.
+ are instantiated with varying objects or properties.

- To make agents robust and generalizable to different environments, the agent
- should be trained over multiple variations of the environment. Using this approach
- for training, the agent will be better suited to adapt (with higher performance)
- to future unseen variations of the environment
+ To help agents be robust and generalize better to changes in the environment, the agent
+ can be trained over multiple variations of a given environment. We refer to this approach as **Environment Parameter Randomization**. For those familiar with Reinforcement Learning research, this approach is based on the concept of Domain Randomization (you can read more about it [here](https://arxiv.org/abs/1703.06907)). By using parameter randomization
+ during training, the agent can be better suited to adapt (with higher performance)
+ to future unseen variations of the environment.

_Example of variations of the 3D Ball environment._

Ball scale of 0.5 | Ball scale of 4
:-------------------------:|:-------------------------:
![](images/3dball_small.png) | ![](images/3dball_big.png)

- ## Introducing Generalization Using Reset Parameters

- To enable variations in the environments, we implemented `Reset Parameters`.
- `Reset Parameters` are `Academy.Instance.FloatProperties` that are used only when
- resetting the environment. We
+ To enable variations in the environments, we implemented `Environment Parameters`.
+ `Environment Parameters` are `Academy.Instance.FloatProperties` that can be read when setting
+ up the environment. We
also included different sampling methods and the ability to create new kinds of
- sampling methods for each `Reset Parameter`. In the 3D ball environment example displayed
- in the figure above, the reset parameters are `gravity`, `ball_mass` and `ball_scale`.
+ sampling methods for each `Environment Parameter`. In the 3D ball environment example displayed
+ in the figure above, the environment parameters are `gravity`, `ball_mass` and `ball_scale`.
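
As a rough sketch of how these parameters are surfaced from the Python side (a minimal example assuming the `mlagents_envs` API of this release; the build name and values are placeholders):

```python
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.float_properties_channel import FloatPropertiesChannel

# Open a side channel for float properties and attach it to the environment.
channel = FloatPropertiesChannel()
env = UnityEnvironment(file_name="3DBall", side_channels=[channel])  # placeholder build

# Set environment parameters; the C# side reads them via
# Academy.Instance.FloatProperties when setting up the environment.
channel.set_property("gravity", 9.8)
channel.set_property("ball_scale", 0.5)

env.reset()
```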


- ## How to Enable Generalization Using Reset Parameters
+ ## How to Enable Environment Parameter Randomization

- We first need to provide a way to modify the environment by supplying a set of `Reset Parameters`
+ We first need to provide a way to modify the environment by supplying a set of `Environment Parameters`
and vary them over time. This provision can be done either deterministically or randomly.

- This is done by assigning each `Reset Parameter` a `sampler-type`(such as a uniform sampler),
- which determines how to sample a `Reset
+ This is done by assigning each `Environment Parameter` a `sampler-type` (such as a uniform sampler),
+ which determines how to sample an `Environment
Parameter`. If a `sampler-type` isn't provided for a
- `Reset Parameter`, the parameter maintains the default value throughout the
- training procedure, remaining unchanged. The samplers for all the `Reset Parameters`
+ `Environment Parameter`, the parameter maintains the default value throughout the
+ training procedure, remaining unchanged. The samplers for all the `Environment Parameters`
are handled by a **Sampler Manager**, which also handles the generation of new
- values for the reset parameters when needed.
+ values for the environment parameters when needed.

To setup the Sampler Manager, we create a YAML file that specifies how we wish to
- generate new samples for each `Reset Parameters`. In this file, we specify the samplers and the
- `resampling-interval` (the number of simulation steps after which reset parameters are
+ generate new samples for each `Environment Parameter`. In this file, we specify the samplers and the
+ `resampling-interval` (the number of simulation steps after which environment parameters are
resampled). Below is an example of a sampler file for the 3D ball environment.

```yaml
# Example sampler file for 3D Ball, reconstructed from the field
# descriptions below (the diff view collapsed this block); the
# concrete values are illustrative.
resampling-interval: 5000

mass:
    sampler-type: "uniform"
    min_value: 0.5
    max_value: 10

gravity:
    sampler-type: "multirange_uniform"
    intervals: [[7, 10], [15, 20]]

scale:
    sampler-type: "uniform"
    min_value: 0.75
    max_value: 3
```

@@ -69,26 +68,25 @@ Below is the explanation of the fields in the above example.

* `resampling-interval` - Specifies the number of steps for the agent to
train under a particular environment configuration before resetting the
- environment with a new sample of `Reset Parameters`.
+ environment with a new sample of `Environment Parameters`.

- * `Reset Parameter` - Name of the `Reset Parameter` like `mass`, `gravity` and `scale`. This should match the name
- specified in the academy of the intended environment for which the agent is
- being trained. If a parameter specified in the file doesn't exist in the
- environment, then this parameter will be ignored. Within each `Reset Parameter`
+ * `Environment Parameter` - Name of the `Environment Parameter` like `mass`, `gravity` and `scale`. This should match the name
+ specified in the `FloatProperties` of the environment being trained. If a parameter specified in the file doesn't exist in the
+ environment, then this parameter will be ignored. Within each `Environment Parameter`:

- * `sampler-type` - Specify the sampler type to use for the `Reset Parameter`.
+ * `sampler-type` - Specify the sampler type to use for the `Environment Parameter`.
This is a string that should exist in the `Sampler Factory` (explained
below).

* `sampler-type-sub-arguments` - Specify the sub-arguments depending on the `sampler-type`.
In the example above, this would correspond to the `intervals`
- under the `sampler-type` `"multirange_uniform"` for the `Reset Parameter` called `gravity`.
+ under the `sampler-type` `"multirange_uniform"` for the `Environment Parameter` called `gravity`.
The key name should match the name of the corresponding argument in the sampler definition.
(See below)

- The Sampler Manager allocates a sampler type for each `Reset Parameter` by using the *Sampler Factory*,
+ The Sampler Manager allocates a sampler type for each `Environment Parameter` by using the *Sampler Factory*,
which maintains a dictionary mapping of string keys to sampler objects. The available sampler types
- to be used for each `Reset Parameter` is available in the Sampler Factory.
+ to be used for each `Environment Parameter` are listed in the Sampler Factory.
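
Conceptually, the factory is a registry from the string keys used in the YAML file to sampler classes. A minimal sketch of that idea (not the library's actual implementation):

```python
# Illustrative registry only; ML-Agents' real Sampler Factory differs in detail.
SAMPLER_REGISTRY = {}

def register_sampler(key, sampler_cls):
    """Associate a YAML `sampler-type` string (e.g. "multirange_uniform") with a class."""
    SAMPLER_REGISTRY[key] = sampler_cls

def make_sampler(key, **sub_arguments):
    """Instantiate the sampler named by `sampler-type`, passing its sub-arguments."""
    return SAMPLER_REGISTRY[key](**sub_arguments)
```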

### Included Sampler Types

@@ -134,7 +132,7 @@ is as follows:
`SamplerFactory.register_sampler(*custom_sampler_string_key*, *custom_sampler_object*)`

Once the Sampler Factory reflects the new registration, the new sampler type can be used to sample any
- `Reset Parameter`. For example, lets say a new sampler type was implemented as below and we register
+ `Environment Parameter`. For example, let's say a new sampler type was implemented as below and we register
the `CustomSampler` class with the string `custom-sampler` in the Sampler Factory.

```python
import numpy as np

# Base class import path assumed from the ML-Agents 0.15 trainers package.
from mlagents.trainers.sampler_class import Sampler

# Reconstructed from the docs; the diff view collapsed this block.
class CustomSampler(Sampler):

    def __init__(self, argA, argB, argC):
        self.possible_vals = [argA, argB, argC]

    def sample_all(self):
        # Return one of the configured values uniformly at random.
        return np.random.choice(self.possible_vals)
```
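
The concrete registration call would then look something like this (the import path is an assumption based on the ML-Agents 0.15 trainers package):

```python
from mlagents.trainers.sampler_class import SamplerFactory  # path assumed

# Makes "custom-sampler" usable as a `sampler-type` in the sampler YAML file.
SamplerFactory.register_sampler("custom-sampler", CustomSampler)
```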

Now we need to specify the new sampler type in the sampler YAML file. For example, we use this new
- sampler type for the `Reset Parameter` *mass*.
+ sampler type for the `Environment Parameter` *mass*.

```yaml
mass:
    sampler-type: "custom-sampler"
    # argA/argB values are illustrative; the diff view collapsed these lines.
    argA: 1
    argB: 2
    argC: 3
```

- ### Training with Generalization Using Reset Parameters
+ ### Training with Environment Parameter Randomization

After the sampler YAML file is defined, we proceed by launching `mlagents-learn` and specify
our configured sampler file with the `--sampler` flag. For example, if we wanted to train the
- 3D ball agent with generalization using `Reset Parameters` with `config/3dball_generalize.yaml`
+ 3D ball agent with parameter randomization using `Environment Parameters` with `config/3dball_randomize.yaml`
sampling setup, we would run

```sh
- mlagents-learn config/trainer_config.yaml --sampler=config/3dball_generalize.yaml
- --run-id=3D-Ball-generalization --train
+ mlagents-learn config/trainer_config.yaml --sampler=config/3dball_randomize.yaml \
+     --run-id=3D-Ball-randomize --train
```

We can observe progress and metrics via Tensorboard.
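
For example (the `summaries` log directory is an assumption based on this release's defaults):

```sh
tensorboard --logdir=summaries
```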
5 changes: 2 additions & 3 deletions docs/Training-ML-Agents.md
@@ -106,8 +106,7 @@ environment, you can set the following command line options when invoking
lessons for curriculum training. See [Curriculum
Training](Training-Curriculum-Learning.md) for more information.
* `--sampler=<file>`: Specify a sampler YAML file for defining the
- sampler for generalization training. See [Generalization
- Training](Training-Generalized-Reinforcement-Learning-Agents.md) for more information.
+ sampler for parameter randomization. See [Environment Parameter Randomization](Training-Environment-Parameter-Randomization.md) for more information.
* `--keep-checkpoints=<n>`: Specify the maximum number of model checkpoints to
keep. Checkpoints are saved after the number of steps specified by the
`save-freq` option. Once the maximum number of checkpoints has been reached,
@@ -218,7 +217,7 @@ are conducting, see:
* [Using Recurrent Neural Networks](Feature-Memory.md)
* [Training with Curriculum Learning](Training-Curriculum-Learning.md)
* [Training with Imitation Learning](Training-Imitation-Learning.md)
- * [Training Generalized Reinforcement Learning Agents](Training-Generalized-Reinforcement-Learning-Agents.md)
+ * [Training with Environment Parameter Randomization](Training-Environment-Parameter-Randomization.md)

You can also compare the
[example environments](Learning-Environment-Examples.md)