
Release 0.12.1 doc fixes #3070

Merged
merged 3 commits into from Dec 10, 2019
21 changes: 21 additions & 0 deletions docs/Installation.md
@@ -33,6 +33,27 @@ The `UnitySDK` subdirectory contains the Unity Assets to add to your projects.
It also contains many [example environments](Learning-Environment-Examples.md)
to help you get started.

### Package Installation

If you intend to copy the `UnitySDK` folder into your project, ensure that
you have the [Barracuda preview package](https://docs.unity3d.com/Packages/com.unity.barracuda@0.3/manual/index.html) installed.

To install the Barracuda package in Unity **2017.4.x**, you will have to copy the
`UnityPackageManager` folder under the `UnitySDK` folder to the root directory of your
project.

To install the Barracuda package in later versions of Unity, open the Package
Manager window via the menu `Window` -> `Package Manager`. Click on the
`Advanced` dropdown menu to the left of the search bar and make sure "Show Preview Packages"
is checked. Search for or select the `Barracuda` package and install the latest version.

<p align="center">
<img src="images/barracuda-package.png"
alt="Barracuda Package Manager"
width="710" border="10"
height="569" />
</p>

The `ml-agents` subdirectory contains a Python package which provides deep reinforcement
learning trainers to use with Unity environments.

3 changes: 1 addition & 2 deletions docs/Learning-Environment-Create-New.md
@@ -26,8 +26,7 @@ steps:
calculate the rewards used for reinforcement training. You can also implement
optional methods to reset the Agent when it has finished or failed its task.
4. Add your Agent subclasses to appropriate GameObjects, typically, the object
in the scene that represents the Agent in the simulation. Each Agent object
must be assigned a Brain object.
in the scene that represents the Agent in the simulation.

**Note:** If you are unfamiliar with Unity, refer to
[Learning the interface](https://docs.unity3d.com/Manual/LearningtheInterface.html)
1 change: 1 addition & 0 deletions docs/Migrating.md
@@ -22,6 +22,7 @@ The versions can be found in
* If you use RayPerception3D, replace it with RayPerceptionSensorComponent3D (and similarly for 2D). The settings, such as ray angles and detectable tags, are configured on the component now.
RayPerception3D would contribute `(# of rays) * (# of tags + 2)` to the State Size in Behavior Parameters, but this is no longer necessary, so you should reduce the State Size by this amount.
Making this change will require retraining your model, since the observations that RayPerceptionSensorComponent3D produces are different from the old behavior.
* If you see messages such as `The type or namespace 'Barracuda' could not be found` or `The type or namespace 'Google' could not be found`, you will need to [install the Barracuda preview package](Installation.md#package-installation).
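
The State Size arithmetic described above can be sketched as a quick check.
The ray and tag counts below are hypothetical examples, not values from any
particular scene:

```python
def ray_perception_obs_size(num_rays: int, num_detectable_tags: int) -> int:
    """Observations the old RayPerception3D contributed per component:
    (# of rays) * (# of tags + 2)."""
    return num_rays * (num_detectable_tags + 2)

# Hypothetical setup: 5 rays and 3 detectable tags.
old_contribution = ray_perception_obs_size(5, 3)   # 5 * (3 + 2) = 25

# After switching to RayPerceptionSensorComponent3D, reduce the State Size
# configured in Behavior Parameters by that amount.
old_state_size = 60                                # hypothetical old value
new_state_size = old_state_size - old_contribution # 60 - 25 = 35
print(old_contribution, new_state_size)
```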

## Migrating from ML-Agents toolkit v0.10 to v0.11.0

6 changes: 3 additions & 3 deletions docs/Training-ML-Agents.md
@@ -14,15 +14,15 @@ expert in the same situation.
The output of the training process is a model file containing the optimized
policy. This model file is a TensorFlow data graph containing the mathematical
operations and the optimized weights selected during the training process. You
can use the generated model file with the Learning Brain type in your Unity
project to decide the best course of action for an agent.
can set the generated model file in the Behavior Parameters component of your
Agent in your Unity project to decide the best course of action for an agent.

Use the command `mlagents-learn` to train your agents. This command is installed
with the `mlagents` package and its implementation can be found at
`ml-agents/mlagents/trainers/learn.py`. The [configuration file](#training-config-file),
like `config/trainer_config.yaml` specifies the hyperparameters used during training.
You can edit this file with a text editor to add a specific configuration for
each Brain.
each Behavior.
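
As a sketch, a per-Behavior section in `config/trainer_config.yaml` might look
like the following. The Behavior name and hyperparameter values here are
hypothetical examples, not recommendations:

```yaml
default:
  trainer: ppo
  batch_size: 1024
  max_steps: 5.0e5

# A section whose key matches a Behavior name overrides `default` for it.
MyAgentBehavior:
  batch_size: 128
  learning_rate: 3.0e-4
```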

For a broader overview of reinforcement learning, imitation learning and the
ML-Agents training process, see [ML-Agents Toolkit
Binary file added docs/images/barracuda-package.png
2 changes: 1 addition & 1 deletion gym-unity/README.md
@@ -108,7 +108,7 @@ import gym
from baselines import deepq
from baselines import logger

from gym_unity.envs.unity_env import UnityEnv
from gym_unity.envs import UnityEnv

def main():
env = UnityEnv("./envs/GridWorld", 0, use_visual=True, uint8_visual=True)