
Commit 9e091a8

Chris Elion authored
Release 0.12.1 doc fixes (#3070)
1 parent 557488c commit 9e091a8

File tree

6 files changed: +27 -6 lines


docs/Installation.md

Lines changed: 21 additions & 0 deletions
@@ -33,6 +33,27 @@ The `UnitySDK` subdirectory contains the Unity Assets to add to your projects.
 It also contains many [example environments](Learning-Environment-Examples.md)
 to help you get started.

+### Package Installation
+
+If you intend to copy the `UnitySDK` folder into your project, ensure that
+you have the [Barracuda preview package](https://docs.unity3d.com/Packages/com.unity.barracuda@0.3/manual/index.html) installed.
+
+To install the Barracuda package in Unity **2017.4.x**, you will have to copy the
+`UnityPackageManager` folder under the `UnitySDK` folder to the root directory of your
+project.
+
+To install the Barracuda package in later versions of Unity, open the Package
+Manager window via the menu `Window` -> `Package Manager`. Click the
+`Advanced` dropdown menu to the left of the search bar and make sure "Show Preview Packages"
+is checked. Search for or select the `Barracuda` package and install the latest version.
+
+<p align="center">
+  <img src="images/barracuda-package.png"
+       alt="Barracuda Package Manager"
+       width="710" border="10"
+       height="569" />
+</p>
+
 The `ml-agents` subdirectory contains a Python package which provides deep reinforcement
 learning trainers to use with Unity environments.

docs/Learning-Environment-Create-New.md

Lines changed: 1 addition & 2 deletions
@@ -26,8 +26,7 @@ steps:
 calculate the rewards used for reinforcement training. You can also implement
 optional methods to reset the Agent when it has finished or failed its task.
 4. Add your Agent subclasses to appropriate GameObjects, typically, the object
-in the scene that represents the Agent in the simulation. Each Agent object
-must be assigned a Brain object.
+in the scene that represents the Agent in the simulation.

 **Note:** If you are unfamiliar with Unity, refer to
 [Learning the interface](https://docs.unity3d.com/Manual/LearningtheInterface.html)

docs/Migrating.md

Lines changed: 1 addition & 0 deletions
@@ -22,6 +22,7 @@ The versions can be found in
 * If you use RayPerception3D, replace it with RayPerceptionSensorComponent3D (and similarly for 2D). The settings, such as ray angles and detectable tags, are configured on the component now.
 RayPerception3D would contribute `(# of rays) * (# of tags + 2)` to the State Size in Behavior Parameters, but this is no longer necessary, so you should reduce the State Size by this amount.
 Making this change will require retraining your model, since the observations that RayPerceptionSensorComponent3D produces are different from the old behavior.
+* If you see messages such as `The type or namespace 'Barracuda' could not be found` or `The type or namespace 'Google' could not be found`, you will need to [install the Barracuda preview package](Installation.md#package-installation).

 ## Migrating from ML-Agents toolkit v0.10 to v0.11.0
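
The State Size adjustment quoted in the hunk above is simple arithmetic; the sketch below works one hypothetical case through the `(# of rays) * (# of tags + 2)` formula. The ray and tag counts are illustrative assumptions, not values from this commit.

```python
# Hypothetical worked example of the State Size reduction after removing
# RayPerception3D, using the formula (# of rays) * (# of tags + 2).
num_rays = 3             # assumed number of rays on the old RayPerception3D
num_detectable_tags = 2  # assumed number of detectable tags

state_size_reduction = num_rays * (num_detectable_tags + 2)
print(state_size_reduction)  # 3 * (2 + 2) = 12 -> subtract 12 from State Size
```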

docs/Training-ML-Agents.md

Lines changed: 3 additions & 3 deletions
@@ -14,15 +14,15 @@ expert in the same situation.
 The output of the training process is a model file containing the optimized
 policy. This model file is a TensorFlow data graph containing the mathematical
 operations and the optimized weights selected during the training process. You
-can use the generated model file with the Learning Brain type in your Unity
-project to decide the best course of action for an agent.
+can set the generated model file in the Behavior Parameters under your
+Agent in your Unity project to decide the best course of action for an agent.

 Use the command `mlagents-learn` to train your agents. This command is installed
 with the `mlagents` package and its implementation can be found at
 `ml-agents/mlagents/trainers/learn.py`. The [configuration file](#training-config-file),
 like `config/trainer_config.yaml` specifies the hyperparameters used during training.
 You can edit this file with a text editor to add a specific configuration for
-each Brain.
+each Behavior.

 For a broader overview of reinforcement learning, imitation learning and the
 ML-Agents training process, see [ML-Agents Toolkit
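
As an illustration of the per-Behavior configuration mentioned in the hunk above, the sketch below builds the kind of mapping a trainer configuration file holds and prints it as YAML. The behavior name `MyBehavior` and the hyperparameter keys and values are illustrative assumptions, not the authoritative contents of `config/trainer_config.yaml`.

```python
# Minimal sketch (assumed structure): a "default" section plus one section per
# Behavior name, where each Behavior section overrides only what differs.
import yaml  # PyYAML

config = {
    "default": {
        "trainer": "ppo",        # illustrative hyperparameters, not the shipped defaults
        "batch_size": 1024,
        "buffer_size": 10240,
        "learning_rate": 3.0e-4,
        "max_steps": 5.0e5,
    },
    "MyBehavior": {              # hypothetical Behavior name from your scene
        "batch_size": 512,
        "max_steps": 1.0e6,
    },
}

print(yaml.safe_dump(config, default_flow_style=False))
```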

docs/images/barracuda-package.png

79.4 KB

gym-unity/README.md

Lines changed: 1 addition & 1 deletion
@@ -108,7 +108,7 @@ import gym
 from baselines import deepq
 from baselines import logger

-from gym_unity.envs.unity_env import UnityEnv
+from gym_unity.envs import UnityEnv

 def main():
     env = UnityEnv("./envs/GridWorld", 0, use_visual=True, uint8_visual=True)
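
For context, here is a minimal sketch of the updated import in use; only the import line reflects this commit, while the environment path and the standard gym reset/step loop are assumptions based on the README's surrounding example.

```python
# Sketch of a random-action episode using the new import path (illustrative only).
from gym_unity.envs import UnityEnv

def run_random_episode():
    # Assumed environment binary path, mirroring the README example above.
    env = UnityEnv("./envs/GridWorld", 0, use_visual=True, uint8_visual=True)
    obs = env.reset()
    done = False
    total_reward = 0.0
    while not done:
        # Standard gym step: sample a random action from the wrapped action space.
        obs, reward, done, info = env.step(env.action_space.sample())
        total_reward += reward
    env.close()
    return total_reward

if __name__ == "__main__":
    print(run_random_episode())
```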

0 commit comments
