move env docs around (openai#1399)
* move env docs around

* rst

* de-rst

* headers
christopherhesse authored Mar 22, 2019
1 parent 946d233 commit 13e6ec8
Showing 5 changed files with 262 additions and 222 deletions.
113 changes: 2 additions & 111 deletions README.rst
@@ -159,118 +159,9 @@ maintain the lists of dependencies on a per-environment group basis.
Environments
============

The code for each environment group is housed in its own subdirectory
`gym/envs
<https://github.com/openai/gym/blob/master/gym/envs>`_. The
specification of each task is in `gym/envs/__init__.py
<https://github.com/openai/gym/blob/master/gym/envs/__init__.py>`_. It's
worth browsing through both.
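
If you just want to see what's registered, you can also list every environment programmatically. A minimal sketch (using ``gym.envs.registry``, which holds the specs created in ``gym/envs/__init__.py``):

.. code:: python

  from gym import envs

  # Each spec carries the id you would pass to gym.make, e.g. 'CartPole-v0'
  for spec in sorted(envs.registry.all(), key=lambda s: s.id):
      print(spec.id)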

Algorithmic
-----------

These are a variety of algorithmic tasks, such as learning to copy a
sequence.

.. code:: python

  import gym
  env = gym.make('Copy-v0')
  env.reset()
  env.render()
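
Interacting with one of these tasks follows the same pattern as any Gym environment; a minimal random-agent sketch (using the four-tuple ``step`` API of this version of Gym):

.. code:: python

  import gym

  env = gym.make('Copy-v0')
  obs = env.reset()
  done = False
  while not done:
      # Sample a random action; a real agent would condition on obs
      action = env.action_space.sample()
      obs, reward, done, info = env.step(action)
  env.close()
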
Atari
-----

The Atari environments are a variety of Atari video games. If you didn't do the full install, you can install dependencies via ``pip install -e '.[atari]'`` (you'll need ``cmake`` installed) and then get started as follows:

.. code:: python

  import gym
  env = gym.make('SpaceInvaders-v0')
  env.reset()
  env.render()

This will install ``atari-py``, which automatically compiles the `Arcade Learning Environment <http://www.arcadelearningenvironment.org/>`_. This can take quite a while (a few minutes on a decent laptop), so just be prepared.

Box2d
-----

Box2d is a 2D physics engine. You can install it via ``pip install -e '.[box2d]'`` and then get started as follows:

.. code:: python

  import gym
  env = gym.make('LunarLander-v2')
  env.reset()
  env.render()

Classic control
---------------

These are a variety of classic control tasks, which would appear in a typical reinforcement learning textbook. If you didn't do the full install, you will need to run ``pip install -e '.[classic_control]'`` to enable rendering. You can get started with them via:

.. code:: python

  import gym
  env = gym.make('CartPole-v0')
  env.reset()
  env.render()

MuJoCo
------

`MuJoCo <http://www.mujoco.org/>`_ is a physics engine which can do
very detailed, efficient simulations with contacts. It's not
open-source, so you'll have to follow the instructions in `mujoco-py
<https://github.com/openai/mujoco-py#obtaining-the-binaries-and-license-key>`_
to set it up. You'll also have to run ``pip install -e '.[mujoco]'`` if you didn't do the full install.

.. code:: python

  import gym
  env = gym.make('Humanoid-v2')
  env.reset()
  env.render()

Robotics
--------

`MuJoCo <http://www.mujoco.org/>`_ is a physics engine which can do
very detailed, efficient simulations with contacts, and we use it for all
robotics environments. It's not open-source, so you'll have to follow the
instructions in `mujoco-py
<https://github.com/openai/mujoco-py#obtaining-the-binaries-and-license-key>`_
to set it up. You'll also have to run ``pip install -e '.[robotics]'`` if you didn't do the full install.

.. code:: python

  import gym
  env = gym.make('HandManipulateBlock-v0')
  env.reset()
  env.render()

You can also find additional details in the accompanying `technical report <https://arxiv.org/abs/1802.09464>`_ and `blog post <https://blog.openai.com/ingredients-for-robotics-research/>`_.

If you use these environments, you can cite them as follows::

  @misc{1802.09464,
    Author = {Matthias Plappert and Marcin Andrychowicz and Alex Ray and Bob McGrew and Bowen Baker and Glenn Powell and Jonas Schneider and Josh Tobin and Maciek Chociej and Peter Welinder and Vikash Kumar and Wojciech Zaremba},
    Title = {Multi-Goal Reinforcement Learning: Challenging Robotics Environments and Request for Research},
    Year = {2018},
    Eprint = {arXiv:1802.09464},
  }

Toy text
--------

Toy environments which are text-based. There's no extra dependency to install, so to get started, you can just do:

.. code:: python

  import gym
  env = gym.make('FrozenLake-v0')
  env.reset()
  env.render()

See `List of Environments <docs/environments.md>`_.

For information on creating your own environments, see `Creating your own Environments <docs/creating-environments.md>`_.

Examples
========
96 changes: 96 additions & 0 deletions docs/creating-environments.md
@@ -0,0 +1,96 @@
# How to create new environments for Gym

* Create a new repo called gym-foo, which should also be a pip package.

* A good example is https://github.com/openai/gym-soccer.

* It should have at least the following files:
```sh
gym-foo/
  README.md
  setup.py
  gym_foo/
    __init__.py
    envs/
      __init__.py
      foo_env.py
      foo_extrahard_env.py
```

* `gym-foo/setup.py` should have:

```python
from setuptools import setup

setup(name='gym_foo',
      version='0.0.1',
      install_requires=['gym']  # And any other dependencies foo needs
)
```

* `gym-foo/gym_foo/__init__.py` should have:
```python
from gym.envs.registration import register

register(
    id='foo-v0',
    entry_point='gym_foo.envs:FooEnv',
)

register(
    id='foo-extrahard-v0',
    entry_point='gym_foo.envs:FooExtraHardEnv',
)
```
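
If you want Gym to enforce a time limit or record a target score, `register` also accepts optional keyword arguments. A hedged sketch of the same registration with the standard `max_episode_steps` and `reward_threshold` options:
```python
from gym.envs.registration import register

register(
    id='foo-v0',
    entry_point='gym_foo.envs:FooEnv',
    max_episode_steps=200,    # episodes are truncated after 200 steps
    reward_threshold=195.0,   # optional score at which the task counts as solved
)
```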

* `gym-foo/gym_foo/envs/__init__.py` should have:
```python
from gym_foo.envs.foo_env import FooEnv
from gym_foo.envs.foo_extrahard_env import FooExtraHardEnv
```

* `gym-foo/gym_foo/envs/foo_env.py` should look something like:
```python
import gym
from gym import error, spaces, utils
from gym.utils import seeding


class FooEnv(gym.Env):
    metadata = {'render.modes': ['human']}

    def __init__(self):
        ...

    def step(self, action):
        ...

    def reset(self):
        ...

    def render(self, mode='human'):
        ...

    def close(self):
        ...
```
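
For illustration, here is a hedged sketch of what a filled-in `foo_env.py` could look like; the spaces, dynamics, and reward below are invented for the example, not part of any real environment:
```python
import gym
from gym import spaces


class FooEnv(gym.Env):
    """Toy illustrative environment: count up to 10 by choosing action 1."""
    metadata = {'render.modes': ['human']}

    def __init__(self):
        self.action_space = spaces.Discrete(2)        # 0 = wait, 1 = advance
        self.observation_space = spaces.Discrete(11)  # counter value 0..10
        self.counter = 0

    def step(self, action):
        self.counter = min(self.counter + action, 10)
        done = self.counter == 10
        reward = 1.0 if done else 0.0
        return self.counter, reward, done, {}

    def reset(self):
        self.counter = 0
        return self.counter

    def render(self, mode='human'):
        print('counter:', self.counter)

    def close(self):
        pass
```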

* After you have installed your package with `pip install -e gym-foo`, you can create an instance of the environment with `gym.make('gym_foo:foo-v0')`.
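
A quick way to check that everything is wired up (the `gym_foo:` prefix tells `gym.make` to import the `gym_foo` module first, which runs the `register` calls above):
```python
import gym

env = gym.make('gym_foo:foo-v0')
obs = env.reset()
print(env.action_space, env.observation_space)
env.close()
```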

## How to add new environments to Gym, within this repo (not recommended for new environments)

1. Write your environment in an existing collection or a new collection. All collections are subfolders of `/gym/envs`.
2. Import your environment into the `__init__.py` file of the collection. This file will be located at `/gym/envs/my_collection/__init__.py`. Add `from gym.envs.my_collection.my_awesome_env import MyEnv` to this file.
3. Register your env in `/gym/envs/__init__.py`:

```python
register(
    id='MyEnv-v0',
    entry_point='gym.envs.my_collection:MyEnv',
)
```

4. Add your environment to the scoreboard in `/gym/scoreboard/__init__.py`:

```python
add_task(
    id='MyEnv-v0',
    summary="Super cool environment",
    group='my_collection',
    contributor='mygithubhandle',
)
```