This Python package is an extension to OpenAI Gym for auxiliary tasks (multitask learning, transfer learning, inverse reinforcement learning, etc.).

Requirements:
- Python 3.5.2
- OpenAI Gym
- MuJoCo (Optional)
- mujoco-py (Optional)
- roboschool (Optional)
Check out the latest code and install it:

```shell
git clone https://github.com/Breakend/gym-extensions-multitask.git
cd gym-extensions-multitask
pip3 install -e .
```
Install MuJoCo according to the mujoco-py instructions:
- Obtain a license for MuJoCo
- Download the MuJoCo 1.50 binaries
- Unzip them into the `mjpro150` directory at `~/.mujoco/mjpro150`, and place your license key at `~/.mujoco/mjkey.txt`
- Finally, install gym-extensions with mujoco-py enabled:

```shell
pip3 install -e .[mujoco]
```
To run the tests:

```shell
nosetests -v gym_extensions
```
Due to the dependency on OpenAI Gym, you may have some trouble installing gym on macOS; to remedy:

```shell
# as per: https://github.com/openai/gym/issues/164
export MACOSX_DEPLOYMENT_TARGET=10.12; pip install -e .
```
Also, if you get the following error:
>>> import matplotlib.pyplot
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "~/anaconda2/lib/python2.7/site-packages/matplotlib/pyplot.py", line 115, in <module>
_backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
File "~/anaconda2/lib/python2.7/site-packages/matplotlib/backends/__init__.py", line 32, in pylab_setup
globals(),locals(),[backend_name],0)
File "~/anaconda2/lib/python2.7/site-packages/matplotlib/backends/backend_gtk.py", line 19, in <module>
raise ImportError("Gtk* backend requires pygtk to be installed.")
ImportError: Gtk* backend requires pygtk to be installed.
the easiest fix is to switch matplotlib backends. You can do this by setting `backend: TkAgg` in `~/.config/matplotlib/matplotlibrc` or `~/.matplotlib/matplotlibrc`.
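A minimal matplotlibrc illustrating the fix (TkAgg is available in most Python installs that ship with Tk):

```
# ~/.config/matplotlib/matplotlibrc  (or ~/.matplotlib/matplotlibrc)
backend: TkAgg
```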
For specific environments (you don't necessarily want to import the whole project):

```python
import gym
from gym_extensions.continuous import gym_navigation_2d

env = gym.make("State-Based-Navigation-2d-Map1-Goal1-v0")
```

```python
import gym
from gym_extensions.continuous import mujoco

env = gym.make("HopperWall-v0")
```
More information will be provided on our doc website: https://breakend.github.io/gym-extensions/
To contribute environments, please follow the general directory structure we have in place and submit pull requests. We're still working on making this extension to OpenAI Gym the best it can be, so things may change. Any change to an existing environment should come with an incremental update to the environment's name (e.g. Hopper-v0 vs. Hopper-v1). If you are not associated with McGill and contribute significantly, please add your affiliation to `docs/index.md`.
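The version-bump convention can be illustrated with Gym's registration API; a hypothetical sketch (the entry point `my_package.envs:ModifiedHopperWallEnv` is an invented placeholder, not a real module in this repo):

```python
import gym
from gym.envs.registration import register

# After modifying an existing environment, register it under an incremented
# version suffix rather than overwriting the old id, so experiments against
# the original remain reproducible.
register(
    id="HopperWall-v1",  # the unchanged original stays available as HopperWall-v0
    entry_point="my_package.envs:ModifiedHopperWallEnv",  # hypothetical placeholder
)
```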
Some of this work borrowed ideas and code from OpenAI rllab and OpenAI Gym. We thank those creators for their work and cite links to reference code inline where possible.
Here's a list of contributors!
Works that have used this framework include:
Klissarov, Martin, Pierre-Luc Bacon, Jean Harb, and Doina Precup. "Learnings Options End-to-End for Continuous Action Tasks." arXiv preprint arXiv:1712.00004 (2017).
Henderson, Peter, Wei-Di Chang, Pierre-Luc Bacon, David Meger, Joelle Pineau, and Doina Precup. "OptionGAN: Learning Joint Reward-Policy Options using Generative Adversarial Inverse Reinforcement Learning." arXiv preprint arXiv:1709.06683 (2017).
If you use this work please use the following citation. If using the Space X environment, please also reference @vBarbaros for credit.
```bibtex
@article{henderson2017multitask,
  author  = {{Henderson}, P. and {Chang}, W.-D. and {Shkurti}, F. and {Hansen}, J. and
             {Meger}, D. and {Dudek}, G.},
  title   = {Benchmark Environments for Multitask Learning in Continuous Domains},
  journal = {ICML Lifelong Learning: A Reinforcement Learning Approach Workshop},
  year    = {2017}
}
```