Why Safety-Gymnasium? | Documentation | Install guide | Customization | Future Plan
This library is currently under heavy development. If you have suggestions for the API or use cases you would like covered, please open a GitHub issue or reach out. We'd love to hear how you're using the library.
Safety-Gymnasium is a highly scalable and customizable Safe Reinforcement Learning (SafeRL) library. It aims to provide a clear basis for benchmarking SafeRL algorithms and a more standardized environment setting. We provide a set of standard APIs that carry constraint information, so users can explore new insights through an elegant code framework and well-designed environments.
Below is a table comparing Safety-Gymnasium with existing SafeRL environment libraries.
| SafeRL Envs | Engine | Vectorized Environments | New Gym API (3) | Vision Input |
|---|---|---|---|---|
| Safety Gym | mujoco-py (1) | ❌ | ❌ | minimally supported |
| safe-control-gym | PyBullet | ❌ | ❌ | ❌ |
| Velocity-Constraints (2) | N/A | ❌ | ❌ | ❌ |
| mujoco-circle | PyTorch | ❌ | ❌ | ❌ |
| Safety-Gymnasium | MuJoCo 2.3.0+ | ✅ | ✅ | ✅ |
(1): In maintenance mode (expect only bug fixes and minor updates); the last commit was on 19 Nov 2021. Safety Gym depends on mujoco-py 2.0.2.7, which was last updated on 12 Oct 2019.
(2): There is no official library for velocity-constrained environments; the associated cost constraints are constructed from the `info` dict (see the sketch after these notes). Because the task is widely used in SafeRL research, we encapsulate it in Safety-Gymnasium.
(3): The Gym 0.26.0 release redefined the environment interaction API.
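To make notes (2) and (3) concrete, here is a minimal sketch, not part of Safety-Gymnasium: it uses Gymnasium's `HalfCheetah-v4` and its `x_velocity` info key, and the 0.5 threshold is purely illustrative.

```python
import gymnasium

# New Gym API (note 3): reset() returns (obs, info), and step() returns a
# 5-tuple (obs, reward, terminated, truncated, info) instead of the old
# 4-tuple (obs, reward, done, info).
env = gymnasium.make('HalfCheetah-v4')
obs, info = env.reset(seed=0)

terminated = truncated = False
while not (terminated or truncated):
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    # Velocity-Constraints style (note 2): there is no native cost signal,
    # so a cost is derived from the info dict. The 0.5 threshold here is
    # an arbitrary illustration, not a value from any benchmark.
    cost = float(abs(info['x_velocity']) > 0.5)
```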
We designed a variety of safety-enhanced learning tasks and integrated contributions from the RL community: `safety-velocity`, `safety-run`, `safety-circle`, `safety-goal`, `safety-button`, etc., leading to a unified safety-enhanced learning benchmark environment library called Safety-Gymnasium.
Further, to facilitate the progress of community research, we redesigned Safety Gym and removed its dependency on mujoco-py. We rebuilt it on top of MuJoCo and fixed some bugs; for details, refer to Safety Gym's BUG Report.
Here is a list of all the environments we currently support; some are still being tested with our baselines and will be released gradually in later updates.
| Category | Task | Agent |
|---|---|---|
| Safe Navigation | Goal[012] | Point, Car, Racecar, Ant |
| | Button[012] | |
| | Push[012] | |
| | Circle[012] | |
| Safe Velocity | Velocity | HalfCheetah, Hopper, Swimmer, Walker2d, Ant, Humanoid |
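An agent, a task, and a difficulty level (the `[012]` suffix above) combine into a single environment ID. The first ID below matches the quickstart example later in this README; the second follows the same pattern and should be treated as an assumption:

```python
import safety_gymnasium

env = safety_gymnasium.make('SafetyPointGoal1-v0')  # Point agent, Goal task, level 1
env = safety_gymnasium.make('SafetyCarButton2-v0')  # Car agent, Button task, level 2 (assumed ID)
```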
Here are some images of the agents and tasks in Safe Navigation.
Agents: Point, Car, Racecar, Ant (more coming soon).

Tasks: Goal0/1/2, Button0/1/2, Push0/1/2, Circle0/1/2 (more coming soon).
Vision-based safe reinforcement learning lacks realistic scenarios. Although the original Safety Gym minimally supported visual input, its scenarios were too homogeneous. To facilitate the validation of vision-based SafeRL algorithms, we have developed a set of realistic vision-based SafeRL tasks, which are currently being validated with our baselines. We will release this part of Safety-Gymnasium in later updates.
As an appetizer, here are some preview images:
Note: building on the Gymnasium API, we support an explicit cost signal in the environment interaction.
```python
import safety_gymnasium

env_name = 'SafetyPointGoal1-v0'
# render_mode is needed for env.render() under the new Gymnasium API.
env = safety_gymnasium.make(env_name, render_mode='human')

obs, info = env.reset()
terminated, truncated = False, False
while not terminated and not truncated:
    act = env.action_space.sample()
    # The step function returns cost explicitly, alongside the usual
    # Gymnasium 5-tuple fields.
    obs, reward, cost, terminated, truncated, info = env.step(act)
    env.render()
```
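The comparison table above lists vectorized environments as supported. Assuming Safety-Gymnasium mirrors Gymnasium's `vector.make` interface (an assumption; check the documentation for the exact entry point), batched rollouts would look roughly like this:

```python
import safety_gymnasium

# Assumed Gymnasium-style vector API: runs num_envs copies in lockstep.
envs = safety_gymnasium.vector.make('SafetyPointGoal1-v0', num_envs=4)
obs, info = envs.reset(seed=0)
act = envs.action_space.sample()  # batched actions
# Each element of the 6-tuple gains a leading num_envs dimension.
obs, reward, cost, terminated, truncated, info = envs.step(act)
envs.close()
```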
Install from PyPI:

```bash
pip install safety-gymnasium
```

Or install from source:

```bash
conda create -n <virtual-env-name> python=3.8
conda activate <virtual-env-name>
git clone git@github.com:PKU-MARL/safety-gymnasium.git
cd safety-gymnasium
pip install -e .
```
We built a highly extensible code framework so that you can easily understand it and design your own environments to facilitate your research, with no more than 100 lines of code on average.
For details, please refer to our documentation. Here is a minimal example:
```python
# Import the objects you want to use, or define your own objects;
# just make sure they obey our specification.
from safety_gymnasium.assets.geoms import Apples
from safety_gymnasium.bases import BaseTask


# Inherit from BaseTask.
class MytaskLevel0(BaseTask):
    def __init__(self, config):
        super().__init__(config=config)
        # Define some properties.
        self.num_steps = 500
        self.agent.placements = [(-0.8, -0.8, 0.8, 0.8)]
        self.agent.keepout = 0
        self.lidar_conf.max_dist = 6
        # Add objects into the environment.
        self.add_geoms(Apples(num=2, size=0.3))

    def calculate_reward(self):
        # Implement your reward function.
        # Note: cost calculation is based on objects, so it is automatic.
        reward = 1
        return reward

    def specific_reset(self):
        # Depending on your task
        ...

    def specific_step(self):
        # Depending on your task
        ...

    def update_world(self):
        # Depending on your task
        ...

    @property
    def goal_achieved(self):
        # Depending on your task
        ...
```
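Once a task is implemented, the intent is that it becomes available through the same `make` entry point as the built-in tasks. The ID below is hypothetical; the registration mechanism itself is described in the documentation, and the exact ID depends on how you register the task.

```python
import safety_gymnasium

# Hypothetical ID: assumes MytaskLevel0 has been registered for the Point agent.
env = safety_gymnasium.make('SafetyPointMytask0-v0')
obs, info = env.reset()
obs, reward, cost, terminated, truncated, info = env.step(env.action_space.sample())
```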
- Vision-based environments
- Support for more robots
- Testing on different Python versions
- Testing on Windows and macOS
Safety-Gymnasium is released under the Apache License 2.0.