Home
Welcome to the documentation for the Robot Parkour Learning Wiki!
Assuming you are familiar with how `legged_gym` works, there are only a few updates you need to get quickly acquainted with how this repository works.
- We follow the `task_registry` mechanism of `legged_gym`, so it is the same command to run different tasks (see the registration sketch below this list).
- For each experiment run, all the data, including the complete configuration (no matter how you change your `config.py` file), is logged into the `logdir`. It can always be found in `logs/{experiment_name}/{logdir}`, where `{logdir}` typically starts with a datetime. You can then view the training curves using TensorBoard.
- The loss computation of PPO is extracted from the original `update` function. Thus, you can implement and improve the algorithm by directly inheriting the `PPO` class, registering it in `rsl_rl/algorithms/__init__.py`, and invoking it by setting `algorithm_class_name` in your config file (see the PPO sketch below this list).
- The terrain is re-implemented, but it still follows the grid principle. This means you can still get attributes like `env_origins`, etc.
- The observation is re-implemented. Since the observation is always handled as a 1D vector in the rollout storage, and observations can be multi-modal (vision in 2D, proprioception in 1D, obstacle ID as a one-hot vector), we introduce a new object named `obs_segments`. It is an `OrderedDict` that tells you the shape of each segment in the entire 1D observation vector. Check `get_obs_segment_from_components` (line 670) in `legged_robot_field.py` as an example. Also, check `get_obs_slice` (line 83) in `rsl_rl/utils/utils.py` for how to decode `obs_segments` into a slice object (see the `obs_segments` sketch below this list).
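
As a concrete, hypothetical example of the `task_registry` workflow, a registration might look like the sketch below. The task name, environment class, and config classes are placeholders (not this repository's actual names); only the `task_registry.register` call follows the standard `legged_gym` interface.

```python
# A registration sketch following the legged_gym task_registry convention.
# The task name, env class, and config classes below are placeholders, not
# the actual names used in this repository.
from legged_gym.utils import task_registry

from my_package.configs import MyParkourCfg, MyParkourCfgPPO  # hypothetical
from my_package.envs import MyParkourEnv                      # hypothetical

# Register once (e.g. in an __init__.py). After that, every task is launched
# with the same scripts; only the --task argument changes.
task_registry.register(
    name="my_parkour_task",
    task_class=MyParkourEnv,
    env_cfg=MyParkourCfg(),
    train_cfg=MyParkourCfgPPO(),
)
```

Training is then typically launched the same way as in `legged_gym`, i.e. by passing a different `--task` name to the same train script.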
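The PPO inheritance pattern might look roughly like the following sketch. The file name, the subclass, and especially the `compute_losses` hook are assumptions for illustration; check the shipped `PPO` class to see which method the loss computation was actually extracted into.

```python
# Illustrative file: rsl_rl/algorithms/my_ppo.py (path and names are assumptions).
from .ppo import PPO


class MyPPO(PPO):
    """A PPO variant that only changes the loss computation.

    NOTE: `compute_losses` is a stand-in name for the loss method that was
    extracted out of `update`; look up the actual method name in the PPO
    class before overriding.
    """

    def compute_losses(self, minibatch):
        # Start from the parent losses, then adjust them as needed.
        losses, extras = super().compute_losses(minibatch)
        # ... add or modify loss terms here ...
        return losses, extras


# Then expose it in rsl_rl/algorithms/__init__.py, e.g.:
#     from .my_ppo import MyPPO
# and select it in your config file with:
#     algorithm_class_name = "MyPPO"
```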
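To make the `obs_segments` idea concrete, here is a self-contained sketch of how an `OrderedDict` of segment shapes maps to slices of the flat observation vector. The component names and shapes are invented for the example, and `segment_slice` only mirrors the role of `get_obs_slice` in `rsl_rl/utils/utils.py`; use the real helper in the codebase.

```python
from collections import OrderedDict

import numpy as np

# Illustrative obs_segments: component name -> shape of that segment.
# The real components/shapes come from get_obs_segment_from_components
# in legged_robot_field.py; these numbers are made up for the example.
obs_segments = OrderedDict([
    ("proprioception", (48,)),
    ("height_measurements", (187,)),
    ("forward_depth", (1, 48, 64)),  # 2D vision, flattened in the 1D buffer
])


def segment_slice(segments, component):
    """Return (slice, shape) of `component` inside the flat observation.

    Mirrors the idea of get_obs_slice in rsl_rl/utils/utils.py.
    """
    start = 0
    for name, shape in segments.items():
        size = int(np.prod(shape))
        if name == component:
            return slice(start, start + size), shape
        start += size
    raise KeyError(component)


# Example: pull the (flattened) depth image back out of a full observation.
obs_dim = sum(int(np.prod(shape)) for shape in obs_segments.values())
obs = np.zeros(obs_dim, dtype=np.float32)
depth_slice, depth_shape = segment_slice(obs_segments, "forward_depth")
depth_image = obs[depth_slice].reshape(depth_shape)
```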