ROS 2 integration for roboreg.
- Install roboreg version 0.4.5:

  > **Note:** When using differentiable rendering, the CUDA Toolkit is required at runtime; refer to the CUDA Toolkit install instructions.

  ```shell
  pip3 install roboreg==0.4.5
  ```

- Build the roboreg ROS 2 integration:
  ```shell
  mkdir -p lbr-stack/src && cd lbr-stack
  git clone https://github.com/lbr-stack/ros2_roboreg.git -b rolling src/ros2_roboreg
  rosdep install --from-paths src -r -i -y
  colcon build --symlink-install
  ```

This node performs eye-to-hand calibration from RGB images and corresponding depth images. This applies, e.g., to …
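As a rough illustration of what eye-to-hand calibration yields (all names and values below are made up for this sketch and are not roboreg's API): the result is a rigid transform from the camera frame to the robot base frame, which maps points observed by the camera into base coordinates:

```python
import numpy as np

# Illustrative eye-to-hand result: camera pose in the robot base frame.
# A real calibration estimates R and t; these values are made up.
R = np.eye(3)                    # rotation, base <- camera
t = np.array([0.5, 0.0, 0.3])    # translation, base <- camera (meters)

# Assemble the 4x4 homogeneous transform.
T_base_camera = np.eye(4)
T_base_camera[:3, :3] = R
T_base_camera[:3, 3] = t

# Map a point observed in the camera frame into the base frame.
p_camera = np.array([0.1, 0.2, 0.3, 1.0])  # homogeneous point
p_base = T_base_camera @ p_camera
print(p_base[:3])  # -> [0.6 0.2 0.6]
```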
To run, simply do:

```shell
ros2 launch roboreg_nodes reg.launch.py mode:=monocular_depth
```

Sample configurations are provided in monocular_depth.yaml. Please note that compressed image / depth topics are also supported.
Subscribed topics:

- /camera/image_rect_color
- /camera/image_rect_color/camera_info
- /camera/depth_registered
- /camera/depth_registered/camera_info
- /joint_states
- /robot_description

Published topics:

- /camera/image_rect_color/render

Services:

- collect_data
- clear_data
- register/hydra_icp
- export/data
- export/transform
- import/transform
- broadcast_transform
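As a hypothetical sketch of calling one of these services from Python (this assumes the service uses the common std_srvs/srv/Trigger type and the default namespace; check the actual type with `ros2 service type /collect_data` on a running system):

```python
# Requires a sourced ROS 2 workspace and the roboreg node running.
import rclpy
from rclpy.node import Node
from std_srvs.srv import Trigger  # assumed service type, not confirmed by the docs

rclpy.init()
node = Node("roboreg_client")
client = node.create_client(Trigger, "collect_data")

# Wait for the roboreg node to advertise the service, then call it.
if client.wait_for_service(timeout_sec=5.0):
    future = client.call_async(Trigger.Request())
    rclpy.spin_until_future_complete(node, future)
    print(future.result())

node.destroy_node()
rclpy.shutdown()
```

The same pattern applies to the other services listed above, substituting the service name and type.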
Utility node for executing a trajectory via ros2_control and collecting samples via roboreg_nodes:

```shell
ros2 run roboreg_nodes autoreg --help
```

Utility node for publishing a static transform as acquired through roboreg_nodes:

```shell
ros2 run roboreg_nodes broadcaster --help
```

We would further like to acknowledge the following supporters:
| Logo | Notes |
|---|---|
| | This work was supported by core and project funding from the Wellcome/EPSRC [WT203148/Z/16/Z; NS/A000049/1; WT101957; NS/A000027/1]. |
| | This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101016985 (FAROS project). |
| | Built at RViMLab. |
| | Built at CAI4CAI. |
| | Built at King's College London. |