Perception 3D is a graph-based framework that lets users develop applications for mobile robots, such as path planning, marking/clearing obstacles, and creating no-enter/speed-limit zones. You can refer to the demos below:
Demos: Global planning in 3D map | Marking/Tracking/Clearing | Speed-limit/no-enter zone
Perception 3D features:
- Sensor support:
  - Multilayer spinning lidar (Velodyne/Ouster/Leishen)
  - Depth camera (RealSense/OAK)
  - Scanning lidar (Livox Mid-360/Unitree 4D LiDAR L1)
- Zone feature support:
  - Static layer
  - Speed-limit layer
  - No-enter layer
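As a rough illustration of how such zone layers are typically wired together in a ROS 2 parameter file, a layered setup might look like the sketch below. All plugin and parameter names here are hypothetical assumptions for illustration only, not this package's actual API; consult the configuration files in the repository for the real names.

```yaml
# Hypothetical sketch of a layered perception configuration.
# Every plugin and parameter name below is an illustrative assumption.
perception_3d_node:
  ros__parameters:
    plugins: ["static_layer", "speed_limit_layer", "no_enter_layer"]
    static_layer:
      ground_pcd_file: "/path/to/map_ground.pcd"   # assumed parameter
    speed_limit_layer:
      zone_pcd_file: "/path/to/speed_limit.pcd"    # assumed parameter
      speed_limit: 0.5                             # m/s, illustrative value
    no_enter_layer:
      zone_pcd_file: "/path/to/no_enter.pcd"       # assumed parameter
```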
Tutorial: Multilayer spinning lidar demo
The package runs in Docker, so we need to build the image first. Both x64 (tested on an Intel NUC) and arm64 (tested on NVIDIA Jetson JetPack 5.1.3/6) are supported.
cd ~
git clone https://github.com/dddmobilerobot/dddmr_navigation.git
cd ~/dddmr_navigation && git submodule init && git submodule update
cd ~/dddmr_navigation/dddmr_docker/docker_file && ./build.bash
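Before downloading the demo files, you can sanity-check that the clone and submodule steps landed where expected. The paths below are assumptions taken from the commands above; adjust them if you cloned elsewhere.

```shell
# Check that the directories created by the steps above exist.
# Paths are assumed from the clone commands; adjust if you cloned elsewhere.
for d in "$HOME/dddmr_navigation" "$HOME/dddmr_navigation/dddmr_docker"; do
  if [ -d "$d" ]; then
    echo "ok: $d"
  else
    echo "missing: $d (re-run the clone/submodule steps)"
  fi
done
```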
A ROS 2 bag containing multilayer lidar data from a Leishen C16 will be downloaded to run the demo.
cd ~/dddmr_navigation/src/dddmr_perception_3d && ./download_files.bash
> [!NOTE]
> The following command will create an interactive Docker container using the image we built. We will launch the demo manually in the container.
cd ~/dddmr_navigation/dddmr_docker && ./run_demo.bash
The bag file will start playing automatically 3 seconds after launch.
cd ~/dddmr_navigation && source /opt/ros/humble/setup.bash && colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release
source install/setup.bash
ros2 launch perception_3d multilayer_spinning_lidar_3d_ros_launch.py
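Once the launch file is running, you can confirm the pipeline is up by listing active topics from another shell inside the container. The snippet below is guarded on `ros2` being available; the exact topic names depend on the bag and launch configuration, so none are assumed here.

```shell
# Run inside the demo container while the launch file is active.
# Guarded so it degrades gracefully outside a ROS 2 environment.
if command -v ros2 >/dev/null 2>&1; then
  ros2 topic list   # sensor and perception topics should appear here
else
  echo "ros2 not found: run this inside the container after sourcing ROS 2"
fi
```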
Tutorial: Multiple depth cameras demo
The package runs in Docker, so we need to build the image first. Both x64 (tested on an Intel NUC) and arm64 (tested on NVIDIA Jetson JetPack 5.1.3/6) are supported.
cd ~
git clone https://github.com/dddmobilerobot/dddmr_navigation.git
cd ~/dddmr_navigation && git submodule init && git submodule update
cd ~/dddmr_navigation/dddmr_docker/docker_file && ./build.bash
A ROS 2 bag containing depth images from two cameras will be downloaded to run the demo.
cd ~/dddmr_navigation/src/dddmr_perception_3d && ./download_files.bash
> [!NOTE]
> The following command will create an interactive Docker container using the image we built. We will launch the demo manually in the container.
cd ~/dddmr_navigation/dddmr_docker && ./run_demo.bash
The bag file will start playing automatically 3 seconds after launch.
cd ~/dddmr_navigation && source /opt/ros/humble/setup.bash && colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release
source install/setup.bash
ros2 launch perception_3d multi_depth_camera_3d_ros_launch.py
Tutorial: Scanning lidar demo
The package runs in Docker, so we need to build the image first. Both x64 (tested on an Intel NUC) and arm64 (tested on NVIDIA Jetson JetPack 5.1.3/6) are supported.
cd ~
git clone https://github.com/dddmobilerobot/dddmr_navigation.git
cd ~/dddmr_navigation && git submodule init && git submodule update
cd ~/dddmr_navigation/dddmr_docker/docker_file && ./build.bash
A ROS 2 bag containing scanning lidar data will be downloaded to run the demo.
cd ~/dddmr_navigation/src/dddmr_perception_3d && ./download_files.bash
> [!NOTE]
> The following command will create an interactive Docker container using the image we built. We will launch the demo manually in the container.
cd ~/dddmr_navigation/dddmr_docker && ./run_demo.bash
The bag file will start playing automatically 3 seconds after launch.
cd ~/dddmr_navigation && source /opt/ros/humble/setup.bash && colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release
source install/setup.bash
ros2 launch perception_3d scanning_lidar_3d_ros_launch.py