ros_openpose

ROS wrapper for OpenPose. It currently supports the following cameras (support for others is planned)-

  • Any color camera such as webcam etc ✔️
  • Intel RealSense Camera ✔️
  • Microsoft Kinect v2 Camera ✔️
  • Stereolabs ZED2 Camera ✔️ (see thanks section)
  • Azure Kinect Camera ✔️

GIF showing a demo of ros_openpose
Sample video showing the visualization in RViz

Overview

  1. Dependencies
  2. Installation
  3. Configuration
  4. Operation Modes and APIs
  5. Camera Run Instructions
  6. FAQ
  7. Test Configuration
  8. Citation
  9. Issues
  10. Thanks

Dependencies

Note: This package requires OpenPose (see the Troubleshooting subsection below for supported versions) and a working ROS installation. Additionally, a camera-specific ROS driver, such as the RealSense, Kinect v2, Azure Kinect, or ZED driver, is required as per your camera model.

Installation

  1. Make sure to download the complete repository. Use git clone https://github.com/ravijo/ros_openpose.git or download the zip, whichever is convenient.
  2. Invoke the catkin tool inside the ROS workspace, i.e., catkin_make
  3. Make the Python scripts executable by using the commands below (a consolidated sketch of all three steps follows this list)-
    roscd ros_openpose/scripts
    chmod +x *.py
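
The steps above can be summarized as follows. This is a minimal sketch assuming a catkin workspace at ~/catkin_ws; adjust the paths to match your setup-

    cd ~/catkin_ws/src
    git clone https://github.com/ravijo/ros_openpose.git
    cd ~/catkin_ws
    catkin_make
    source devel/setup.bash
    roscd ros_openpose/scripts
    chmod +x *.py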

Troubleshooting

  1. While compiling the package, if the following error is reported at the terminal-
    error: no matching function for call to ‘op::WrapperStructPose::WrapperStructPose(<brace-enclosed initializer list>)’
    
    In this case, please check out OpenPose version 1.7.0 by running the following command at the root directory of the OpenPose installation-
    git checkout tags/v1.7.0
  2. While compiling the package, if any of the following errors are reported at the terminal-
    error: ‘check’ is not a member of ‘op’
    
    error: no match for ‘operator=’ (operand types are ‘op::Matrix’ and ‘const cv::Mat’)
    
    error: invalid initialization of reference of type ‘const op::String&’ from expression of type ‘fLS::clstring {aka std::__cxx11::basic_string<char>}’
    
    In this case, please check out OpenPose version 1.6.0 by running the following command at the root directory of the OpenPose installation-
    git checkout tags/v1.6.0
    Do not forget to run sudo make install to install OpenPose system-wide.
  3. If compilation fails with the following error-
    /usr/bin/ld: cannot find -lThreads::Threads
    
    In this case, please add the following line to CMakeLists.txt-
    find_package(Threads REQUIRED)
    
    For more information, please check here.
  4. While compiling the package, if the following error is reported at the terminal-
    error: no match for ‘operator=’ (operand types are ‘op::Matrix’ and ‘const cv::Mat’)
    
    In this case, please update OpenPose. Most likely, an old version of OpenPose is installed, so please check out OpenPose from the master branch as described here. Alternatively, you can check out OpenPose version 1.5.1 by running the following command at the root directory of the OpenPose installation-
    git checkout tags/v1.5.1
    Do not forget to run sudo make install to install OpenPose system-wide.
    Note that OpenPose version 1.5.1 is still supported. In every case above, OpenPose must be rebuilt and reinstalled after switching tags, as sketched below-
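
This sketch assumes OpenPose was cloned to ~/openpose and built in a build subdirectory; adjust the tag and paths as needed-

    cd ~/openpose
    git checkout tags/v1.7.0   # or v1.6.0 / v1.5.1, depending on the error above
    cd build
    cmake ..
    make -j"$(nproc)"
    sudo make install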

Configuration

The main launch file is run.launch. It has the following important arguments-

  1. model_folder: It represents the full path to the model directory of OpenPose. Modify it according to the OpenPose installation on your machine. Please edit the run.launch file as shown below-
    <arg name="openpose_args" value="--model_folder /home/ravi/openpose/models/"/>
  2. openpose_args: It is provided to support the standard OpenPose command-line arguments (a combined example is shown after this list). Please edit the run.launch file as shown below-
    <arg name="openpose_args" value="--face --hand"/>
  3. camera: It can only be one of the following: realsense, kinect, azurekinect, zed2, nodepth. The default value of this argument is realsense. See below for more information.
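
The first two arguments above go through the same mechanism: any standard OpenPose flag, including --model_folder, is appended to the openpose_args value. For example, assuming the models live under /home/ravi/openpose/models/ as above, run.launch may contain-

    <arg name="openpose_args" value="--model_folder /home/ravi/openpose/models/ --face --hand"/>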

Operation Modes and APIs

  • Synchronous API (see thanks section)
    • Uses the op_wrapper.emplaceAndPop() method provided by OpenPose
    • By default, this mode is disabled. To enable it, please set synchronous:=true and provide py_openpose_path while calling run.launch. For example:
      roslaunch ros_openpose run.launch camera:=realsense synchronous:=true py_openpose_path:=absolute_path_to_py_openpose
    • If the arg py_openpose_path is not specified, the C++ node is used; otherwise, the Python node is used. Therefore, please compile OpenPose accordingly if you plan to use the Python bindings of OpenPose.
  • Asynchronous API
    • Uses two workers, op::WorkerProducer and op::WorkerConsumer, provided by OpenPose
    • Uses the OpenPose C++ APIs
    • By default, this mode is enabled. Users are advised to try synchronous:=true if they are not satisfied with the performance. Example invocations of both modes are shown below-
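
For reference, the mode is selected purely from the command line. The sketch below shows both invocations for a RealSense camera; replace absolute_path_to_py_openpose with the actual path to the OpenPose Python bindings-

    # asynchronous mode (default)
    roslaunch ros_openpose run.launch camera:=realsense

    # synchronous mode, using the Python bindings of OpenPose
    roslaunch ros_openpose run.launch camera:=realsense synchronous:=true py_openpose_path:=absolute_path_to_py_openpose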

Camera Run Instructions

In this section, you will find the instructions for running ros_openpose with one of the following cameras: Color camera, Realsense, Kinect v2, Azure Kinect, and ZED2. If you have a different camera and would like to use ros_openpose with depth properties, please turn to the FAQ section for tips and guidance on achieving this.

Steps to Run with Any Color Camera, such as a Webcam

  1. Make sure that ROS env is sourced properly by executing the following command-
    source devel/setup.bash
  2. Start the ROS package of your camera. This package captures images from your camera and publishes them on a ROS topic. Make sure to set color_topic to the correct ROS topic inside the config_nodepth.launch file (a concrete end-to-end example is shown after the note below).
  3. Invoke the main launch file by executing the following command-
    roslaunch ros_openpose run.launch camera:=nodepth

Note: To confirm that the ROS package of your camera is working properly, please check if it is publishing images by executing the following command-

rosrun image_view image_view image:=YOUR_ROSTOPIC

Here YOUR_ROSTOPIC must have the same value as color_topic.
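
As a concrete end-to-end example, suppose the webcam is driven by the usb_cam package (an assumption; any driver publishing sensor_msgs/Image works), whose default topic is /usb_cam/image_raw-

    # 1. start the camera driver (hypothetical choice; replace with your own driver)
    rosrun usb_cam usb_cam_node

    # 2. verify that images are being published
    rosrun image_view image_view image:=/usb_cam/image_raw

    # 3. set color_topic to /usb_cam/image_raw inside config_nodepth.launch, then launch
    roslaunch ros_openpose run.launch camera:=nodepth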

Steps to Run with Intel RealSense Camera

  1. Make sure that ROS env is sourced properly by executing the following command-
    source devel/setup.bash
  2. Invoke the main launch file by executing the following command-
    roslaunch ros_openpose run.launch

Steps to Run with Microsoft Kinect v2 Camera

  1. Make sure that ROS env is sourced properly by executing the following command-
    source devel/setup.bash
  2. Invoke the main launch file by executing the following command-
    roslaunch ros_openpose run.launch camera:=kinect

Steps to Run with Azure Kinect Camera

  1. Make sure that ROS env is sourced properly by executing the following command-
    source devel/setup.bash
  2. Invoke the main launch file by executing the following command-
    roslaunch ros_openpose run.launch camera:=azurekinect

Steps to Run with Stereolabs ZED2 Camera

  1. Change the parameter openni_depth_mode in zed-ros-wrapper/zed_wrapper/params/common.yaml to true (default is false).
  2. Make sure that ROS env is sourced properly by executing the following command-
    source devel/setup.bash
  3. Invoke the main launch file by executing the following command-
    roslaunch ros_openpose run.launch camera:=zed2

FAQ

  1. How to add my own depth camera into this wrapper?

    You might be able to add your own depth camera by creating a config_<camera_name>.launch file based on one of the existing ones and modifying it to suit your specific camera. Go inside the launch subdirectory, make a copy of config_realsense.launch, and save it as config_<camera_name>.launch. Whatever you choose as camera_name should be passed as the camera argument when launching run.launch. Make the necessary changes to the color_topic, depth_topic, cam_info_topic, and frame_id arguments in the file (a hypothetical sketch is given at the end of this item). Make sure that:

    • Input depth images are already aligned to the color images.
    • Depth and color images have the same dimensions, so each pixel of the color image can be mapped to its corresponding depth pixel at the same x, y location.
    • The depth images contain depth values in millimeters and are represented as TYPE_16UC1 in OpenCV.
    • The cam_info_topic contains the camera calibration parameters supplied by the manufacturer.

    To enable visualization, you also need to create modified versions of the RViz files only_person_<camera_name>.rviz and person_pointcloud_<camera_name>.rviz.

    Please check here for a similar question.

    If you successfully create the modified files and run ros_openpose with a depth camera that is not mentioned here, please share your files and the necessary steps for running with your camera, so that this useful information can be made available to others.
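
    As a rough illustration, the four arguments mentioned above might end up looking like the lines below inside the copied config_<camera_name>.launch. The topic and frame names are purely hypothetical placeholders; keep the attribute style (value or default) and the rest of the copied file as they are-

    <!-- hypothetical placeholder values; replace with the topics published by your camera driver -->
    <arg name="color_topic"    value="/my_camera/color/image_raw"/>
    <arg name="depth_topic"    value="/my_camera/aligned_depth_to_color/image_raw"/>
    <arg name="cam_info_topic" value="/my_camera/color/camera_info"/>
    <arg name="frame_id"       value="my_camera_color_optical_frame"/>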

  2. How to run this wrapper with limited resources such as low GPU, RAM, etc.?

    Below is a brief explanation of the ros_openpose package. The package does not use the GPU directly; however, it depends on OpenPose, which uses the GPU heavily. It contains a few ROS subscribers, which copy data from the camera over ROS. It then employs two workers, namely the input and output workers. The job of the input worker is to provide color images to OpenPose, whereas the role of the output worker is to receive the keypoints detected in 2D (pixel) space and convert them to 3D coordinates. If the camera provides no new frame, the input worker waits for 10 milliseconds and then checks again; this cycle continues until a new frame arrives. In this way, the CPU gets some time to sleep, which indirectly lowers the CPU usage.

    • If the CPU usage is high, try increasing the sleep value (SLEEP_MS) as defined here.
    • Try reducing --net_resolution and using --model_pose COCO.
    • Try disabling multithreading in OpenPose simply by supplying --disable_multi_thread to openpose_args inside the run.launch file (see the combined example at the end of this item).
    • Another easy option is to decrease the FPS of your camera; lower it as far as your application allows.

    Please check here for a similar question.
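
    Putting the OpenPose-side suggestions above together, the openpose_args value in run.launch could look like the line below. The 320x176 resolution is only an illustrative value (each dimension must remain a multiple of 16), not a recommendation-

    <arg name="openpose_args" value="--model_folder /home/ravi/openpose/models/ --net_resolution 320x176 --model_pose COCO --disable_multi_thread"/>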

  3. How to find the version of the OpenPose installed on my machine?

    Please use the shell script get_openpose_version.sh as shown below-

    sh get_openpose_version.sh

    You can use cmake as well. See here

Note on Test Configuration

This package has been tested on the following environment configuration-

Name               Value
OS                 Ubuntu 14.04.6 LTS (64-bit)
RAM                16 GB
Processor          Intel® Core™ i7-7700 CPU @ 3.60 GHz × 8
Kernel Version     4.4.0-148-generic
ROS                Indigo
GCC Version        5.5.0
OpenCV Version     2.4.8
OpenPose Version   1.5.1
GPU                GeForce GTX 1080
CUDA Version       8.0.61
cuDNN Version      5.1.10

Citation

If you used ros_openpose for your work, please cite it.

@misc{ros_openpose,
    author = {Joshi, Ravi P. and Choi, Andrew and Tan, Xiang Zhi and Van den Broek, Marike K and Luo, Rui and Choi, Brian},
    title = {{ROS OpenPose}},
    year = {2019},
    publisher = {GitHub},
    journal = {GitHub Repository},
    howpublished = {\url{https://github.com/ravijo/ros_openpose}}
}

Issues (or Error Reporting)

Please check here and create issues accordingly.

Thanks

The following authors are sincerely acknowledged for their improvements to this package-