Merge pull request #81 from tryolabs/demos/update-openpose
Demos/update openpose
facundo-lezama authored Jun 8, 2022
2 parents 5127c74 + 36a0410 commit d98926a
Showing 4 changed files with 115 additions and 63 deletions.
2 changes: 1 addition & 1 deletion demos/keypoints_bounding_boxes/README.md
@@ -6,7 +6,7 @@ An example of how to use Norfair to track objects from multiple classes using bo

1. Install YOLOv5 with `pip install yolov5`.
2. Install Norfair with `pip install norfair[video]`.
3. Install [OpenPose version 1.7](https://github.com/CMU-Perceptual-Computing-Lab/openpose/releases/tag/v1.7.0). You can follow [this](./install_openpose.ipynb) instructions to install and compile OpenPose.
3. Install [OpenPose version 1.7](https://github.com/CMU-Perceptual-Computing-Lab/openpose/releases/tag/v1.7.0). You can follow [these](../openpose/openpose_extrapolation.ipynb) instructions to install and compile OpenPose.
4. Download the [example video](https://user-images.githubusercontent.com/92468171/162247647-d4c13cdd-a127-455e-967f-531e24cf20cb.mp4) with `wget "https://user-images.githubusercontent.com/92468171/162247647-d4c13cdd-a127-455e-967f-531e24cf20cb.mp4" -O production_ID_4791196_10s.mp4`
5. Run `python keypoints_bounding_boxes_demo.py production_ID_4791196_10s.mp4 --classes 1 2 3 5 --track_points bbox --conf_thres 0.4`. With YOLOv5's default COCO labels, `--classes 1 2 3 5` restricts tracking to bicycles, cars, motorcycles, and buses.

17 changes: 11 additions & 6 deletions demos/openpose/README.md
@@ -1,19 +1,24 @@
# Speed OpenPose inference using tracking

Demo for extrapolating detections through skipped frames. Based on [OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose) version 1.4.
Demo for extrapolating detections through skipped frames. Based on [OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose) version 1.7.

## Instructions

1. Install Norfair with `pip install norfair[video]`.
2. Install [OpenPose version 1.4](https://github.com/CMU-Perceptual-Computing-Lab/openpose/releases/tag/v1.4.0).
3. Run `python openpose_extrapolation.py`.
2. Install [OpenPose version 1.7](https://github.com/CMU-Perceptual-Computing-Lab/openpose/releases/tag/v1.7.0). You can follow [these](./openpose_extrapolation.ipynb) instructions to install and compile OpenPose.
3. Run `python openpose_extrapolation.py <video file> --skip-frame 5`.
4. Optionally, adjust the `--skip-frame` and `--select-gpu` arguments, e.g. `python openpose_extrapolation.py <video file> --skip-frame 5 --select-gpu 0` to run on GPU 0.

Alternatively, the example can be executed entirely within `openpose_extrapolation.ipynb`.

## Explanation

If you just want to speed up inference on a detector, you can make your detector skip frames and let Norfair extrapolate the detections through the skipped frames.

In this example, we are skipping 2 out of every 3 frames, which should make the video process 3 times faster. This is because the time added by running the Norfair itself is negligible when compared to not having to run 2 inferences on a deep neural network.
In this example, we skip 4 out of every 5 frames, which should make processing the video roughly 5 times faster. The time Norfair itself adds is negligible compared to the 4 deep-neural-network inferences it saves.
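
The pattern, as a minimal sketch (the `detect` function is a stand-in for any expensive detector, and the Euclidean distance with its 30-pixel threshold is illustrative; the real demo wires in OpenPose and a keypoint-matching distance):

```python
import numpy as np

import norfair
from norfair import Detection, Tracker, Video

SKIP_FRAME = 5  # run the detector on only 1 out of every 5 frames


def euclidean_distance(detection, tracked_object):
    # Mean pixel distance between the detected points and the track's estimate
    return np.linalg.norm(detection.points - tracked_object.estimate, axis=1).mean()


def detect(frame):
    raise NotImplementedError  # stand-in: return a list of norfair.Detection


video = Video(input_path="video.mp4")
tracker = Tracker(distance_function=euclidean_distance, distance_threshold=30)

for i, frame in enumerate(video):
    if i % SKIP_FRAME == 0:
        # Passing `period` tells Norfair how many frames each update spans
        tracked_objects = tracker.update(detections=detect(frame), period=SKIP_FRAME)
    else:
        # Skipped frame: no inference, Norfair extrapolates the tracks instead
        tracked_objects = tracker.update(period=SKIP_FRAME)
    norfair.draw_tracked_objects(frame, tracked_objects)
    video.write(frame)
```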

This is what the results look like (the original videos can be found on [Kaggle](https://www.kaggle.com/datasets/ashayajbani/oxford-town-centre?select=TownCentreXVID.mp4)):

This is how the results look like:
![openposev17_1_skip_5_frames_short](https://user-images.githubusercontent.com/92468171/172702968-ae986ecc-9cfd-4cd2-9132-92c19ff36608.gif)

![openpose_skip_3_frames](../../docs/openpose_skip_3_frames.gif)
![openposev17_2_skip_5_frames_short](https://user-images.githubusercontent.com/92468171/172703046-e769a9fa-4c0e-4111-9478-eb2d8ad2cead.gif)
1 change: 1 addition & 0 deletions demos/openpose/openpose_extrapolation.ipynb
@@ -0,0 +1 @@
{"cells":[{"cell_type":"markdown","metadata":{"id":"gBiZPFydJozY"},"source":["# OpenPose Demo"]},{"cell_type":"markdown","metadata":{},"source":["## Install Norfair"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["!pip install norfair[video]"]},{"cell_type":"markdown","metadata":{},"source":["## Build OpenPose"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"jLe_ckKB0gKJ"},"outputs":[],"source":["# cmake ~ 30min\n","! wget -c \"https://github.com/Kitware/CMake/releases/download/v3.13.4/cmake-3.13.4.tar.gz\"\n","! tar xf cmake-3.13.4.tar.gz\n","! cd cmake-3.13.4 && ./configure && make && sudo make install"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"vsgRGv2A0rS9"},"outputs":[],"source":["# Basic ~ 2min\n","! sudo apt-get --assume-yes update\n","! sudo apt-get --assume-yes install build-essential\n","# OpenCV\n","! sudo apt-get --assume-yes install libopencv-dev\n","# General dependencies\n","! sudo apt-get --assume-yes install libatlas-base-dev libprotobuf-dev libleveldb-dev libsnappy-dev libhdf5-serial-dev protobuf-compiler\n","! sudo apt-get --assume-yes install --no-install-recommends libboost-all-dev\n","# Remaining dependencies, 14.04\n","! sudo apt-get --assume-yes install libgflags-dev libgoogle-glog-dev liblmdb-dev\n","# Python2 libs\n","! sudo apt-get --assume-yes install python-setuptools python-dev build-essential\n","! sudo easy_install pip\n","! sudo -H pip install --upgrade numpy protobuf opencv-python\n","# Python3 libs\n","! sudo apt-get --assume-yes install python3-setuptools python3-dev build-essential\n","! sudo apt-get --assume-yes install python3-pip\n","! sudo -H pip3 install --upgrade numpy protobuf opencv-python\n","# OpenCV 2.4 -> Added as option\n","# # sudo apt-get --assume-yes install libopencv-dev\n","# OpenCL Generic\n","! sudo apt-get --assume-yes install opencl-headers ocl-icd-opencl-dev\n","! sudo apt-get --assume-yes install libviennacl-dev\n"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"b7R8mCWqG-_J"},"outputs":[],"source":["# Clone Openpose\n","! git clone --depth 1 https://github.com/CMU-Perceptual-Computing-Lab/openpose.git "]},{"cell_type":"code","execution_count":null,"metadata":{"id":"ITisAwVgudc1"},"outputs":[],"source":["# Get Openpose model data ~ 2min\n","! cd openpose/models && chmod +r ./getModels.sh && sh getModels.sh"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"sUBlbMMCu7n0"},"outputs":[],"source":["# Build Openpose ~20min\n","! sed -i 's/execute_process(COMMAND git checkout master WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}\\/3rdparty\\/caffe)/execute_process(COMMAND git checkout f019d0dfe86f49d1140961f8c7dec22130c83154 WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}\\/3rdparty\\/caffe)/g' openpose/CMakeLists.txt\n","! cd openpose && rm -r build || true && mkdir build && cd build && cmake -DBUILD_PYTHON=ON .. && make -j`nproc`"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"fY6ntxWZyoMJ"},"outputs":[],"source":["# example demo usage ~1min\n","! cd openpose && ./build/examples/openpose/openpose.bin --video examples/media/video.avi --write_json ./output1/ --display 0 --write_video ./output1/openpose.avi"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"ApZyAjuEXiyk"},"outputs":[],"source":["# python example demo usage ~1min\n","! 
cd ./openpose/build/examples/tutorial_api_python && python3 01_body_from_image.py"]},{"cell_type":"markdown","metadata":{},"source":["## Run Demo"]},{"cell_type":"markdown","metadata":{},"source":["Get example code and videos:"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["!wget \"https://raw.githubusercontent.com/tryolabs/norfair/master/demos/openpose/openpose_extrapolation.py\" -O openpose_extrapolation.py\n","!wget \"https://user-images.githubusercontent.com/92468171/172700205-2fdcab9b-3820-477e-9c12-141762024c04.mp4\" -O oxford_openpose_raw_1.mp4"]},{"cell_type":"markdown","metadata":{},"source":["Before running the following cell you should modify `openpose_extrapolation.py` with the path to your openpose instalation folder."]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["!python openpose_extrapolation.py oxford_openpose_raw_1.mp4 --skip-frame 5"]},{"cell_type":"markdown","metadata":{},"source":["### Display"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["!ffmpeg -i ./oxford_openpose_raw_1_out.mp4 -vcodec vp9 ./oxford_openpose_raw_1_out.webm"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["import io\n","from base64 import b64encode\n","from IPython.display import HTML\n","\n","with io.open('oxford_openpose_raw_1_out.webm','r+b') as f:\n"," mp4 = f.read()\n","data_url = \"data:video/webm;base64,\" + b64encode(mp4).decode()\n","HTML(\"\"\"\n","<video width=800 controls>\n"," <source src=\"%s\" type=\"video/webm\">\n","</video>\n","\"\"\" % data_url)"]}],"metadata":{"accelerator":"GPU","colab":{"collapsed_sections":[],"name":"install_openpose.ipynb","provenance":[]},"kernelspec":{"display_name":"Python 3","name":"python3"},"language_info":{"name":"python"}},"nbformat":4,"nbformat_minor":0}
158 changes: 102 additions & 56 deletions demos/openpose/openpose_extrapolation.py
@@ -6,81 +6,127 @@
import norfair
from norfair import Detection, Tracker, Video

# Insert the path to your openpose instalation folder here
openpose_install_path = "openpose/openpose"
frame_skip_period = 3
detection_threshold = 0.01
distance_threshold = 0.4
# Import openpose
openpose_install_path = (
    "/openpose"  # Insert the path to your openpose installation folder here
)
try:
    sys.path.append(openpose_install_path + "/build/python")
    from openpose import pyopenpose as op
except ImportError as e:
    print(
        "Error: OpenPose library could not be found. Did you enable `BUILD_PYTHON` in CMake and have this Python script in the right folder?"
    )
    raise e


# Define constants
DETECTION_THRESHOLD = 0.01
DISTANCE_THRESHOLD = 0.4
INITIALIZATION_DELAY = 0
POINTWISE_HIT_COUNTER_MAX = 2
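# With INITIALIZATION_DELAY = 0, tracked objects are initialized and drawn as
# soon as their first detection is matched; POINTWISE_HIT_COUNTER_MAX bounds
# how long an individual keypoint stays alive without being re-detected.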

# Wrapper implementation for OpenPose detector
class OpenposeDetector:
    def __init__(self):
    def __init__(self, num_gpu_start=None):
        # Set OpenPose flags
        config = {}
        config["dir"] = openpose_install_path
        config["logging_level"] = 3
        config["output_resolution"] = "-1x-1"  # 320x176
        config["net_resolution"] = "-1x768"  # 320x176
        config["model_folder"] = openpose_install_path + "/models/"
        config["model_pose"] = "BODY_25"
        config["logging_level"] = 3
        config["output_resolution"] = "-1x-1"
        config["net_resolution"] = "-1x768"
        config["num_gpu"] = 1
        config["alpha_pose"] = 0.6
        config["scale_gap"] = 0.3
        config["scale_number"] = 1
        config["render_threshold"] = 0.05
        config[
            "num_gpu_start"
        ] = 0  # If GPU version is built, and multiple GPUs are available, set the ID here
        config["scale_number"] = 1
        config["scale_gap"] = 0.3
        config["disable_blending"] = False
        openpose_dir = config["dir"]
        sys.path.append(openpose_dir + "/build/python/openpose")
        from openpose import OpenPose  # noqa

        config["default_model_folder"] = openpose_dir + "/models/"
        self.detector = OpenPose(config)
        # If GPU version is built, and multiple GPUs are available,
        # you can change the ID using the num_gpu_start parameter
        if num_gpu_start is not None:
            config["num_gpu_start"] = num_gpu_start

        # Starting OpenPose
        self.detector = op.WrapperPython()
        self.detector.configure(config)
        self.detector.start()

    def __call__(self, image):
        return self.detector.forward(image, False)
        return self.detector.emplaceAndPop(image)


# Distance function
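# A keypoint "matches" when it lies within KEYPOINT_DIST_THRESHOLD of the
# tracked estimate and clears DETECTION_THRESHOLD on both the new detection and
# the track's last detection; more matches map to a smaller distance via
# 1 / (1 + match_num). With DISTANCE_THRESHOLD = 0.4, at least two keypoints
# must match: 1 / (1 + 1) = 0.5 is rejected, while 1 / (1 + 2) ≈ 0.33 passes.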
def keypoints_distance(detected_pose, tracked_pose):
    distances = np.linalg.norm(detected_pose.points - tracked_pose.estimate, axis=1)
    match_num = np.count_nonzero(
        (distances < keypoint_dist_threshold)
        * (detected_pose.scores > detection_threshold)
        * (tracked_pose.last_detection.scores > detection_threshold)
        (distances < KEYPOINT_DIST_THRESHOLD)
        * (detected_pose.scores > DETECTION_THRESHOLD)
        * (tracked_pose.last_detection.scores > DETECTION_THRESHOLD)
    )
    return 1 / (1 + match_num)


pose_detector = OpenposeDetector()
parser = argparse.ArgumentParser(description="Track human poses in a video.")
parser.add_argument("files", type=str, nargs="+", help="Video files to process")
args = parser.parse_args()
if __name__ == "__main__":

for input_path in args.files:
    video = Video(input_path=input_path)
    tracker = Tracker(
        distance_function=keypoints_distance,
        distance_threshold=distance_threshold,
        detection_threshold=detection_threshold,
        pointwise_hit_counter_max=2,
    # CLI configuration
    parser = argparse.ArgumentParser(description="Track human poses in a video.")
    parser.add_argument("files", type=str, nargs="+", help="Video files to process")
    parser.add_argument(
        "--skip-frame", dest="skip_frame", type=int, default=1, help="Frame skip period"
    )
    parser.add_argument(
        "--select-gpu",
        dest="select_gpu",
        help="Number of the gpu that you want to use",
        default=None,
        type=int,
    )
    keypoint_dist_threshold = video.input_height / 25

    for i, frame in enumerate(video):
        if i % frame_skip_period == 0:
            detected_poses = pose_detector(frame)
            detections = (
                []
                if not detected_poses.any()
                else [
                    Detection(p, scores=s)
                    for (p, s) in zip(detected_poses[:, :, :2], detected_poses[:, :, 2])
                ]
            )
            tracked_objects = tracker.update(
                detections=detections, period=frame_skip_period
            )
            norfair.draw_points(frame, detections)
        else:
            tracked_objects = tracker.update()
        norfair.draw_tracked_objects(frame, tracked_objects)
        video.write(frame)
    args = parser.parse_args()

    # Process Videos
    detector = OpenposeDetector(args.select_gpu)
    datum = op.Datum()
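    # A single Datum object is reused for every frame; OpenPose writes its
    # results (e.g. poseKeypoints) into it in place on each call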

    for input_path in args.files:
        print(f"Video: {input_path}")
        video = Video(input_path=input_path)
        tracker = Tracker(
            distance_function=keypoints_distance,
            distance_threshold=DISTANCE_THRESHOLD,
            detection_threshold=DETECTION_THRESHOLD,
            initialization_delay=INITIALIZATION_DELAY,
            pointwise_hit_counter_max=POINTWISE_HIT_COUNTER_MAX,
        )
        KEYPOINT_DIST_THRESHOLD = video.input_height / 25

        for i, frame in enumerate(video):
            if i % args.skip_frame == 0:
                datum.cvInputData = frame
                detector(op.VectorDatum([datum]))
                detected_poses = datum.poseKeypoints

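                # poseKeypoints is None when OpenPose detects nobody; update
                # the tracker with no detections and skip the rest of the loop
                # (note that this frame is not written to the output video)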
                if detected_poses is None:
                    tracked_objects = tracker.update(period=args.skip_frame)
                    continue

                detections = (
                    []
                    if not detected_poses.any()
                    else [
                        Detection(p, scores=s)
                        for (p, s) in zip(
                            detected_poses[:, :, :2], detected_poses[:, :, 2]
                        )
                    ]
                )
                tracked_objects = tracker.update(
                    detections=detections, period=args.skip_frame
                )
                norfair.draw_points(frame, detections)
            else:
                tracked_objects = tracker.update(period=args.skip_frame)

            norfair.draw_tracked_objects(frame, tracked_objects)
            video.write(frame)
