
Commit

update tracking
Fang-Haoshu committed Aug 16, 2020
1 parent 61aa99a commit 0ad7db0
Showing 18 changed files with 29 additions and 10 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -65,7 +65,7 @@ More results and models are available in the [docs/MODEL_ZOO.md](docs/MODEL_ZOO.
<img src="docs/posetrack2.gif", width="344">
</p>

Please read [PoseFlow/README.md](PoseFlow/) for details.
Please read [trackers/README.md](trackers/) for details.

### CrowdPose
<p align='center'>
4 changes: 2 additions & 2 deletions alphapose/utils/vis.py
@@ -136,7 +136,7 @@ def vis_frame_fast(frame, im_res, opt, format='coco'):
if 'box' in human.keys():
bbox = human['box']
else:
from PoseFlow.poseflow_infer import get_box
from trackers.PoseFlow.poseflow_infer import get_box
keypoints = []
for n in range(kp_scores.shape[0]):
keypoints.append(float(kp_preds[n, 0]))
@@ -284,7 +284,7 @@ def vis_frame(frame, im_res, opt, format='coco'):
bbox = human['box']
bbox = [bbox[0], bbox[0]+bbox[2], bbox[1], bbox[1]+bbox[3]]#xmin,xmax,ymin,ymax
else:
from PoseFlow.poseflow_infer import get_box
from trackers.PoseFlow.poseflow_infer import get_box
keypoints = []
for n in range(kp_scores.shape[0]):
keypoints.append(float(kp_preds[n, 0]))
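Both hunks above flatten the keypoint predictions into a flat list before calling `get_box` at the tracker's new path. As a rough illustration only (the real helper lives in `trackers/PoseFlow/poseflow_infer.py` and may differ), a box in the `xmin, xmax, ymin, ymax` order used by `vis_frame` could be derived from a flat `[x0, y0, x1, y1, ...]` list like this:

```python
# Hypothetical sketch of a get_box-style helper: given a flat
# [x0, y0, x1, y1, ...] keypoint list, return an enclosing box
# with a margin, clipped to the image bounds.
def get_box_sketch(keypoints, img_w, img_h, margin=10):
    xs = keypoints[0::2]
    ys = keypoints[1::2]
    xmin = max(min(xs) - margin, 0)
    ymin = max(min(ys) - margin, 0)
    xmax = min(max(xs) + margin, img_w)
    ymax = min(max(ys) + margin, img_h)
    return [xmin, xmax, ymin, ymax]  # same order as the vis_frame comment

print(get_box_sketch([30.0, 40.0, 60.0, 90.0], 640, 480))  # [20.0, 70.0, 30.0, 100.0]
```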
2 changes: 1 addition & 1 deletion alphapose/utils/writer.py
@@ -44,7 +44,7 @@ def __init__(self, cfg, opt, save_video=False,
os.mkdir(opt.outputpath + '/vis')

if opt.pose_flow:
from PoseFlow.poseflow_infer import PoseFlowWrapper
from trackers.PoseFlow.poseflow_infer import PoseFlowWrapper
self.pose_flow_wrapper = PoseFlowWrapper(save_path=os.path.join(opt.outputpath, 'poseflow'))

def start_worker(self, target):
5 changes: 4 additions & 1 deletion docs/run.md
@@ -25,11 +25,14 @@ Here, we first list the flags and other parameters you can tune. Default paramet
- `--min_box_area`: Minimum box area to keep; you can set it to e.g. 100 to filter out small people.
- `--gpus`: Choose which CUDA devices to use by index, comma-separated for multiple GPUs, e.g. 0,1,2,3. (Input -1 for CPU only.)

- `--pose_track`: Enable the tracking pipeline with human re-ID features; it is currently the best-performing pose tracker.
- `--pose_flow`: This flag will be deprecated. It enables the older PoseFlow tracker.
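As a hedged sketch (the actual flag definitions live in `scripts/demo_inference.py` and may use different defaults and help text), the tracking-related flags above might be declared roughly like this:

```python
import argparse

# Illustrative sketch only, not the shipped demo_inference.py.
parser = argparse.ArgumentParser(description='AlphaPose demo (sketch)')
parser.add_argument('--min_box_area', type=int, default=0,
                    help='min box area to filter out')
parser.add_argument('--gpus', type=str, default='0',
                    help='comma-separated CUDA device indices; -1 for CPU only')
parser.add_argument('--pose_track', action='store_true',
                    help='enable the re-ID based tracking pipeline')
parser.add_argument('--pose_flow', action='store_true',
                    help='enable the older PoseFlow tracker')

opt = parser.parse_args(['--pose_track', '--gpus', '0,1'])
gpus = [int(i) for i in opt.gpus.split(',')]
print(opt.pose_track, opt.pose_flow, gpus)  # True False [0, 1]
```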

All the flags available here: [link](../scripts/demo_inference.py#L22)


## Parameters
1. yolo detector config is [here](../detector/yolo_cfg.py)
- `CONFIDENCE`: Confidence threshold for human detection. Lowering the value can improve the final accuracy but decreases the speed. Default is 0.05.
- `NMS_THRES`: NMS threshold for human detection. Increasing the value can improve the final accuracy but decreases the speed. Default is 0.6.
- `INP_DIM`: The input size of the detection network; it should be a multiple of 32. Default is 608. Increasing it may improve the accuracy.
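A minimal sketch of these settings, shown here as a plain dict for illustration (the authoritative values and structure are in `detector/yolo_cfg.py`):

```python
# Illustrative sketch of the detector settings described above;
# the real values live in detector/yolo_cfg.py.
yolo_cfg = {
    'CONFIDENCE': 0.05,  # human-detection confidence threshold
    'NMS_THRES': 0.6,    # NMS IoU threshold
    'INP_DIM': 608,      # network input size, must be a multiple of 32
}

assert yolo_cfg['INP_DIM'] % 32 == 0, 'INP_DIM must be a multiple of 32'
print(yolo_cfg['INP_DIM'] // 32)  # 19
```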
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes
File renamed without changes
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
@@ -175,7 +175,7 @@ def load_pose_boxes(img_name):
tasks.append((img1_path,img2_path, image_dir, frame_id, next_frame_id))

# do the matching parallel
parallel_process(tasks, orb_matching, n_jobs=16)
parallel_process(tasks, orb_matching, n_jobs=8)

print("Start pose tracking...\n")
# tracking process
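The hunk above lowers `n_jobs` for `parallel_process` from 16 to 8. A minimal sketch of such a helper, assuming it simply maps a worker function over task tuples with a bounded worker pool (threads here for simplicity; the real PoseFlow implementation may use processes):

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative parallel_process-style helper: apply `worker` to each
# task tuple using at most n_jobs concurrent workers, preserving order.
def parallel_process(tasks, worker, n_jobs=8):
    with ThreadPoolExecutor(max_workers=n_jobs) as pool:
        return list(pool.map(lambda args: worker(*args), tasks))

# Toy worker standing in for orb_matching, which receives a task tuple.
def toy_match(frame_id, next_frame_id):
    return (frame_id, next_frame_id)

results = parallel_process([(0, 1), (1, 2)], toy_match, n_jobs=2)
print(results)  # [(0, 1), (1, 2)]
```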
File renamed without changes.
24 changes: 20 additions & 4 deletions trackers/README.md
@@ -1,18 +1,34 @@
# Pose Tracking
## Models
# Pose Tracking Module for AlphaPose


## Human-ReID based tracking (Recommended)
Currently the best-performing tracking model. Paper coming soon.

### Getting started
Download [human reid model](https://mega.nz/#!YTZFnSJY!wlbo_5oa2TpDAGyWCTKTX1hh4d6DvJhh_RUA2z6i_so) and place it into `./trackers/weights/`.

Then simply run AlphaPose with the additional flag `--pose_track`.

You can try different person re-ID models by modifying `cfg.arch` and `cfg.loadmodel` in `./trackers/tracker_cfg.py`.
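For illustration, the two fields mentioned above might look roughly like this (field names from the text; the values here are placeholders, not the shipped defaults in `./trackers/tracker_cfg.py`):

```python
from types import SimpleNamespace

# Placeholder values for illustration only; see ./trackers/tracker_cfg.py
# for the actual architecture name and weight path.
cfg = SimpleNamespace(
    arch='my_reid_backbone',                           # hypothetical re-ID architecture
    loadmodel='./trackers/weights/my_reid_model.pth',  # hypothetical weight path
)
print(cfg.arch, cfg.loadmodel)
```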

If you want to train your own re-ID model, please refer to this [project](https://github.com/KaiyangZhou/deep-person-reid).
## Demo

### Demo
``` bash
./scripts/inference.sh ${CONFIG} ${CHECKPOINT} ${VIDEO_NAME} ${OUTPUT_DIR} --pose_track
```
## Todo
### Todo
- [ ] Evaluation Tools for PoseTrack
- [ ] More Models
- [ ] Training code for [PoseTrack Dataset](https://posetrack.net/)


## PoseFlow human tracking
This tracker is based on our BMVC 2018 paper PoseFlow.

### Getting started

Simply run AlphaPose with the additional flag `--pose_flow`.

### More info
For more info, please refer to [PoseFlow/README.md](PoseFlow/)
