Reference paper: http://openaccess.thecvf.com/content_cvpr_2017/papers/Simon_Hand_Keypoint_Detection_CVPR_2017_paper.pdf
-
Dataset
To train this network, I used the MPII Hand Keypoint dataset. Dataset information is here. The annotation file contains 21 keypoints per hand, and from those 21 keypoints we can derive 20 limbs, as sketched below.
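The exact limb connectivity depends on the keypoint ordering in the annotation file; here is a minimal sketch, assuming the common 21-keypoint hand layout (keypoint 0 is the wrist, followed by four joints per finger), which yields exactly 20 limbs:

```python
# Hypothetical limb derivation, assuming keypoint 0 is the wrist and
# keypoints 1-20 are four joints per finger (thumb, index, middle,
# ring, pinky). The real ordering in the annotation file may differ.
LIMBS = []
for finger in range(5):
    base = 1 + 4 * finger
    LIMBS.append((0, base))            # wrist -> base joint of the finger
    for j in range(base, base + 3):
        LIMBS.append((j, j + 1))       # joint -> next joint up the finger

assert len(LIMBS) == 20
```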
-
Training
The upper graph shows the confidence map loss, and the lower graph shows the PAF loss.
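For reference, a minimal sketch of the two losses, assuming a plain L2 penalty on each branch (the function and tensor names here are placeholders, not the notebook's actual variables):

```python
import tensorflow as tf

def branch_losses(pred_cmap, gt_cmap, pred_paf, gt_paf):
    """L2 losses for the two branches.

    pred_cmap, gt_cmap: [batch, 44, 44, 22] confidence maps
    pred_paf,  gt_paf:  [batch, 44, 44, 44] part affinity fields
    """
    loss_cmap = tf.reduce_mean(tf.square(pred_cmap - gt_cmap))
    loss_paf = tf.reduce_mean(tf.square(pred_paf - gt_paf))
    return loss_cmap, loss_paf
```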
- Output
I got the following outputs. The first is an image showing the detected keypoints.
The second is an image showing the PAFs as arrows.
The confidence map output has size [44x44x22] and the PAF output has size [44x44x44], so I get 22 confidence maps and 44 PAFs, like this.
And finally, we get these outputs.
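A hedged sketch of how such visualizations could be produced from the raw network output, assuming the 44 PAF channels are interleaved (x, y) pairs (the function and variable names are illustrative, not the notebook's):

```python
import numpy as np
import matplotlib.pyplot as plt

def visualize_output(cmaps, pafs):
    """cmaps: [44, 44, 22] confidence maps; pafs: [44, 44, 44] PAFs,
    assumed to be interleaved (x, y) channel pairs."""
    fig, (ax1, ax2) = plt.subplots(1, 2)

    # Keypoints: take the peak location of each confidence map.
    for c in range(cmaps.shape[-1]):
        y, x = np.unravel_index(np.argmax(cmaps[..., c]), cmaps.shape[:2])
        ax1.scatter(x, y, s=12)
    ax1.invert_yaxis()
    ax1.set_title("confidence map peaks")

    # PAF arrows for the first limb: a quiver plot over the 44x44 grid.
    ys, xs = np.mgrid[0:44, 0:44]
    ax2.quiver(xs, ys, pafs[..., 0], pafs[..., 1])
    ax2.invert_yaxis()
    ax2.set_title("PAF arrows (first limb)")
    plt.show()
```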
- etc
Information about this repository:
-
Demo_Hand.ipynb : Once you have trained the network, you can obtain the PAFs and confidence maps by running a session. You then use this notebook to estimate the actual pose from them with a bipartite matching algorithm, as sketched below.
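A minimal sketch of that bipartite step for a single limb, scoring each candidate pair by a line integral over the limb's PAF channels and then solving the assignment with the Hungarian method (the function name and sampling count are my own, not from the notebook):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_limb(cands_a, cands_b, paf_x, paf_y, n_samples=10):
    """cands_a, cands_b: (N, 2) float arrays of (x, y) peak candidates for
    the limb's two keypoints; paf_x, paf_y: the limb's [44, 44] PAF channels."""
    scores = np.zeros((len(cands_a), len(cands_b)))
    for i, a in enumerate(cands_a):
        for j, b in enumerate(cands_b):
            d = b - a
            u = d / (np.linalg.norm(d) + 1e-8)        # unit limb direction
            # Sample the PAF along the segment from a to b.
            pts = np.linspace(a, b, n_samples).round().astype(int)
            vec = np.stack([paf_x[pts[:, 1], pts[:, 0]],
                            paf_y[pts[:, 1], pts[:, 0]]], axis=1)
            # Average alignment of the field with the limb direction.
            scores[i, j] = (vec @ u).mean()
    rows, cols = linear_sum_assignment(-scores)       # maximize total score
    return list(zip(rows, cols))
```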
-
For_Demo_Hand.ipynb : Support code for Demo_Hand.ipynb.
-
Hand_Data_Processing.ipynb : When you download the MPII Hand Keypoint dataset, the raw annotation data is awkward to use directly, so I made this file to make the annotations easier to work with; a sketch of the idea follows below.
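For illustration, a sketch of the kind of parsing this involves, under the assumption that each annotation is a JSON file containing a "hand_pts" list of 21 [x, y, visibility] entries (the real field names in the download may differ):

```python
import glob
import json
import numpy as np

def load_annotations(ann_dir):
    """Collect all 21-keypoint annotations into one [N, 21, 3] array.
    Assumes one JSON file per image with a 'hand_pts' key; adjust the
    key to whatever the downloaded annotation files actually use."""
    keypoints = []
    for path in sorted(glob.glob(ann_dir + "/*.json")):
        with open(path) as f:
            ann = json.load(f)
        keypoints.append(np.asarray(ann["hand_pts"], dtype=np.float32))
    return np.stack(keypoints)
```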
-
Training_Hand.ipynb : This file is for training.