This repository was archived by the owner on Dec 19, 2023. It is now read-only.

Commit 459f716

Merge branch 'master' into 1-pip-tensorlayer
2 parents 9129db3 + 475fd2d

26 files changed: +2290 -18 lines

.travis.yml

Lines changed: 1 addition & 0 deletions
@@ -64,6 +64,7 @@ install:
 - |
   if [[ -v _DOC_AND_YAPF_TEST ]]; then
     pip install tensorflow==2.0.0-rc1
+    pip install opencv-python
     pip install yapf
     pip install -e .[doc]
   else

docs/images/3d_human_pose_result.jpg

46.8 KB

docs/images/human_pose_points.jpg

26.3 KB

docs/images/yolov4_image_result.png

1.79 MB

docs/images/yolov4_video_result.gif

6.91 MB

docs/index.rst

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ Welcome to TensorLayer

 **Documentation Version:** |release|

-**Jun 2020** `Deep Reinforcement Learning Book Is Coming <http://deepreinforcementlearningbook.org>`__.
+**Jun 2020** `Deep Reinforcement Learning Book Is Released <http://deepreinforcementlearningbook.org>`__.

 **Good News:** We won the **Best Open Source Software Award** `@ACM Multimedia (MM) 2017 <http://www.acmmm.org/2017/mm-2017-awardees/>`_.
docs/modules/visualize.rst

Lines changed: 4 additions & 0 deletions
@@ -19,6 +19,7 @@ to visualize the model, activations etc. Here we provide more functions for data
   frame
   images2d
   tsne_embedding
+  draw_boxes_and_labels_to_image_with_json


 Save and read images
@@ -44,6 +45,9 @@ Save image for object detection
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 .. autofunction:: draw_boxes_and_labels_to_image

+Save image for object detection with json
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. autofunction:: draw_boxes_and_labels_to_image_with_json

 Save image for pose estimation (MPII)
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
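
For context, a minimal usage sketch of the newly documented helper. The call signature and the structure of `json_result` are assumptions inferred from the names above; the `autofunction` entry is authoritative.

```python
# Hedged sketch: draw JSON-style detection results onto an image.
# Assumes tensorlayer.visualize.draw_boxes_and_labels_to_image_with_json
# accepts (image, json_result, class_list); verify against the API docs.
import cv2
import tensorlayer as tl

image = cv2.imread('data/example.jpg')         # hypothetical test image path
with open('model/coco.names') as f:            # class names added in this commit
    class_list = [line.strip() for line in f]

json_result = []  # normally produced by the YOLOv4 detector; empty here
image_out = tl.visualize.draw_boxes_and_labels_to_image_with_json(image, json_result, class_list)
cv2.imwrite('example_out.jpg', image_out)
```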

examples/app_tutorials/README.md

Lines changed: 111 additions & 0 deletions
@@ -0,0 +1,111 @@
# Quick Start

TensorLayer implementation of [YOLOv4: Optimal Speed and Accuracy of Object Detection][1]

TensorLayer implementation of [Optimizing Network Structure for 3D Human Pose Estimation][2] (ICCV 2019)

## YOLOv4

In this demo, YOLOv4 was trained on the COCO 2017 dataset.

### Data

Download the YOLOv4 weights file [yolov4_model.npz][3] (password: `idsz`) and put it under the folder `./examples/app_tutorials/model/`. Your directory structure should look like this:

```
${root}/examples
└── app_tutorials
    └── model
        ├── yolov4_model.npz
        ├── coco.names
        └── yolov4_weights_congfig.txt
```

You can put an image or a video under the folder `./examples/app_tutorials/data/`, like:

```
${root}/examples
└── app_tutorials
    └── data
        └── *.jpg/*.png/*.mp4/...
```
### Demo

1. Image

   Modify `image_path` in `./examples/app_tutorials/tutorial_object_detection_yolov4_image.py` as needed, then run:

   ```bash
   python tutorial_object_detection_yolov4_image.py
   ```

2. Video

   Modify `video_path` in `./examples/app_tutorials/tutorial_object_detection_yolov4_video.py` as needed (a rough sketch of the per-frame loop follows this list), then run:

   ```bash
   python tutorial_object_detection_yolov4_video.py
   ```

3. Output

   - Image

     <p align="center"><img src="../../docs/images/yolov4_image_result.png" width="640"/></p>

   - Video

     <p align="center"><img src="../../docs/images/yolov4_video_result.gif" width="640"/></p>

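The video tutorial's core is a standard OpenCV capture loop. As a rough sketch only (the actual logic lives in `tutorial_object_detection_yolov4_video.py`; `run_detector` below is a hypothetical stand-in for the YOLOv4 forward pass and drawing step):

```python
# Sketch of a per-frame detection loop; assumes opencv-python is installed
# (this commit adds it to the CI install step).
import cv2

def run_detector(frame):
    # Placeholder: the real script runs YOLOv4 here and draws boxes/labels.
    return frame

cap = cv2.VideoCapture('data/example.mp4')  # hypothetical video path
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow('YOLOv4', run_detector(frame))
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```
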
## 3D Human Pose Estimation

### Data

Download the 3D Human Pose Estimation model weights [lcn_model.npz][4] (password: `ec07`) and put them under the folder `./examples/app_tutorials/model/`. Your directory structure should look like this:

```
${root}/examples
└── app_tutorials
    └── model
        ├── lcn_model.npz
        └── pose_weights_config.txt
```

Download the fine-tuned Stacked Hourglass detections and preprocessed H3.6M data ([H36M.rar][5], password: `kw9i`), then uncompress and put them under the folder `./examples/app_tutorials/data/`, like:

```
${root}/examples
└── app_tutorials
    └── data
        ├── h36m_sh_dt_ft.pkl
        ├── h36m_test.pkl
        └── h36m_train.pkl
```

In the three `.pkl` files, each sample is a list of length 34, giving the `[x, y]` coordinates of 17 human pose keypoints:

<p align="center"><img src="../../docs/images/human_pose_points.jpg" width="300"/></p>

If you would like to know how the H3.6M data is prepared, please have a look at [pose_lcn][6].

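To make the 34-element sample layout concrete, a small sketch (assuming each `.pkl` unpickles to a sequence of such 34-element lists; the exact container type may differ):

```python
# Inspect one 2D pose sample from the preprocessed H3.6M data.
import pickle

import numpy as np

with open('data/h36m_test.pkl', 'rb') as f:
    samples = pickle.load(f)

sample = np.asarray(samples[0]).reshape(17, 2)  # 17 keypoints x (x, y)
print(sample)
```
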
### Demo

For a quick demo, simply run

```bash
python tutorial_human_3dpose_estimation_LCN.py
```

This will produce a visualization similar to this:

<p align="center"><img src="../../docs/images/3d_human_pose_result.jpg" width="1500"/></p>

This demo lifts 2D poses into 3D space. Each result list gives the `[x, y, z]` coordinates of 17 human pose keypoints (51 values per sample).

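Correspondingly, each 3D result can be viewed as a `(17, 3)` array. A sketch, where `result` stands in for one output list of the demo script:

```python
# Hypothetical post-processing: one 51-element 3D result -> (17, 3) array.
import numpy as np

result = [0.0] * 51                          # stand-in for one demo output list
pose_3d = np.asarray(result).reshape(17, 3)  # 17 keypoints x (x, y, z)
print(pose_3d.shape)                         # (17, 3)
```
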
# Acknowledgement

YOLOv4 is built on https://github.com/AlexeyAB/darknet and https://github.com/hunglc007/tensorflow-yolov4-tflite.

3D Human Pose Estimation is built on https://github.com/rujiewu/pose_lcn and https://github.com/una-dinosauria/3d-pose-baseline.

We would like to thank the authors for publishing their code.

[1]: https://arxiv.org/abs/2004.10934
[2]: https://openaccess.thecvf.com/content_ICCV_2019/papers/Ci_Optimizing_Network_Structure_for_3D_Human_Pose_Estimation_ICCV_2019_paper.pdf
[3]: https://pan.baidu.com/s/1MC1dmEwpxsdgHO1MZ8fYRQ
[4]: https://pan.baidu.com/s/1HBHWsAfyAlNaavw0iyUmUQ
[5]: https://pan.baidu.com/s/1nA96AgMsvs1sFqkTs7Dfaw
[6]: https://github.com/rujiewu/pose_lcn
examples/app_tutorials/model/coco.names

Lines changed: 80 additions & 0 deletions

@@ -0,0 +1,80 @@
person
bicycle
car
motorbike
aeroplane
bus
train
truck
boat
traffic light
fire hydrant
stop sign
parking meter
bench
bird
cat
dog
horse
sheep
cow
elephant
bear
zebra
giraffe
backpack
umbrella
handbag
tie
suitcase
frisbee
skis
snowboard
sports ball
kite
baseball bat
baseball glove
skateboard
surfboard
tennis racket
bottle
wine glass
cup
fork
knife
spoon
bowl
banana
apple
sandwich
orange
broccoli
carrot
hot dog
pizza
donut
cake
chair
sofa
potted plant
bed
dining table
toilet
tvmonitor
laptop
mouse
remote
keyboard
cell phone
microwave
oven
toaster
sink
refrigerator
book
clock
vase
scissors
teddy bear
hair drier
toothbrush
examples/app_tutorials/model/pose_weights_config.txt

Lines changed: 44 additions & 0 deletions

@@ -0,0 +1,44 @@
linear_model/w1
linear_model/b1
linear_model/batch_normalization/beta
linear_model/batch_normalization/gamma
linear_model/batch_normalization/moving_mean
linear_model/batch_normalization/moving_variance
linear_model/two_linear_0/w2_0
linear_model/two_linear_0/b2_0
linear_model/two_linear_0/batch_normalization10/beta
linear_model/two_linear_0/batch_normalization10/gamma
linear_model/two_linear_0/batch_normalization10/moving_mean
linear_model/two_linear_0/batch_normalization10/moving_variance
linear_model/two_linear_0/w3_0
linear_model/two_linear_0/b3_0
linear_model/two_linear_0/batch_normalization20/beta
linear_model/two_linear_0/batch_normalization20/gamma
linear_model/two_linear_0/batch_normalization20/moving_mean
linear_model/two_linear_0/batch_normalization20/moving_variance
linear_model/two_linear_1/w2_1
linear_model/two_linear_1/b2_1
linear_model/two_linear_1/batch_normalization11/beta
linear_model/two_linear_1/batch_normalization11/gamma
linear_model/two_linear_1/batch_normalization11/moving_mean
linear_model/two_linear_1/batch_normalization11/moving_variance
linear_model/two_linear_1/w3_1
linear_model/two_linear_1/b3_1
linear_model/two_linear_1/batch_normalization21/beta
linear_model/two_linear_1/batch_normalization21/gamma
linear_model/two_linear_1/batch_normalization21/moving_mean
linear_model/two_linear_1/batch_normalization21/moving_variance
linear_model/two_linear_2/w2_2
linear_model/two_linear_2/b2_2
linear_model/two_linear_2/batch_normalization12/beta
linear_model/two_linear_2/batch_normalization12/gamma
linear_model/two_linear_2/batch_normalization12/moving_mean
linear_model/two_linear_2/batch_normalization12/moving_variance
linear_model/two_linear_2/w3_2
linear_model/two_linear_2/b3_2
linear_model/two_linear_2/batch_normalization22/beta
linear_model/two_linear_2/batch_normalization22/gamma
linear_model/two_linear_2/batch_normalization22/moving_mean
linear_model/two_linear_2/batch_normalization22/moving_variance
linear_model/w4
linear_model/b4
