Darknet YOLO architectures implemented in TensorFlow and TensorFlow Lite.
- First, you need Darknet YOLOv3 or YOLOv4 weights to work with. The weights can be either custom trained or pre-trained on the benchmark COCO dataset. To download pre-trained yolov4.weights, click here.
- Besides the weights, a .names file is required so the model has a class-label reference. For the benchmark COCO dataset, the coco.names file is already available here.
git clone https://github.com/patryklaskowski/Convert_Darknet_YOLO_to_TensorFlow.git && \
cd Convert_Darknet_YOLO_to_TensorFlow && \
python3.7 -m venv env && \
source env/bin/activate && \
python3.7 -m pip install -U pip && \
python3.7 -m pip install -r requirements.txt
Darknet YOLOv4 weights to download:

| yolov4.weights (COCO dataset) | yolov4_licence_plate.weights |
| --- | --- |
| Download | Download |

My .weights file is here: ./data/yolov4_licence_plate.weights
A .names file contains all class labels for the specific YOLO weights, where each line represents one class name. The .names file for the default yolov4.weights is already prepared at ./data/classes/coco.names. coco.names has 80 rows, each corresponding to a single label.
| coco.names | licence_plate.names |
| --- | --- |
| Show on path | Download |

My .names file is here: ./data/classes/licence_plate.names
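For reference, a single-class .names file for the licence-plate weights would contain just one line (the exact label text below is an assumption; it must match whatever the weights were trained with):

```
licence_plate
```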
The config file is here: ./core/config.py. Edit only the __C.YOLO.CLASSES value so that it points to the prepared .names file.
By default, __C.YOLO.CLASSES points to the ./data/classes/coco.names file, so if you use the default coco.names there is no need to change anything.
According to my .names file: __C.YOLO.CLASSES = "./data/classes/licence_plate.names"
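For orientation, the relevant part of ./core/config.py looks roughly like this (a sketch; the surrounding options and exact structure may differ in the actual file):

```python
# ./core/config.py (sketch) -- only the CLASSES entry needs to change
from easydict import EasyDict as edict

__C = edict()
cfg = __C

__C.YOLO = edict()
# Point this at your prepared .names file (default: "./data/classes/coco.names")
__C.YOLO.CLASSES = "./data/classes/licence_plate.names"
```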
So far we have:
- .weights on path ./data/yolov4_licence_plate.weights
- .names on path ./data/classes/licence_plate.names
- adjusted __C.YOLO.CLASSES param inside config.py on path ./core/config.py
Your environment is now prepared to perform the conversion. save_model.py does the job.
Required flags:
- --weights: path to the weights, ./data/yolov4_licence_plate.weights
- --output: where to save the output, ./checkpoints/license_plate-416
- --input_size: size of the YOLO input data, 416 (px)
- --model: one of ['yolov3', 'yolov4'], here yolov4
python3.7 save_model.py --weights ./data/yolov4_licence_plate.weights --output ./checkpoints/license_plate-416 --input_size 416 --model yolov4
This creates a new folder, ./checkpoints/license_plate-416, that stores saved_model.pb, the actual TensorFlow model.
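If you want to sanity-check the exported model outside of detect.py, a minimal sketch like the one below should work (the preprocessing shown is an assumption based on the flags above; detect.py handles all of this for you):

```python
# Minimal sketch: load the exported SavedModel and run one image through it.
import cv2
import numpy as np
import tensorflow as tf

model = tf.saved_model.load("./checkpoints/license_plate-416")
infer = model.signatures["serving_default"]

# Preprocess a test image to the export input size (416x416, RGB, scaled to [0, 1]).
image = cv2.imread("./data/images/license_plate.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (416, 416)).astype(np.float32) / 255.0

# The serving signature returns a dict of output tensors (boxes and scores).
outputs = infer(tf.constant(image[np.newaxis, ...]))
print({name: tensor.shape for name, tensor in outputs.items()})
```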
This option is lightweight: it trades some accuracy for speed, which makes it great for edge devices such as mobile phones, Raspberry Pi, and others.
python3.7 save_model.py --weights ./data/yolov4_licence_plate.weights --output ./checkpoints/license_plate-416 --input_size 416 --model yolov4 --framework tflite
python3.7 convert_tflite.py --weights ./checkpoints/license_plate-416 --output ./checkpoints/yolov4_license_plate-416.tflite
The difference is the --framework tflite flag.
This creates a new lightweight file, ./checkpoints/yolov4_license_plate-416.tflite, the actual TensorFlow Lite model.
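As with the SavedModel, you can sanity-check the .tflite file directly with the TensorFlow Lite interpreter. A minimal sketch, assuming the same 416x416 float preprocessing as above:

```python
# Minimal sketch: run the converted TFLite model on one preprocessed image.
import cv2
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="./checkpoints/yolov4_license_plate-416.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Preprocess exactly like the SavedModel example above.
image = cv2.imread("./data/images/license_plate.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (416, 416)).astype(np.float32) / 255.0

interpreter.set_tensor(input_details[0]["index"], image[np.newaxis, ...])
interpreter.invoke()
outputs = [interpreter.get_tensor(d["index"]) for d in output_details]
print([o.shape for o in outputs])
```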
python3.7 detect.py --weights ./checkpoints/license_plate-416 --size 416 --model yolov4 --images ./data/images/license_plate.jpg
To run detection on multiple images, change the --images flag using the following pattern: --images './path/to/image1.jpg, ./path/to/image2.jpg, ./another/path/image.jpg'
python3.7 detect_video.py --weights ./checkpoints/license_plate-416 --size 416 --model yolov4 --video ./data/video/road.mp4 --output ./detections/results.avi
To run predictions from a webcam, set the --video flag argument to 0, as follows: --video 0.
python3.7 detect.py --weights ./checkpoints/yolov4_license_plate-416.tflite --size 416 --model yolov4 --images ./data/images/license_plate.jpg --framework tflite
python3.7 detect_video.py --weights ./checkpoints/yolov4_license_plate-416.tflite --size 416 --model yolov4 --video ./data/video/road.mp4 --output ./detections/results.avi --framework tflite
save_model.py:
--weights: path to weights file
(default: './data/yolov4.weights')
--output: path to output
(default: './checkpoints/yolov4-416')
--[no]tiny: yolov4 or yolov4-tiny
(default: 'False')
--input_size: define input size of export model
(default: 416)
--framework: what framework to use (tf, trt, tflite)
(default: tf)
--model: yolov3 or yolov4
(default: yolov4)
detect.py:
--images: path to input images as a string with images separated by ","
(default: './data/images/kite.jpg')
--output: path to output folder
(default: './detections/')
--[no]tiny: yolov4 or yolov4-tiny
(default: 'False')
--weights: path to weights file
(default: './checkpoints/yolov4-416')
--framework: what framework to use (tf, trt, tflite)
(default: tf)
--model: yolov3 or yolov4
(default: yolov4)
--size: resize images to
(default: 416)
--iou: iou threshold
(default: 0.45)
--score: confidence threshold
(default: 0.25)
detect_video.py:
--video: path to input video (use 0 for webcam)
(default: './data/video/video.mp4')
--output: path to output video (remember to set right codec for given format. e.g. XVID for .avi)
(default: None)
--output_format: codec used in VideoWriter when saving video to file
(default: 'XVID')
--[no]tiny: yolov4 or yolov4-tiny
(default: 'False')
--weights: path to weights file
(default: './checkpoints/yolov4-416')
--framework: what framework to use (tf, trt, tflite)
(default: tf)
--model: yolov3 or yolov4
(default: yolov4)
--size: resize images to
(default: 416)
--iou: iou threshold
(default: 0.45)
--score: confidence threshold
(default: 0.25)