An implementation of instance segmentation based on the YOLOv6 v4.0 code.
| Model | Size | mAP<sup>box</sup><br/>50-95 | mAP<sup>mask</sup><br/>50-95 | Speed<br/>T4 TRT FP16 b1<br/>(FPS) | Params<br/>(M) | FLOPs<br/>(G) |
| --- | --- | --- | --- | --- | --- | --- |
| YOLOv6-N | 640 | 35.3 | 31.2 | 645 | 4.9 | 7.0 |
| YOLOv6-S | 640 | 44.0 | 38.0 | 292 | 19.6 | 27.7 |
| YOLOv6-M | 640 | 48.2 | 41.3 | 148 | 37.1 | 54.3 |
| YOLOv6-L | 640 | 51.1 | 43.7 | 93 | 63.6 | 95.5 |
| YOLOv6-X | 640 | 52.2 | 44.8 | 47 | 119.1 | 175.5 |
- All checkpoints are trained from scratch on COCO for 300 epochs without distillation.
- mAP and speed results are evaluated on the COCO val2017 dataset at an input resolution of 640×640.
- Speed is tested with TensorRT 8.5 on a T4 GPU, without post-processing.
- Refer to the Test speed tutorial to reproduce the speed results of YOLOv6.
- Params and FLOPs of YOLOv6 are estimated on deployed models.
## Install

```shell
git clone https://github.com/meituan/YOLOv6
cd YOLOv6
git checkout yolov6-seg
pip install -r requirements.txt
```
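After installing, you can optionally confirm that PyTorch sees your GPU before starting a training run. This is a minimal sketch, not a script shipped with the repo:

```python
# Optional environment sanity check (not part of YOLOv6 itself).
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```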
## Training

Single GPU

```shell
python tools/train.py --batch 8 --conf configs/yolov6s_seg_finetune.py --data data/coco.yaml --device 0
```

Multi GPUs (DDP mode recommended)

```shell
python -m torch.distributed.launch --nproc_per_node 8 tools/train.py --batch 64 --conf configs/yolov6s_seg_finetune.py --data data/coco.yaml --device 0,1,2,3,4,5,6,7
```
- `fuse_ab`: not supported in the current version.
- `conf`: selects the config file, which specifies the network, optimizer, and hyperparameters. We recommend applying `yolov6n/s/m/l_finetune.py` when training on your custom dataset; a sketch of the config structure appears after the directory tree below.
- `data`: prepare your dataset and specify the dataset paths in `data.yaml` (COCO dataset, YOLO-format labels); an example `data.yaml` is sketched right after the directory tree below.
- Make sure your dataset structure is as follows:
```
├── coco
│   ├── annotations
│   │   ├── instances_train2017.json
│   │   └── instances_val2017.json
│   ├── images
│   │   ├── train2017
│   │   └── val2017
│   ├── labels
│   │   ├── train2017
│   │   └── val2017
│   ├── LICENSE
│   └── README.txt
```
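For reference, a `data.yaml` matching the layout above could look like the sketch below. This is an assumption based on the convention of the detection branch's `data/coco.yaml` (`train`/`val` image paths, `is_coco`, `nc`, `names`); check the file shipped in `data/` for the authoritative keys.

```yaml
# Sketch of a data.yaml for the layout above; verify against the repo's own data/coco.yaml.
train: ../coco/images/train2017  # path to training images
val: ../coco/images/val2017      # path to validation images

is_coco: True  # set True only for the original COCO dataset
nc: 80         # number of classes
names: ['person', 'bicycle', 'car']  # class names, truncated here for brevity
```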
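The `--conf` files are plain Python modules. The skeleton below is illustrative only, with keys and values borrowed from the detection finetune configs; `configs/yolov6s_seg_finetune.py` in this branch is the authoritative reference.

```python
# Illustrative skeleton of a YOLOv6 finetune config; values here are placeholders.
model = dict(
    type='YOLOv6s',
    pretrained='./weights/yolov6s_seg.pt',  # checkpoint to finetune from
    depth_multiple=0.33,
    width_multiple=0.50,
)
solver = dict(
    optim='SGD',
    lr_scheduler='Cosine',
    lr0=0.0032,
    momentum=0.843,
    weight_decay=0.00036,
)
data_aug = dict(
    hsv_h=0.0138,  # HSV augmentation ranges
    hsv_s=0.664,
    hsv_v=0.464,
)
```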
## Evaluation

Reproduce mAP on the COCO val2017 dataset at 640×640 resolution:

```shell
python tools/eval.py --data data/coco.yaml --batch 32 --weights yolov6s_seg.pt --task val
```
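If you need the per-metric COCO numbers outside of `tools/eval.py`, the standard pycocotools workflow can score any COCO-format predictions file. The `predictions.json` path below is a placeholder, not a file the eval tool is guaranteed to emit:

```python
# Score a COCO-format predictions file with pycocotools.
# 'predictions.json' is a placeholder; use whatever results file your eval run produces.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

gt = COCO('coco/annotations/instances_val2017.json')  # ground-truth annotations
dt = gt.loadRes('predictions.json')                   # predictions to score

# Use 'segm' for mask mAP, 'bbox' for box mAP.
coco_eval = COCOeval(gt, dt, iouType='segm')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints the AP/AR table, including mAP 50-95
```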
## Inference

First, download a pretrained model from the YOLOv6 release page, or use your own trained model. Then, run inference with `tools/infer.py`:

```shell
python tools/infer.py --weights yolov6s_seg.pt --source img.jpg / imgdir / video.mp4
```
If you want to run inference on a local camera or a web camera, you can run:

```shell
python tools/infer.py --weights yolov6s_seg.pt --webcam --webcam-addr 0
```

`webcam-addr` can be a local camera id or an rtsp address.
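Camera sources of this kind are typically read through OpenCV's `VideoCapture`, which accepts both integer ids and RTSP URLs, so you can sanity-check an address independently of YOLOv6. A minimal sketch, assuming the standard `opencv-python` package:

```python
# Check that a camera id or RTSP address is readable before passing it to tools/infer.py.
import cv2

addr = 0  # integer id for a local camera, or a string such as 'rtsp://host/stream'
cap = cv2.VideoCapture(addr)
ok, frame = cap.read()
print("opened:", cap.isOpened(), "frame read:", ok)
cap.release()
```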
If you want to evaluate a solo-head model, remember to add the `--issolo` flag.
## Tutorials

## Third-party resources
- YOLOv6 Training with Amazon SageMaker: yolov6-sagemaker from ashwincc
- YOLOv6 NCNN Android app demo: ncnn-android-yolov6 from FeiGeChuanShu
- YOLOv6 ONNXRuntime/MNN/TNN C++: YOLOv6-ORT, YOLOv6-MNN and YOLOv6-TNN from DefTruth
- YOLOv6 TensorRT Python: yolov6-tensorrt-python from Linaom1214
- YOLOv6 web demo on Hugging Face Spaces with Gradio
- Interactive demo on DagsHub with Streamlit
- Tutorial: How to train YOLOv6 on a custom dataset
- YouTube Tutorial: How to train YOLOv6 on a custom dataset
- Blog post: YOLOv6 Object Detection – Paper Explanation and Inference
If you have any questions, feel free to join our WeChat group to discuss and exchange ideas.