YOLOv5n Nano Study #5027
-
Sir, have you tested the speed of YOLOv5n on mobile devices? It seems the Apple M1 chip speeds up model inference, but how does YOLOv5n perform on ordinary mobile devices, like Android? By the way, good job, sir! I like it!
-
UPDATE: The result of the above study is that the YOLOv5n2 model was selected as the best speed-mAP compromise among the four experimental nano models, so the v6.0 release YOLOv5n model is the YOLOv5n2 model from this study. This model was also the simplest to understand, as we only reduce the width scaling from YOLOv5s (0.50) to YOLOv5n (0.25), in line with the scaling of the entire family (0.25 changes between each size).
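For reference, a minimal sketch that prints the scaling fields of the released configs (an illustrative addition, assuming a local clone of ultralytics/yolov5 so the yaml paths below resolve):

```python
# Minimal sketch: compare the compound-scaling fields of YOLOv5n and YOLOv5s.
# Assumes a local clone of ultralytics/yolov5 (paths below are repo-relative).
import yaml

for cfg in ("models/yolov5n.yaml", "models/yolov5s.yaml"):
    with open(cfg) as f:
        d = yaml.safe_load(f)
    # depth_multiple scales the number of module repeats,
    # width_multiple scales channel counts; only the width differs between n and s.
    print(f"{cfg}: depth_multiple={d['depth_multiple']}, width_multiple={d['width_multiple']}")
```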
-
Can the nano model of YOLOv5 v6.0 reach 30 FPS on a 4GB Jetson Nano? The image size is 640×640×3.
-
Thanks for the reply. I mean the FPS of YOLOv5 v6.0 on the NVIDIA Jetson Nano.
-
Hello, since the focus is on mobile use, is there any example of your implementation on an Android device?
-
I have a custom model trained with yolov5l that I'd like to test inference on Android. Is there a way to convert the trained model from yolov5l.pt to yolov5s.pt or another nano model, or do I need to train again with the smaller weights?
-
Is it possible to convert a trained yolov5s to yolov5n?
-
Why are these models not run as .engine or .onnx files?
-
Can you share the config files for yolov5n1, yolov5n2, yolov5n3 and yolov5n4? The website (https://github.com/ultralytics/yolov5/tree/tests/v6.0) is disabled.
-
Hello, sir. Can you tell me how to compute the size of the model? The parameter count of v5n is 1.9M, so why is the size about 4 MB?
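One way to check the arithmetic (a rough sketch, assuming the released checkpoints store FP16 weights after strip_optimizer removes the optimizer state):

```python
# Back-of-the-envelope estimate for YOLOv5n checkpoint size.
# Assumption: final weights are saved in FP16 (2 bytes per parameter) after
# strip_optimizer, ignoring the small pickle/metadata overhead in the .pt file.
params = 1.9e6           # YOLOv5n parameter count
bytes_per_param = 2      # FP16
size_mb = params * bytes_per_param / 1e6
print(f"~{size_mb:.1f} MB")  # ~3.8 MB, close to the ~4 MB file size observed
```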
-
Hi, sir! I have run into trouble again. After training, it generates a results.txt. There are lr0, lr1 and lr2 in results.txt. I guess lr is the learning rate, but I don't know why there are three of them. Thank you sincerely!
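For context, a minimal sketch of the pattern behind the three values; the assumption here is that it mirrors YOLOv5's train.py, which builds the SGD optimizer from three parameter groups and logs one learning rate per group (lr0, lr1, lr2):

```python
# Sketch of a three-parameter-group optimizer, one logged learning rate per group.
# Assumption: this mirrors the grouping YOLOv5's train.py uses (BN weights without
# weight decay, other weights with decay, and biases), hence lr0/lr1/lr2 in the logs.
import torch
import torch.nn as nn

def build_optimizer(model: nn.Module, lr=0.01, momentum=0.937, weight_decay=0.0005):
    bn_weights, weights, biases = [], [], []
    for m in model.modules():
        if hasattr(m, "bias") and isinstance(m.bias, nn.Parameter):
            biases.append(m.bias)
        if isinstance(m, nn.BatchNorm2d):
            bn_weights.append(m.weight)                         # group 0: no weight decay
        elif hasattr(m, "weight") and isinstance(m.weight, nn.Parameter):
            weights.append(m.weight)                            # group 1: weight decay applied
    return torch.optim.SGD(
        [
            {"params": bn_weights},                             # logged as lr0
            {"params": weights, "weight_decay": weight_decay},  # logged as lr1
            {"params": biases},                                 # logged as lr2
        ],
        lr=lr, momentum=momentum, nesterov=True,
    )
```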
-
Is it possible to get the architecture of YOLOv5n and a PyTorch implementation of the architecture from scratch?
-
Thank you so much for the quick reply, Glenn Jocher, and the support. Thanks again.
On Mon, Jul 11, 2022 at 3:10 AM Glenn Jocher wrote:
@jjjonathan14 (https://github.com/jjjonathan14) 👋 Hello! Thanks for asking about YOLOv5 🚀 architecture visualization. We've made visualizing YOLOv5 🚀 architectures super easy. There are 3 main ways:

model.yaml
Each model has a corresponding yaml file that displays the model architecture. Here is YOLOv5s, defined by yolov5s.yaml:
https://github.com/ultralytics/yolov5/blob/1a3ecb8b386115fd22129eaf0760157b161efac7/models/yolov5s.yaml#L12-L48

TensorBoard Graph
Simply start training a model, and then view the TensorBoard Graph for an interactive view of the model architecture. This example shows YOLOv5s viewed in our Notebook (https://github.com/ultralytics/yolov5/blob/master/tutorial.ipynb), which you can open in Colab (https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb) or Kaggle (https://www.kaggle.com/ultralytics/yolov5):
```
# Tensorboard
%load_ext tensorboard
%tensorboard --logdir runs/train

# Train YOLOv5s on COCO128 for 3 epochs
python train.py --weights yolov5s.pt --epochs 3
```
[Screenshot: TensorBoard graph view of YOLOv5s, https://user-images.githubusercontent.com/26833433/114286928-349bd600-9a63-11eb-941f-7139ee6cd602.png]
Netron viewer
Use https://netron.app to view exported ONNX models:
```
python export.py --weights yolov5s.pt --include onnx --simplify
```
[Screenshot: exported YOLOv5s ONNX model viewed in Netron, https://user-images.githubusercontent.com/26833433/165999628-0d6095ce-58dd-49b7-a04e-25d0995e5493.png]
Good luck 🍀 and let us know if you have any other questions!
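As an additional option not mentioned in the reply above, a minimal sketch (assuming the yolov5n torch.hub entry point available since the v6.0 release) that loads the model and prints its module tree, which directly shows the PyTorch implementation:

```python
# Sketch: inspect the YOLOv5n architecture directly in PyTorch via torch.hub.
# Assumes internet access and the 'yolov5n' hub entry point (added in the v6.0 release).
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5n", pretrained=True)
print(model)                                                     # full module tree
print(sum(p.numel() for p in model.parameters()), "parameters")  # ~1.9M for YOLOv5n
```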
-
Starting this discussion to track smaller-model experiments, as several users have expressed interest. Experiments are in https://github.com/ultralytics/yolov5/tree/tests/v6.0. Note: only inference speeds are shown; NMS is not included in the charts below.
[Benchmark charts and tables for the experimental nano models at three settings: Val: V100 --batch 32, Val: V100 --batch 1, and Val: V100 --batch 1 --device cpu. Table columns: size (pixels), mAP 0.5:0.95, mAP 0.5, speed V100/CPU (ms), params (M), FLOPs @640 (B).]
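For anyone reproducing these settings, a minimal sketch using the val.py entry point (assumptions on my part: kwarg names follow the v6.0-era val.run() signature, and this is run from a local clone of the repo with COCO configured):

```python
# Sketch: reproduce one of the benchmark settings above with YOLOv5's val.py entry point.
# Assumes a local clone of ultralytics/yolov5 with the COCO dataset set up.
import val  # ultralytics/yolov5 val.py

val.run(
    data="data/coco.yaml",
    weights="yolov5n.pt",
    imgsz=640,
    batch_size=32,   # the --batch 32 setting; use batch_size=1 (and device="cpu") for the others
    device="0",      # V100 GPU index
    half=True,       # FP16 inference for the GPU speed runs (assumption)
)
```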