Search before asking
I have searched the YOLOv5 issues and discussions and found no similar questions.
Question
Hi everyone,
We’ve encountered a noticeable discrepancy in the performance metrics when training the same model (yolov8n.pt) on the same dataset but with different hardware and similar training parameters. The results, specifically the mAP (50-95), vary significantly across different setups.
Base Model: yolov8n.pt
Training Parameters:
| No. | Hardware | Epochs | Batch Size | mAP (50-95) |
|-----|----------------------|--------|------------|-------------|
| 1 | A6000 (48 GB VRAM) | 100 | 16 | 0.961 |
| 2 | 4090 (24 GB VRAM) | 100 | 12 | 0.93 |
| 3 | 4090 (24 GB VRAM) | 150 | 12 | 0.92 |
| 4 | L20 (48 GB VRAM) | 100 | 16 | 0.976 |
We’ve also tried enabling and disabling `cos_lr`, but it seems to have little to no effect on the outcome.
Could anyone shed light on what might be causing this inconsistency? Additionally, what strategies could we adopt to achieve better performance on more limited hardware setups?
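For reference, a minimal sketch of the kind of training call behind the runs above (the dataset YAML path and image size are placeholders, not our exact configuration):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # same base checkpoint on every machine
model.train(
    data="dataset.yaml",  # placeholder for our dataset config
    epochs=100,           # 150 for run no. 3
    batch=16,             # 12 on the 24 GB 4090 runs
    imgsz=640,            # assumed default image size
    cos_lr=False,         # also tried True; little effect either way
)
```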
Thank you in advance for your help!
Additional
No response
👋 Hello @timiil, thank you for bringing this to our attention! 🚀 This is an automated response to help guide you, and one of our Ultralytics engineers will assist you soon.
Since you are experiencing variations in results, could you provide a minimum reproducible example to help us better understand and debug the issue? A consistent setup across different hardware is crucial, and a reproducible example may highlight nuances we would otherwise miss.
@timiil variations in training results can often be attributed to differences in hardware architecture, which may affect computation precision and optimization. To mitigate these discrepancies, ensure consistent software environments across setups, including CUDA and PyTorch versions. Additionally, consider using mixed precision training (`amp=True`) to improve throughput on limited hardware.
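As a rough illustration of these suggestions (not the reporter's exact setup), the sketch below logs the software stack and pins the seed, determinism, and AMP settings through the Ultralytics Python API; `dataset.yaml` is again a placeholder:

```python
import torch
import ultralytics
from ultralytics import YOLO

# Record the software stack so runs on different GPUs can be compared fairly.
print(f"ultralytics {ultralytics.__version__}, "
      f"torch {torch.__version__}, CUDA {torch.version.cuda}")

model = YOLO("yolov8n.pt")
model.train(
    data="dataset.yaml",   # placeholder dataset config
    epochs=100,
    batch=16,              # lower this (e.g. 12) on GPUs with less VRAM
    seed=0,                # fixed seed so runs are comparable
    deterministic=True,    # prefer deterministic kernels where available
    amp=True,              # automatic mixed precision
)
```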