Why is the detection result of the PyTorch model different from the ONNX model? It seems different deep learning frameworks produce different outputs #6586
👋 Hello @zoubaihan, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution. If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you. If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available. For business inquiries or professional support requests please visit https://ultralytics.com or email support@ultralytics.com.

Requirements
Python>=3.7.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled).

Status
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@zoubaihan anecdotal results are not very meaningful. We have a benchmarking branch that validates and profiles exported models on a validation set. All export formats that are working here on CPU return near-identical mAPs: https://github.com/ultralytics/yolov5/blob/updates/benchmarks/utils/benchmarks.py
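For anyone who wants a quick sanity check outside the benchmark script, below is a minimal sketch that runs one tensor through both backends and compares raw outputs numerically; the filenames and the torch.hub entrypoint here are illustrative assumptions, not part of the benchmark branch:

import numpy as np
import onnxruntime as ort
import torch

x = np.random.rand(1, 3, 640, 640).astype(np.float32)   # dummy letterboxed input

# ONNX Runtime inference on the exported model (filename assumed)
sess = ort.InferenceSession('yolov5m.onnx')
onnx_out = sess.run(None, {sess.get_inputs()[0].name: x})[0]

# Raw PyTorch inference; autoshape=False returns the bare model, whose
# eval-mode output [0] corresponds to the exported inference head
pt_model = torch.hub.load('ultralytics/yolov5', 'yolov5m', autoshape=False)
pt_model.eval()
with torch.no_grad():
    pt_out = pt_model(torch.from_numpy(x))[0].numpy()

print('max abs diff:', np.abs(pt_out - onnx_out).max())  # small (~1e-4) when exports match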
Hi @glenn-jocher, thanks for the information! I've just faced the same problem. Do you know what mechanism in ONNX could lower the mAP (although it's not by much)? I used to consider PyTorch-to-ONNX conversion lossless.
According to my experiments, almost all deep learning model conversion tools on GitHub cause changes in model performance. I don't know why their authors insist the converted models all perform the same... Many people have raised similar issues on GitHub, hoping to attract attention! Here is the collection of model converters: https://github.com/ysh329/deep-learning-model-convertor
@zoubaihan not sure what's up with the thumbs down. Is benchmarking every single export format for mAP and speed a bad thing?
@knwng the main difference I can think of is that PyTorch models are capable of rectangular inference while the export formats are fixed at 640x640, so there are padding differences in the images that are passed to the models. Also note that benchmarks.py runs on COCO128 by default for speed. If you truly want to know COCO mAP you can simply run it on the full COCO dataset instead.
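To make the padding difference concrete, here is a minimal sketch of letterbox preprocessing, loosely modeled on YOLOv5's utils.augmentations.letterbox (simplified; treat the exact details as illustrative rather than the library's implementation):

import cv2
import numpy as np

def letterbox(im, new_shape=(640, 640), stride=32, auto=True):
    # Resize to fit new_shape while preserving aspect ratio, then pad with gray
    h, w = im.shape[:2]
    r = min(new_shape[0] / h, new_shape[1] / w)      # scale ratio
    new_unpad = (round(w * r), round(h * r))         # (width, height) after resize
    dw = new_shape[1] - new_unpad[0]                 # total horizontal padding
    dh = new_shape[0] - new_unpad[1]                 # total vertical padding
    if auto:                                         # rectangular: only pad to a stride multiple
        dw, dh = dw % stride, dh % stride
    dw, dh = dw / 2, dh / 2                          # split padding between both sides
    im = cv2.resize(im, new_unpad)
    top, bottom = round(dh - 0.1), round(dh + 0.1)
    left, right = round(dw - 0.1), round(dw + 0.1)
    return cv2.copyMakeBorder(im, top, bottom, left, right,
                              cv2.BORDER_CONSTANT, value=(114, 114, 114))

img = np.zeros((720, 1280, 3), np.uint8)             # a 16:9 frame
print(letterbox(img, auto=True).shape)               # (384, 640, 3): rectangular, minimal padding
print(letterbox(img, auto=False).shape)              # (640, 640, 3): square, heavy padding

The extra 256 rows of gray padding in the square case are what the fixed-shape exports see and the rectangular PyTorch path does not.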
@zoubaihan @knwng good news 🙂!! I've confirmed that YOLOv5 produces identical exports. The only difference from PyTorch inference, as I suspected earlier, was rectangular inference; this was the cause of the mAP difference. I've updated benchmarks to force square inference for all formats, and now mAP is identical. See PR #6613 for details.

[Benchmark result tables were attached for three environments: Colab++ High-RAM CPU, macOS Intel CPU (CoreML-capable), and Ultralytics Hyperplane EPYC Milan AMD CPU.]
@glenn-jocher if we export a PyTorch model to CoreML, the exported model won't be able to run rectangular inference? Wouldn't that impact the performance a lot, as you showed here?
@abdullahabid10 all formats support export at any size and shape.
@glenn-jocher I'm referring to your comment above, where you said that only PyTorch models are capable of rectangular inference. I just want to know: if I export to CoreML, will the model be unable to run rectangular inference? For example, if I export to CoreML with the --imgsz parameter set to [320, 320], will the model be unable to take advantage of rectangular inference by letterboxing images to 320x192?
@abdullahabid10 that's an outdated comment. As I already mentioned above, all formats support export at any size and shape.
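For illustration, a fixed rectangular CoreML export might look like this (a sketch assuming export.py's --imgsz takes height then width, and that detect.py picks the backend from the file suffix):

python export.py --weights yolov5m.pt --include coreml --imgsz 192 320  # height width
python detect.py --weights yolov5m.mlmodel --imgsz 192 320  # inference at the same fixed shape

The exported model then always expects 192x320 letterboxed inputs; the per-image dynamic rectangle that PyTorch inference uses is not available.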
Search before asking
Question
Hello, here is a question that confuses me a lot. I downloaded the models yolov5m.onnx and yolov5m.pt, and the test commands are as follows:
ONNX model tested on COCO:
python detect.py --source /datasets/coco128/images/train2017 --weights /data/zoubaihan/pyproject/yolov5/yolov5m.onnx
One of the images detected by the ONNX model is as follows:
PyTorch model tested on COCO:
python detect.py --source /datasets/coco128/images/train2017 --weights /data/zoubaihan/pyproject/yolov5/yolov5m.pt
While the same input image detected by the PyTorch model is as follows:
Jesus! Why do the same model and the same weights output different results under different deep learning frameworks?
I have also tried converting some other deep learning models between different frameworks, but the outputs are often different, or even very different. Could you please explain why this is?
I also tried using export.py to convert the model from PyTorch to ONNX, but the results are still not equal. The command is as follows:
python export.py --weights yolov5m.pt --include onnx
But another confusing problem came up: the file size of the output ONNX model is not equal to that of the ONNX model I downloaded from your GitHub releases. Was the yolov5m.onnx model in the release really converted by running export.py?
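For anyone comparing exported files, byte size alone is a weak equality test: export options such as --opset, --simplify, and --half change the serialized graph without necessarily changing accuracy. A small sketch (filename assumed) for inspecting how each model was produced:

import onnx

m = onnx.load('yolov5m.onnx')
print(m.producer_name, m.producer_version)   # which exporter wrote the file
print([o.version for o in m.opset_import])   # target opset(s)
print(len(m.graph.node))                     # node count drops after graph simplification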
Additional
I tried export.py and other conversion tools, but none of them work well; their output results are not the same.
Does anybody know why?
Here is a collection of model converters:
https://github.com/ysh329/deep-learning-model-convertor