Inconsistent evaluation results #2594
I also ran into this problem and later found that it comes from the different ways of computing IoU. mmseg computes IoU over the whole dataset: it accumulates the per-class intersection and union across all images and divides once at the end, rather than averaging the per-image IoUs.
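A minimal sketch of the two computations (editor's illustration, not mmseg code; the inputs are assumed to be per-image boolean masks for a single class):

```python
import numpy as np

def dataset_iou(preds, gts):
    """mmseg-style: accumulate intersection and union over all
    images, then divide once at the end."""
    inter = sum(np.logical_and(p, g).sum() for p, g in zip(preds, gts))
    union = sum(np.logical_or(p, g).sum() for p, g in zip(preds, gts))
    return inter / union

def mean_per_image_iou(preds, gts):
    """Per-image average: compute IoU for each image separately,
    then take the mean over images."""
    ious = [np.logical_and(p, g).sum() / np.logical_or(p, g).sum()
            for p, g in zip(preds, gts)]
    return float(np.mean(ious))

# Two same-size images can still diverge if their unions differ:
# image A: inter=90, union=100 (IoU 0.9); image B: inter=1, union=10 (IoU 0.1)
# dataset_iou        -> 91/110 ≈ 0.827
# mean_per_image_iou -> (0.9 + 0.1) / 2 = 0.5
```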
If there are 1000 images, with 600 used for training, 100 for validation, and 300 for testing, what does "for the whole dataset" mean here? Does it mean dividing by the total number of images rather than the number of images in each split? I can't follow. Could you please explain it in detail, and share a suitable solution? Thank you.
But before the metrics are calculated, the images are resized to the same size (see the code in the config files: img_scale=(2048, 512)). In that situation, the two ways you mention above should lead to the same results, so I don't understand.
In fact, the two approaches yield different results, or else we are not talking about the same issue. For example, two same-size images with per-image IoUs of 90/100 = 0.9 and 1/10 = 0.1 average to 0.5, while the dataset-level IoU is 91/110 ≈ 0.83.
When all the dataset images are the same size, the results of the two methods you mention should be the same. Since the code already resizes the images, I don't think that is the reason for the different results.
@xiexinch Could you help me figure out where the problem is? I'm still a bit confused.
@xiexinch @Rowan-L I think I found the problem: Resize is followed by the parameter keep_ratio=true. When it is true, the img_scale passed to Resize is not the final output size but a max/min range (see https://zhuanlan.zhihu.com/p/381117525). But this raises a question: if that is the case, are the test results still correct? And if they are, how can I get the resized prediction images instead of images at the original size?
keep_ratio=True just keeps the aspect ratio the same as before the resize. The model's prediction will be resized back to the original size of the image. If you want to test on the original image, just modify the test pipeline as follows:
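The code snippet was lost from this comment. Below is a sketch of the mmseg 0.x test pipeline it presumably refers to (the img_scale and img_norm_cfg values are assumptions taken from common configs, not the user's actual file); one option for testing at the original image size is to drop the Resize step:

```python
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(2048, 512),
        flip=False,
        transforms=[
            # Removing (or commenting out) the Resize step keeps each image
            # at its original size, so predictions match the raw ground truth.
            # dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
```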
With the same config file, and regardless of the keep_ratio=True operation, the result of evaluating with tools/test.py should match the IoU computed between the saved predicted images and the ground truth, so why is it not the same?
Could you share the specific mIoU values that differ?
The IoU obtained with the test.py file is 0.7593. The result obtained by running the model, saving the predicted images (in PNG format), and comparing them with the ground truth is 0.6960. There should be no randomness in the model, since test.py gives the same result across several runs.
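For reference, an offline comparison only matches tools/test.py if it accumulates intersections and unions over the whole test set the way mmseg does. A minimal sketch (the path lists, num_classes, and ignore_index are assumptions about the dataset, and the mean below is simplified relative to mmseg's NaN handling for absent classes):

```python
import numpy as np
from PIL import Image

def evaluate(pred_paths, gt_paths, num_classes, ignore_index=255):
    # Accumulate per-class intersection and union over the whole test set,
    # as mmseg does, instead of averaging per-image IoUs.
    inter = np.zeros(num_classes, dtype=np.int64)
    union = np.zeros(num_classes, dtype=np.int64)
    for pp, gp in zip(pred_paths, gt_paths):
        pred = np.array(Image.open(pp))
        gt = np.array(Image.open(gp))
        assert pred.shape == gt.shape, 'prediction/GT resolution mismatch'
        valid = gt != ignore_index
        pred, gt = pred[valid], gt[valid]
        for c in range(num_classes):
            p, g = pred == c, gt == c
            inter[c] += np.logical_and(p, g).sum()
            union[c] += np.logical_or(p, g).sum()
    iou = inter / np.maximum(union, 1)
    return iou, iou.mean()
```

The assert also catches the keep_ratio issue discussed above: if the saved PNGs are not at the same resolution as the ground truth, the two numbers cannot match.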
I use "tools/test.py --eval" to test the test set of the results, and I also save the predicted pictures after the test, then they are compared with the ground truth to get the iou, fscore result. Two result is not consistent, if my config file on the test set is wrong, which due to the different result, the config file is as follows.