Does anyone know the validation or test accuracy of this model?
I implemented the paper in Keras and used 10% of the images from each category as validation data. The model performs very well on the training data, but the validation accuracy fluctuates wildly, from roughly 10% to 99%, which is quite strange. A rough sketch of the split I used is below.
The paper only reports accuracy over all images, which may be close to the training accuracy, but high training accuracy can simply be the result of overfitting.
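For reference, this is a minimal sketch of the kind of split I mean: hold out 10% of the images in each category, ignoring which patient they came from. File names and the data loading are placeholders, not the actual pipeline.

```python
# Naive image-level split: ~10% of every class goes to validation,
# with no regard for which patient each image belongs to.
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical data loading: X holds the images, y the integer class labels.
X = np.load("images.npy")
y = np.load("labels.npy")

# stratify=y keeps the per-category proportions in the validation set.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.10, stratify=y, random_state=42
)
```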
Hi, a bit of a late answer, but for whoever might read this: if you just take 10% of the images, the same patients end up in both the training and validation sets, so the performance estimate is not valid. The split has to be made at the patient level, so that all images from one patient stay on the same side.
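A sketch of a patient-wise split, assuming you have a per-image array of patient IDs (the `patient_ids.npy` file and its name are placeholders). `GroupShuffleSplit` keeps every image of a given patient on one side of the split, which avoids the leakage that inflates the image-level validation score.

```python
# Patient-wise split: group images by patient ID so no patient appears
# in both the training and the validation set.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

X = np.load("images.npy")
y = np.load("labels.npy")
patient_ids = np.load("patient_ids.npy")  # one patient ID per image (hypothetical)

splitter = GroupShuffleSplit(n_splits=1, test_size=0.10, random_state=42)
train_idx, val_idx = next(splitter.split(X, y, groups=patient_ids))

X_train, X_val = X[train_idx], X[val_idx]
y_train, y_val = y[train_idx], y[val_idx]
```

With this kind of split, the validation accuracy measures generalization to unseen patients, which is usually the quantity of interest, and it typically comes out lower (and more stable) than the image-level number.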