
Unfair comparison #7

Closed
xiaobiaodu opened this issue Jul 29, 2022 · 2 comments

Comments

@xiaobiaodu

Your IAT is trained on 689 LOL training images, but the original LOL dataset contains only 485 training images.
Table 1 reports results on the LOL dataset, and the other methods, such as MAXIM, are trained on the original dataset.
More training data generally leads to better results, so it appears the authors extended the training set with external data to improve their numbers.
The authors should give a reasonable explanation for why they conducted such an unfair comparison.

@xiaobiaodu xiaobiaodu changed the title Unfar comparison Unfair comparison Jul 29, 2022
@cuiziteng
Owner

cuiziteng commented Jul 31, 2022

Thanks very much for your attention. We have updated our arXiv version, and the updated results on LOL-V1 are 23.38 and 0.809. The new arXiv version will be uploaded this week. I'll release the training and test code for the LOL-V1 dataset as soon as possible. If you have any other questions, please drop them here~


@xiaobiaodu
Author

Good job. Thanks for your reply!
