A question about CAM attention map #5
We conducted extensive experiments on the location (before the Enhance and Degrade modules) and the number (whether the Enhance module and Degrade module share the same attention map or not) of the attention map IA, and found that the design adopted in the paper performs best. We attribute this to two factors.
I mean you can use the same ME. But given different images (unpaired, like IL and IH), the attention maps should be generated by the same ME while using different inputs.
In this paper, ME is trained to depict the low-light distribution of the degraded input. If we instead input a clean image and degrade it following the guidance of ME, then ME needs to be able to imagine possible dark regions based on normal-light images, which also increases training difficulty. At least, our experiments show that this setting performs a little weaker than the setting we finally adopted.
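To make the "shared weights, different inputs" idea concrete, here is a minimal PyTorch-style sketch. All module names (MaskExtractor, the layer sizes, the variable names A_L, A_H) are hypothetical illustrations, not the authors' actual code; it only shows that one ME, applied separately to unpaired IL and IH, yields a distinct attention map for each branch.

```python
import torch
import torch.nn as nn

class MaskExtractor(nn.Module):
    """Hypothetical ME: predicts a single-channel attention map from an RGB image."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),  # map values in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

# One shared extractor, two unpaired inputs.
me = MaskExtractor()
I_L = torch.rand(1, 3, 256, 256)  # low-light image
I_H = torch.rand(1, 3, 256, 256)  # normal-light image (unpaired with I_L)

A_L = me(I_L)  # attention map used with the low-light branch (Enhance)
A_H = me(I_H)  # attention map used with the normal-light branch (Degrade)
```

So the maps are not literally the same tensor: the weights of ME are shared, but each image produces its own map from its own content.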
I understand now. Thank you for your time!
In this picture, why do IL and IH share the same attention map generated by the Mask Extractor when IL and IH are unpaired?