First, thanks for the great ideas in the paper. One question about the LPIPS loss; in the paper you said:
To capture fine details and further improve the realism, we follow the Learned Perceptual Image Patch Similarity (LPIPS) loss in [Zhang et al., 2018] and adversarial objective in [Choi et al., 2020].
My understanding is that LPIPS measures the difference between two images, e.g. LPIPS(x, y).
In the loss function, which two images is this LPIPS term computed between?
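For concreteness, here is a minimal sketch of how I imagine the term being computed with the `lpips` PyTorch package (richzhang/PerceptualSimilarity). The choice of inputs is an assumption on my part and is exactly what I am asking about:

```python
# Minimal sketch (not the authors' code): an LPIPS term via the `lpips` package.
# Assumption: it compares the swapped result with the target image.
import torch
import lpips

loss_fn = lpips.LPIPS(net='vgg')  # pretrained VGG-based LPIPS metric

# dummy tensors standing in for the generated result and the target,
# shaped (N, 3, H, W) and scaled to [-1, 1] as the package expects
result = torch.rand(1, 3, 256, 256) * 2 - 1
target = torch.rand(1, 3, 256, 256) * 2 - 1

l_lpips = loss_fn(result, target).mean()
print(l_lpips.item())
```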
Thanks.
The cycle loss is a supplement of pixel supervision and can help generate high-fidelity results: L_cyc = ||I_t − G(I_r, I_t)||_1
Here you consider that G(I_r, I_t) should be the same as I_t.
My question is: I_r has I_s's face and I_t's attributes, and I_t of course has the target face and target attributes, so G(I_r, I_t) should have the source face and target attributes, which should be similar to I_r, not I_t.
Should it be G(I_t, I_r)? That would give an image with the target face and target attributes, so the result should be the same as the target I_t.
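To make the two readings concrete, here is a small sketch, assuming G(src, tgt) takes identity from `src` and attributes from `tgt`; G, I_s, and I_t are placeholders, not the repo's actual API:

```python
import torch
import torch.nn.functional as F

# Placeholder convention: G(src, tgt) keeps the identity of `src` and the
# attributes (pose, expression, background) of `tgt`.
def cycle_losses(G, I_s, I_t):
    I_r = G(I_s, I_t)  # swapped result: source face, target attributes

    # cycle term as written in the paper: compare G(I_r, I_t) with I_t
    cyc_paper = F.l1_loss(G(I_r, I_t), I_t)

    # the variant I am suggesting: G(I_t, I_r) should recover I_t
    # (target face from I_t, target attributes carried by I_r)
    cyc_alt = F.l1_loss(G(I_t, I_r), I_t)

    return cyc_paper, cyc_alt
```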