Thank you for your great work. I have a question about the loss function: in the paper the domain_s label is 1 and the domain_t label is 0, but in the code the domain labels are 0 and 1. In addition, the loss function in the paper seems contradictory.
I think the domain_s label = 1 and domain_t label = 0 mentioned in the paper refer to the backbone (the generator), because the two alignment networks (the discriminators) are connected to the backbone through a gradient reversal layer. From the backbone's point of view the optimization direction is minimization, and a source-domain prediction of 1 is what we want the backbone to achieve; the discriminator is trained in the reversed optimization direction, so it is effectively trained with flipped labels, the same as in GAN training (if I understand adversarial learning correctly).
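To make that concrete, here is a minimal sketch (assuming PyTorch; `GradReverse` and `DomainClassifier` are illustrative names, not the repository's actual classes) of why a single fixed set of domain labels in code is enough: the gradient reversal layer negates the gradient flowing into the backbone, so one BCE loss with source=1 / target=0 (or the opposite convention) trains both the discriminator and the backbone adversarially.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient w.r.t. x is reversed; lambd itself gets no gradient.
        return -ctx.lambd * grad_output, None


class DomainClassifier(nn.Module):
    """Toy domain discriminator sitting on top of backbone features."""

    def __init__(self, in_dim=256):
        super().__init__()
        self.fc = nn.Linear(in_dim, 1)

    def forward(self, feat, lambd=1.0):
        feat = GradReverse.apply(feat, lambd)  # backbone receives the negated gradient
        return self.fc(feat)


# Usage: one BCE loss with fixed labels (source=1, target=0 here; flipping both
# labels only changes the sign convention, not the optimization).
disc = DomainClassifier()
feat_s = torch.randn(4, 256, requires_grad=True)  # stands in for source backbone features
feat_t = torch.randn(4, 256, requires_grad=True)  # stands in for target backbone features
logit_s, logit_t = disc(feat_s), disc(feat_t)
loss_d = F.binary_cross_entropy_with_logits(logit_s, torch.ones_like(logit_s)) \
       + F.binary_cross_entropy_with_logits(logit_t, torch.zeros_like(logit_t))
loss_d.backward()
# disc.fc gets the ordinary gradient (the discriminator minimizes loss_d), while
# feat_s.grad / feat_t.grad carry the reversed gradient, so the backbone
# effectively maximizes loss_d -- no explicit label flip is needed in the code.
```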
I read the paper yesterday and I think I found the same issue. Formulations (8) and (9) in the paper are inverted if the domain_s label is 1 and the domain_t label is 0. I think the problem is evident.
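For reference, the standard domain-adversarial objective (a generic DANN-style sketch, not a quotation of the paper's exact equations (8) and (9)) with source label 1 and target label 0 would be

$$
\mathcal{L}_{adv} = -\sum_{i \in \mathcal{S}} \log D(f_i) \;-\; \sum_{j \in \mathcal{T}} \log\bigl(1 - D(f_j)\bigr),
$$

where $D$ is the domain classifier and $f_i$ are backbone features: $D$ minimizes this loss while the backbone maximizes it through the gradient reversal layer. If the paper attaches the $\log D$ and $\log(1-D)$ terms to the opposite domains in (8) and (9), that would match the label inversion you both noticed.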