
domain labels are different from paper #39

Open
jhzhang19 opened this issue Jul 23, 2021 · 2 comments
Comments

@jhzhang19

Thank you for your great work. I found a question about the loss function: the domain_s label is 1 and the domain_t label is 0 in the paper, but in the code the domain labels are 0 and 1. In addition, the loss function in the paper seems contradictory.


Shuntw6096 commented Sep 9, 2021

I think the convention the paper mentions (domain_s label is 1, domain_t label is 0) applies to the backbone (generator). The two alignment networks (discriminators) are connected to the backbone through a gradient reversal layer, and optimization always moves toward a minimum. So the backbone is trained as if the source-domain label were 1, while the discriminators are effectively optimized in the reverse direction, which is equivalent to training them with flipped labels, the same as in GAN training (if I understand adversarial learning correctly).
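The gradient-reversal argument above can be sketched numerically. This is a minimal toy example with a made-up scalar discriminator (the names `f`, `w`, `p` are illustration-only and do not come from the repository); it only shows the sign/direction claim, not the actual network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy scalar setup (made up for illustration): the backbone emits a
# feature f, and a one-weight discriminator predicts P(domain = source).
f = 2.0
w = 0.5
p = sigmoid(w * f)

# Gradient of the binary cross-entropy loss w.r.t. the feature:
# dL/df = (p - y) * w for label y.
grad_label1 = (p - 1.0) * w   # source sample labeled 1 (paper's convention)
grad_label0 = (p - 0.0) * w   # same sample labeled 0 (flipped convention)

# A gradient reversal layer negates the gradient before it reaches the
# backbone, so the backbone sees -grad_label1. That points in the same
# direction as training directly against the flipped label, which is why
# the labels in the code can look inverted relative to the paper.
assert np.sign(-grad_label1) == np.sign(grad_label0)
```

The magnitudes of the two gradients differ in general; the point is only that reversing the gradient for one label convention pushes the backbone in the same direction as using the opposite convention without reversal.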

@a21401624

> Thank you for your great work. I found a question about the loss function: the domain_s label is 1 and the domain_t label is 0 in the paper, but in the code the domain labels are 0 and 1. In addition, the loss function in the paper seems contradictory.

I read the paper yesterday and I think I found the same issue. Formulations (8) and (9) in the paper are inverted if the domain_s label is 1 and the domain_t label is 0. I think the problem is evident.
