
Do we need to scale word embeddings to [-1, 1]? #49

Open

tj-zhu opened this issue Nov 19, 2022 · 3 comments

Comments


tj-zhu commented Nov 19, 2022

Hi there, thank you very much for providing the code!

I am new to diffusion models, so I apologize in advance if this is a dumb question.

In this line, it seems we are getting the word embeddings and adding noise directly to them, without making sure the word embeddings are within [-1, 1].

In DDPM, we need to scale images to [-1, 1] for the parameters of the noise scheduler to work properly (I sketch what I mean below).

I am wondering how we control the scale for text.

Thank you very much!
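For context, this is the kind of image-side scaling I mean, a minimal sketch of typical DDPM preprocessing (not code from this repo):

import torch

def scale_images_to_unit_range(images_uint8: torch.Tensor) -> torch.Tensor:
    # Map pixel values from [0, 255] to [-1, 1], the range DDPM noise schedules assume.
    return images_uint8.float() / 127.5 - 1.0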

@XiangLi1999 (Owner)

Hi,

Thanks for the question. We are not mapping the word embeddings to be within [-1, 1], and this is different from image diffusion models.

There are three terms in the objective: (1) Lsimple (the MSE), (2) the reconstruction term (i.e. decoder_nll), and (3) the prior term (tT_loss), as in

terms["loss"] = terms["mse"] + (decoder_nll + tT_loss)

Term (2) prevents the embedding norm from being too small; term (3) prevents it from being too large.

Hope this helps!
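To make the intuition concrete, here is a simplified, self-contained sketch of how the three terms could fit together. This is an illustration only, not the exact code in this repo; the model signature, noise schedule, and prior term are deliberately simplified.

import torch
import torch.nn.functional as F

def diffusion_lm_loss_sketch(model, emb, lm_head, input_ids, alpha_bar_t, alpha_bar_T):
    # (0) Continuous word embeddings are the diffusion target; note that they
    #     are NOT rescaled to [-1, 1].
    x_start = emb(input_ids)                               # (batch, seq, dim)
    noise = torch.randn_like(x_start)
    # Forward diffusion at step t: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps
    x_t = alpha_bar_t ** 0.5 * x_start + (1 - alpha_bar_t) ** 0.5 * noise
    pred_x_start = model(x_t)                              # model predicts x_0

    # (1) Lsimple: plain MSE between the predicted and true x_0.
    mse = F.mse_loss(pred_x_start, x_start)

    # (2) decoder_nll: round the continuous embeddings back to discrete tokens.
    #     If embedding norms collapse, tokens become indistinguishable and this
    #     term grows, so it keeps the norms from getting too small.
    logits = lm_head(x_start)                              # (batch, seq, vocab)
    decoder_nll = F.cross_entropy(logits.view(-1, logits.size(-1)),
                                  input_ids.view(-1))

    # (3) tT_loss: q(x_T | x_0) should match the N(0, I) prior; the x_0-dependent
    #     part grows with ||x_0||^2, so it keeps the norms from getting too large.
    tT_loss = (alpha_bar_T * x_start ** 2).mean()

    return mse + decoder_nll + tT_loss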


tj-zhu commented Nov 19, 2022

Yes, this explains it! Thank you very much for the quick response and the great explanation!

@tj-zhu tj-zhu closed this as completed Nov 19, 2022
@tj-zhu tj-zhu reopened this Nov 20, 2022

tj-zhu commented Nov 20, 2022

Hi @XiangLi1999, I am sorry for reopening the issue. I just have one more question about the loss function.

Can I ask why, in the decoder_nll loss, we input x_start instead of the predicted x_start?
You mentioned the decoder_nll is there to prevent the word embeddings from being too small. I assume that's because if the word embeddings are too small, the noise will dominate, it will be difficult for the model to denoise, and the reconstruction loss will be high? Please correct me if I am wrong.

If that's the purpose of this reconstruction loss, then we would need to use the predicted x_start (the denoised version) to calculate the reconstruction loss, right?

Sorry if the answer is obvious, but I didn't get it. Thank you very much for your help!

decoder_nll = self.token_discrete_loss(x_start, get_logits, input_ids)
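For reference, my reading of what that call computes, written out as a rough sketch based only on the call site (not the actual implementation):

import torch.nn.functional as F

def token_discrete_loss_sketch(x_start, get_logits, input_ids):
    # Project the continuous embeddings back to vocabulary logits and score
    # them against the ground-truth token ids.
    logits = get_logits(x_start)                           # (batch, seq, vocab)
    nll = F.cross_entropy(logits.view(-1, logits.size(-1)),
                          input_ids.view(-1),
                          reduction="none")
    return nll.view(input_ids.shape).mean(dim=-1)          # per-sequence NLL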
