
How was the neg_sqr_dist formulated? Could you point out the equation in the paper? #18

Open
kalyanainala opened this issue Jul 20, 2020 · 1 comment


@kalyanainala

In the tensorflow folder, the Python file resnet_model.py has the following lines:

```python
XY = tf.matmul(feat, means, transpose_b=True)
XX = tf.reduce_sum(tf.square(feat), axis=1, keep_dims=True)
YY = tf.reduce_sum(tf.square(tf.transpose(means)), axis=0, keep_dims=True)
neg_sqr_dist = -0.5 * (XX - 2.0 * XY + YY)
```

I have read the paper, but I couldn't work out how the `neg_sqr_dist = -0.5 * (XX - 2.0 * XY + YY)` expression was derived. Could you point out the equation in the paper that gives `-0.5 * (XX - 2.0 * XY + YY)`?
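For anyone reading along, the snippet appears to be the standard vectorized expansion of the pairwise squared Euclidean distance, ||x − μ||² = ||x||² − 2·x·μ + ||μ||². Here is a minimal NumPy sketch (my own illustration, not code from the repo; the shapes are assumptions) checking that the expanded form matches the direct pairwise computation:

```python
import numpy as np

# Assumed shapes: feat is (N, D) feature vectors, means is (C, D) class centers.
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 3))
means = rng.standard_normal((5, 3))

# Expanded form, mirroring the TensorFlow snippet above.
XY = feat @ means.T                               # (N, C) cross terms x . mu
XX = np.sum(feat ** 2, axis=1, keepdims=True)     # (N, 1) squared norms ||x||^2
YY = np.sum(means.T ** 2, axis=0, keepdims=True)  # (1, C) squared norms ||mu||^2
neg_sqr_dist = -0.5 * (XX - 2.0 * XY + YY)        # (N, C)

# Direct form: -0.5 * ||x_i - mu_c||^2 for every pair (i, c).
direct = -0.5 * np.sum((feat[:, None, :] - means[None, :, :]) ** 2, axis=2)

assert np.allclose(neg_sqr_dist, direct)
```

The expanded form avoids materializing the (N, C, D) difference tensor, which is why implementations usually prefer it.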

@kalyanainala
Author

kalyanainala commented Jul 22, 2020

I believe it is equation 18 that gives the formula for `neg_sqr_dist`, via the expansion (a − b)² = a² − 2ab + b², where a and b are x_i and the mean respectively.
[image: equation 18 from the paper]
If I am right, why was the covariance matrix not considered in the equation?
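One possible reading (my assumption, not an answer from the authors): if the covariance in equation 18 is taken to be isotropic and shared across classes, Σ = σ²I, then the Mahalanobis term −½(x − μ)ᵀΣ⁻¹(x − μ) reduces to a scaled squared Euclidean distance, and with σ² = 1 the covariance drops out entirely. A small sketch of that simplification:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(3)
mu = rng.standard_normal(3)
diff = x - mu

# Assumed simplification: covariance Sigma = sigma^2 * I (isotropic, shared).
sigma2 = 1.0
Sigma_inv = np.eye(3) / sigma2

# Full Mahalanobis form: -0.5 * (x - mu)^T Sigma^-1 (x - mu)
mahalanobis = -0.5 * diff @ Sigma_inv @ diff

# Euclidean form actually computed in the snippet (scaled by 1/sigma^2).
euclidean = -0.5 * np.sum(diff ** 2) / sigma2

assert np.isclose(mahalanobis, euclidean)
```

So under that assumption the code's Euclidean version is not ignoring the covariance, just fixing it to the identity.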
