XCov loss layer #2126
base: master
Conversation
Can we have an MNIST example that uses this loss layer for training?
@koth, that's on our TODO list.
@JesseLivezey cool, thanks!
mark
@briancheung nice work; thanks for sharing.
The original implementation of this was in Pylearn2 (the very first one was actually in NumPy). That library was used to generate the results for the paper on arXiv. Our codebase for that implementation isn't as clean as this Caffe implementation, and Pylearn2 has to be modified quite a bit if you're doing anything besides supervised classification. We decided to release the Caffe version since it doesn't break any of the underlying Caffe design.
@briancheung Thanks for sharing the code. I have one question about your paper: in your experiment on Multi-PIE, if you use identity as the observed variable instead, would your network be able to learn the latent variations, such as pose and illumination?
@shuzhangcasia We haven't tried this experiment, but my intuition would be yes. That said, if you only use identity as your supervised variable, I would expect the other latent variations such as pose and illumination to remain 'entangled' with each other, albeit disentangled from variations in identity.
@briancheung Thanks for the comment; I would expect the same. One more thing: would you please send me a copy of the supplementary material for this paper? I couldn't find it anywhere.
XCov Loss Layer based on our recent paper:
http://arxiv.org/abs/1412.6583
Coded with:
@jackculpepper @JesseLivezey
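For reference, below is a minimal NumPy sketch of the cross-covariance penalty described in the paper. The function name and the standalone formulation are illustrative assumptions for exposition only, not the actual interface of the Caffe layer in this PR.

```python
import numpy as np

def xcov_loss(y, z):
    """Cross-covariance (XCov) penalty between two sets of activations.

    y: (N, D1) array of observed/supervised activations for a batch
    z: (N, D2) array of latent activations for the same batch
    Returns 0.5 * sum_ij C_ij^2, where C is the batch cross-covariance
    matrix between the (mean-centered) columns of y and z.
    """
    n = y.shape[0]
    yc = y - y.mean(axis=0, keepdims=True)  # center each observed unit
    zc = z - z.mean(axis=0, keepdims=True)  # center each latent unit
    c = yc.T @ zc / n                       # (D1, D2) cross-covariance matrix
    return 0.5 * np.sum(c ** 2)
```

In the paper's setup, y would correspond to the supervised (observed) units and z to the unsupervised latent units, with this penalty added to the main training objective via a weighting coefficient to encourage the two representations to be decorrelated.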