
XCov loss layer #2126

Open · wants to merge 9 commits into master
Conversation

briancheung

XCov Loss Layer based on our recent paper:
http://arxiv.org/abs/1412.6583

Coded with:
@jackculpepper @JesseLivezey
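For readers skimming the thread, the XCov cost described in the paper penalizes the cross-covariance between two groups of hidden activations (the observed/supervised units and the latent units), which encourages them to encode complementary factors. A minimal NumPy sketch of the forward cost follows; the function name and array shapes are my own for illustration and are not taken from this PR's C++ implementation:

```python
import numpy as np

def xcov_loss(y, z):
    """XCov penalty between two sets of activations.

    y: (N, D1) batch of one group of hidden units.
    z: (N, D2) batch of the other group.
    Returns 0.5 * ||Cov(y, z)||_F^2, i.e. half the sum of squared
    entries of the batch cross-covariance matrix.
    """
    n = y.shape[0]
    yc = y - y.mean(axis=0)        # center each unit over the batch
    zc = z - z.mean(axis=0)
    c = yc.T @ zc / n              # (D1, D2) cross-covariance matrix
    return 0.5 * np.sum(c ** 2)
```

Driving this penalty to zero decorrelates the two groups of units batch-wise, which is what allows the supervised units and the unsupervised z units to disentangle.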

@koth

koth commented Mar 18, 2015

Can we have an MNIST example that uses this loss layer for training?

@JesseLivezey

@koth, that's on our TODO list.

@koth

koth commented Mar 20, 2015

@JesseLivezey cool, thanks!

@briancheung
Author

We added an MNIST example for training with the XCov cost layer. It generates the transformations described in the paper (sample image below, varying the z variables).

[sample image: MNIST digits generated while varying the z variables]

@sunbaigui

mark

@jyegerlehner
Contributor

@briancheung nice work; thanks for sharing.
One comment: the paper says the implementation uses Pylearn2, but there's no mention of Caffe, unless I missed it.

@briancheung
Author

The original implementation of this was in Pylearn2 (the very first one was actually in NumPy), and that version was used to generate the results for the paper on arXiv. Our codebase for that implementation isn't as clean as this Caffe implementation; Pylearn2 has to be modified quite a bit if you're doing anything besides supervised classification. We decided to release the Caffe version because it doesn't break any of the underlying Caffe design.
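Since a Caffe loss layer has to implement both a Forward and a Backward pass, it may help to see the gradient of the XCov cost written out. With C = 0.5 * ||Yc^T Zc / N||_F^2 (Yc, Zc the batch-centered activations), the centering terms cancel and the gradients reduce to simple matrix products. A hedged NumPy sketch, with names of my own choosing rather than the PR's actual C++ code:

```python
import numpy as np

def xcov_loss_with_grads(y, z):
    """XCov cost plus its gradients w.r.t. both inputs.

    y: (N, D1), z: (N, D2).
    Returns (loss, dLoss/dy, dLoss/dz). Because zc and yc are
    batch-centered, the derivative through the mean vanishes,
    leaving grad_y = Zc C^T / N and grad_z = Yc C / N.
    """
    n = y.shape[0]
    yc = y - y.mean(axis=0)
    zc = z - z.mean(axis=0)
    c = yc.T @ zc / n              # (D1, D2) cross-covariance
    loss = 0.5 * np.sum(c ** 2)
    grad_y = zc @ c.T / n          # (N, D1)
    grad_z = yc @ c / n            # (N, D2)
    return loss, grad_y, grad_z
```

A finite-difference check against the loss is a cheap way to validate a Backward implementation like this before wiring it into a layer.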

@shuzhangcasia

@briancheung Thanks for sharing the code. I have one question about your paper: in your Multi-PIE experiment, if you instead use identity as the observed variable, will the network be able to learn the latent variations, such as pose and illumination?

@briancheung
Author

@shuzhangcasia We haven't tried this experiment, but my intuition is yes. However, if you only use identity as your supervised variable, I would expect the other latent variations, such as pose and illumination, to remain 'entangled' with each other, though disentangled from variations in identity.

@shuzhangcasia

@briancheung Thanks for the comment; I would expect the same. One more thing: would you please send me a copy of the supplementary material for this paper? I couldn't find it anywhere.
