
Learned representations after pretext task #130

Open
emessori opened this issue Dec 6, 2022 · 2 comments

emessori commented Dec 6, 2022

Hi @wvangansbeke.
First of all, thank you for your work!
I managed to run your program on a custom dataset; however, although the nearest neighbors look good to the naked eye, the clustering step is not yielding satisfying results at the moment. I am currently re-running this step with hyperparameters that may be more appropriate for my dataset, but I was wondering whether there is a way to retrieve the representations learned through the pretext task. I suspect I should focus on output = model(input__).view(b, 2, -1) here, but I am not sure about it. Could you please provide some clues or insights?

P.S. I checked issue #53, but it was not helpful for my task.
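For reference, here is a minimal sketch of the shape convention that line implies. This is an assumption based on SimCLR-style pretext training with two augmented views per image (the model names and dimensions below are stand-ins, not the repo's actual code):

```python
import torch

# Assumption: each of the b images contributes two augmented views,
# interleaved in the batch, so the forward pass sees 2*b inputs and
# the output is reshaped back to (b, 2, feature_dim).
b, feature_dim = 4, 128
augmented_batch = torch.randn(2 * b, 3, 32, 32)  # two views per image

# Stand-in model; in the repo this would be the contrastive backbone + head.
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 32 * 32, feature_dim),
)

output = model(augmented_batch).view(b, 2, -1)
print(output.shape)  # torch.Size([4, 2, 128])
# output[:, 0] and output[:, 1] hold the embeddings of the two views,
# so either slice gives one (b, feature_dim) representation per image.
```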

@akshay-iyer

You could save the features in the mine_nearest_neighbors function. Those are your representations.
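As a generic illustration of that suggestion (a hedged PyTorch sketch, not the repo's actual mine_nearest_neighbors implementation): run the dataset through the pretrained backbone once, collect the feature vectors, and save them for later use.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def extract_features(backbone, dataloader, device="cpu"):
    """Pass every batch through the backbone in eval mode and
    stack the resulting feature vectors into one (N, D) tensor."""
    backbone.eval()
    feats = [backbone(images.to(device)).cpu() for images, _ in dataloader]
    return torch.cat(feats, dim=0)

# Tiny demonstration with a stand-in backbone and random "images".
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 16))
dataset = torch.utils.data.TensorDataset(
    torch.randn(20, 3, 8, 8), torch.zeros(20, dtype=torch.long))
loader = torch.utils.data.DataLoader(dataset, batch_size=8)

features = extract_features(backbone, loader)
print(features.shape)  # torch.Size([20, 16])
torch.save(features, "pretext_features.pt")  # reuse later, e.g. for k-NN mining
```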

@emessori (Author)

Thank you so much for answering: in fact, that is what I decided to do, but it's nice to have confirmation that it's right.
Do the learned representations refer to the "original" images or to their augmented versions?

Thank you again!
