
Camelyon16: pretrained embedders #40

Closed
Bontempogianpaolo1 opened this issue May 25, 2022 · 2 comments

@Bontempogianpaolo1

Hi @binli123,
Given the data obtained as in #39, I extracted the features using both model-v0 and model-v2. The difference between their performance on the downstream task is evident. Here is the AUC:
[Figure: AUC comparison of the downstream task using model-v0 vs. model-v2 features]

Looking at #12, you say they differ in batch size and training time. Could you be more specific?

@binli123 (Owner)

In my case both converge, but v2 converges significantly more slowly, at around 100 epochs. If you load "init.pth", it is much faster.
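Loading "init.pth" as a starting point can be sketched as below. This is a minimal PyTorch illustration, not the repository's actual code: the embedder architecture is hypothetical, and here the checkpoint is saved and reloaded locally only to demonstrate the mechanics (in practice you would use the "init.pth" shipped with the repo).

```python
import torch
import torch.nn as nn

# Hypothetical embedder; the real dsmil-wsi architecture may differ.
embedder = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))

# In practice "init.pth" comes from the repository; we save a state
# dict here only so the example is self-contained.
torch.save(embedder.state_dict(), "init.pth")

# Start a fresh model from the saved initialization instead of
# random weights, which is what speeds up convergence.
fresh = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))
fresh.load_state_dict(torch.load("init.pth", map_location="cpu"))
```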

@binli123 (Owner)

I incorporated a simple weight-initialization method in the latest commit, which helps stabilize training.
