Make block learning methods more modular #188

Merged: lorenzoh merged 8 commits into master from lorenzoh/modular-methods on Jan 3, 2022

Conversation

lorenzoh (Member) commented on Dec 20, 2021

darsnack (Member)

I took a look over the updated notebook. The FastAI.jl hooks make sense to me, and it seems like the boilerplate has been reduced quite a bit from the original example. The only thing that's confusing for me is why EmbeddingMethod requires so many blocks (e.g. x, y, ŷ, sample, etc.). Maybe an explanation or comment in the code describing their purpose would help?

In terms of the tutorial aspect, I would add something at the start giving an overview of each piece of FastAI.jl that we will override/extend. Something like:

The FastAI.jl default methods and training loops are designed for supervised learning. To support our unsupervised approach, we will extend FastAI.jl by
- creating a data iterator over only the samples (no labels)
- defining our own block method, `EmbeddingMethod` (the analogue to `SupervisedMethod`)
- defining our own FluxTraining.jl phase, `VAETrainingPhase`, which will take the gradients with respect to our unsupervised loss correctly
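A minimal sketch of the first of those steps, assuming the labeled data container yields `(input, label)` observations and that FastAI.jl's `mapobs`/`getobs` data container utilities are available (the concrete container below is just a toy stand-in):

using FastAI

# Toy stand-in for a labeled data container: a vector of (input, label) pairs.
data = [(rand(Float32, 28, 28), rand(0:9)) for _ in 1:100]

# Lazily transform each observation to keep only the input and drop the label,
# so the iterator yields unlabeled samples for the autoencoder.
unlabeled = mapobs(first, data)

getobs(unlabeled, 1)  # a 28×28 array, no label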

lorenzoh (Member, Author) commented on Dec 20, 2021

Thanks for the feedback! Planning to either (1) put the source for EmbeddingMethod into FastAI.jl itself or (2) make the definition shorter, using just a function with only the blocks needed for this tutorial, e.g.

function EmbeddingMethod(block, encodings)
    sample = block
    x = ŷ = encodedsample = FastAI.encodedblockfilled(encodings, block)
    return BlockMethod((; sample, x, ŷ, encodedsample), encodings)
end

Then some explanation of what those blocks mean and maybe where they're needed (i.e. `showoutputs` needs `ŷ`). Would that make the part clearer?
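For illustration, a hypothetical construction of such a method (the block and encoding names below are plausible examples from the image tutorials, not fixed by this PR):

using FastAI

# Hypothetical: an image sample block plus a preprocessing encoding that turns
# images into model-ready arrays.
method = EmbeddingMethod(Image{2}(), (ImagePreprocessing(),))

# The method now carries the `sample`, `x`, `ŷ` and `encodedsample` blocks,
# which is what visualization helpers such as `showoutputs` rely on.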

darsnack (Member)

> Then some explanation of what those blocks mean and maybe where they're needed (i.e. `showoutputs` needs `ŷ`). Would that make the part clearer?

Yes, that makes it much clearer!

lorenzoh merged commit 1d4b806 into master on Jan 3, 2022
lorenzoh deleted the lorenzoh/modular-methods branch on March 10, 2022