
Fix --reduce_memory in finetune_on_pregenerated #515

Merged 2 commits into huggingface:master on Apr 23, 2019
Conversation

Rocketknight1 (Member)
On reviewing the code I realized the --reduce_memory code path in finetune_on_pregenerated.py had a bug, and it also wasn't being exercised because the relevant argument wasn't being passed correctly. The bug has been fixed and the argument is now passed correctly. Performance still seems good, so it should now be possible to train without loading an entire epoch of training data into memory.
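For context, the general idea behind a --reduce_memory option like this is to back the per-epoch feature arrays with memory-mapped files on disk rather than holding them all in RAM. The sketch below is illustrative only, not the PR's actual code; the function name, array names, dtypes, and shapes are hypothetical.

```python
# Illustrative sketch of the --reduce_memory idea: allocate epoch feature
# arrays as np.memmap files on disk so pages are loaded lazily, instead of
# keeping the whole epoch in RAM. All names/shapes here are hypothetical.
import os
import tempfile

import numpy as np


def make_epoch_array(num_samples, seq_len, reduce_memory, working_dir=None):
    """Allocate an input_ids array for one epoch, on disk if reduce_memory."""
    if reduce_memory:
        # Memory-mapped array: data lives in a file; reads/writes touch
        # only the pages actually accessed.
        working_dir = working_dir or tempfile.mkdtemp()
        return np.memmap(
            os.path.join(working_dir, "input_ids.memmap"),
            mode="w+",
            dtype=np.int32,
            shape=(num_samples, seq_len),
        )
    # Plain in-memory array: the whole epoch is held in RAM.
    return np.zeros((num_samples, seq_len), dtype=np.int32)


ids = make_epoch_array(num_samples=4, seq_len=8, reduce_memory=True)
ids[0, :3] = [101, 2023, 102]  # writes go straight to the on-disk buffer
```

Both branches return an object with the same NumPy array interface, so the training loop can index into it identically either way; the bug class this PR fixes is precisely the flag not reaching the code that chooses between the two branches.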

@thomwolf thomwolf merged commit c36cca0 into huggingface:master Apr 23, 2019
thomwolf (Member)
Good catch!
