This repository has been archived by the owner on Jul 1, 2024. It is now read-only.

Support freezing model anywhere in fine tuning #728

Open: wants to merge 1 commit into base: main

Commits on Mar 25, 2021

  1. Support freezing model anywhere in fine tuning

    Summary:
    Add support for freezing the model up to an arbitrary point in the fine tuning task. Users can name a specific module; during fine tuning the model is frozen up to, but not including, that module. This is useful in setups like the FixRes paper, where both the head and the last batch norm layer are trained during fine tuning.
    
    Example fblearner run using freeze_until to freeze the trunk model: f259575699
    Example fblearner run using freeze_until to unfreeze the last batchnorm and head:
    f259575306
    
    - Adds a new config option `freeze_until` that specifies the point up to which the model is frozen. Options are `head` or the name of a module in the model. The model is frozen up to, but not including, that module, and unfrozen from that point onwards. `freeze_until: 'head'` behaves the same as `freeze_trunk: true`.
    - Adds documentation for fine tuning task
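The freezing semantics described above ("frozen until but not including the named module") can be sketched in plain PyTorch. This is an illustrative stand-in, not the actual ClassyVision implementation; the function name `freeze_until` mirrors the config option.

```python
import torch.nn as nn

def freeze_until(model: nn.Module, module_name: str) -> None:
    """Freeze parameters of all top-level children before `module_name`;
    that module and everything after it remain trainable."""
    found = False
    for name, child in model.named_children():
        if name == module_name:
            found = True
        for p in child.parameters():
            p.requires_grad = found
    if not found:
        raise ValueError(f"module {module_name!r} not found in model")

# Example: freeze everything before the last batch norm, mirroring the
# FixRes-style fine tuning described above (bn and head stay trainable).
model = nn.Sequential()
model.add_module("conv", nn.Conv2d(3, 8, 3))
model.add_module("bn", nn.BatchNorm2d(8))
model.add_module("head", nn.Linear(8, 10))

freeze_until(model, "bn")
frozen = [n for n, p in model.named_parameters() if not p.requires_grad]
```

With `freeze_until(model, "head")` only the head would train, matching the older `freeze_trunk: true` behavior.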
    
    Differential Revision: D27199092
    
    fbshipit-source-id: b12dc00563da45806317f60e2abbcc4237bec94c
    lauragustafson authored and facebook-github-bot committed Mar 25, 2021
    Commit e833da0