Code repository for the 2024 CoLLAs paper "Reflecting on the State of Rehearsal-free Continual Learning with Pretrained Models"

Reflecting on the State of Rehearsal-Free Continual Learning with Pretrained Models


Overview

This repository contains the code and resources for our paper "Reflecting on the State of Rehearsal-Free Continual Learning with Pretrained Models", published at CoLLAs 2024. The paper examines the effectiveness of rehearsal-free continual learning (RFCL) methods that apply parameter-efficient finetuning (PEFT) techniques to pretrained models. We investigate the influence of query-based mechanisms in RFCL and find that simpler PEFT methods often match or exceed the performance of more complex systems. Our findings aim to provide a grounded understanding of RFCL with pretrained models.

Table of Contents

  1. Installation
  2. Setup Instructions
  3. Usage
  4. Experiments
  5. Citing This Work
  6. License

Installation

To get started, clone this repository and set up your environment.

    git clone https://github.com/ExplainableML/ReflectingOnRFCL.git
    cd ReflectingOnRFCL

Setup Instructions

Follow these steps to set up the environment and dependencies for this project.

Environment Setup:

conda create -n rfcl python=3.8
conda activate rfcl

Dependencies: Install the required libraries and dependencies:

pip install -r requirements.txt

Usage

Once setup is complete, you can run the experiments as described below. Each script is organized to reproduce specific parts of the study.

Basic Usage

To reproduce the results from the paper, run the following command:

python main.py --config-name <config_name>

We provide configuration files for each experiment in the configs directory; specify the desired file to run the corresponding experiment. All config files default to the Split CIFAR100 benchmark. To run other benchmarks, change the data config on line 2 of the config file to one of the config files in the data directory.
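For orientation, the top of a config file might look like the sketch below; the exact key names and file names are assumptions for illustration, not copied from the repository:

```yaml
# configs/vit_l2p.yaml (illustrative sketch, not the actual file)
defaults:
  - data: cifar100   # line 2: swap for another file in the data directory
                     # (e.g. an ImageNet-R config) to change the benchmark
```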

Experiments

This section provides an outline for running the experiments described in the paper.

Experiments Table 1: In this section, we provide detailed instructions for running the experiments in Table 1 of the paper. To get the base performance of L2P, DP, CODA and HiDe, run the following command:

python run_experiment1.py --config configs/vit_<method>.yaml

replacing <method> with the desired method (l2p, dp, coda, hide).
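All four base runs can be queued with a small shell loop. The sketch below only echoes each command so it can be inspected first; drop the echo to actually execute them:

```shell
# Print the Table 1 base-experiment command for each method
for method in l2p dp coda hide; do
  echo python run_experiment1.py --config configs/vit_${method}.yaml
done
```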

To reproduce our experiments with the oracle query function, replace the query config in the config file with ViT-B_16_ft.

python run_experiment1.py --config configs/vit_<method>.yaml ++module.model.query="ViT-B_16_ft"

Note that this is only implemented for the L2P, DP, and CODA methods on CIFAR100 and ImageNet-r and expects a ViT-B_16 model fine-tuned on the target task under 'data/cifar100/cifar100_finetune.pt'.
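For context, the query mechanism that the oracle replaces can be sketched conceptually as follows: a frozen (or, for the oracle, task-finetuned) encoder produces a query feature, and the prompts whose learned keys are most cosine-similar to it are selected. The function and variable names below are illustrative, not the repository's actual implementation:

```python
import numpy as np

def select_prompts(query_feat, prompt_keys, top_k=5):
    """Return indices of the top_k prompt keys most similar to the
    query feature under cosine similarity (L2P-style selection)."""
    q = query_feat / np.linalg.norm(query_feat)
    k = prompt_keys / np.linalg.norm(prompt_keys, axis=1, keepdims=True)
    sims = k @ q                      # cosine similarity to each key
    return np.argsort(-sims)[:top_k]  # indices, most similar first

# Toy check: a query lying near key 3 should select key 3 first.
rng = np.random.default_rng(0)
keys = rng.normal(size=(10, 8))               # 10 prompt keys, dim 8
query = keys[3] + 0.01 * rng.normal(size=8)   # slightly perturbed key 3
idx = select_prompts(query, keys, top_k=3)
print(idx[0])  # → 3
```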

Experiments Tables 2 and 3: In this section, we provide detailed instructions for running the experiments in Tables 2 and 3 of the paper. To reproduce the results presented there, execute the following command:

python run_experiment2.py --config configs/vit_<method>.yaml

replacing <method> with the desired method.

Experiments Table 5: In this section, we provide detailed instructions for running the experiments in Table 5 and Figure 2 of the paper. To reproduce those results, execute the following command:

python run_experiment2.py --config configs/vit_onlyprompt_<reg_method>.yaml

replacing <reg_method> with the desired regularization method.

Citing This Work

If you find this work useful, please consider citing our paper:

@inproceedings{YourLastName2024,
  title={Reflecting on the State of Rehearsal-Free Continual Learning with Pretrained Models},
  author={Your Name and Collaborator Names},
  booktitle={Conference on Lifelong Learning Agents (CoLLAs)},
  year={2024},
  url={Link to paper or repository}
}

License

This codebase builds on the codebase of the LAE paper by Qiankun Gao and is licensed under the Apache-2.0 license. The implementations of the baseline methods integrate code from their respective original repositories.
