Official PyTorch implementation of our ICLR 2025 paper on general continual learning, "Advancing Prompt-Based Methods for Replay-Independent General Continual Learning". (The arXiv/camera-ready version is coming soon!)
Our proposed MISA (Mask and Initial-Session Adaptation) combines a forgetting-aware initial session adaptation with a non-parametric logit mask to facilitate general continual learning, as shown in the following figure:
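To illustrate the idea behind the non-parametric logit mask, the sketch below restricts predictions to classes observed so far by masking unseen-class logits. This is a minimal illustration only; the function name and tensor shapes are assumptions, not the repository's actual API.

```python
import torch

def masked_logits(logits: torch.Tensor, seen_classes: list) -> torch.Tensor:
    """Illustrative sketch of a non-parametric logit mask: logits of
    classes not yet seen are set to -inf so they receive zero probability
    after softmax. Not the exact MISA implementation."""
    mask = torch.full_like(logits, float("-inf"))
    mask[:, seen_classes] = 0.0  # keep logits of seen classes unchanged
    return logits + mask
```

Because the mask is built directly from the set of seen classes, it introduces no learnable parameters.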
Please make sure that the packages listed in the environment.yml file are installed.
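A typical way to set this up is with conda (assuming conda is installed; the environment name is whatever the `name` field in environment.yml specifies):

```shell
# Create the environment from the provided spec and activate it.
conda env create -f environment.yml
conda activate <env-name>   # replace with the name field in environment.yml
```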
To warm up the prompt parameters with our initial session adaptation and forgetting-aware minimization, please run:
. scripts/misa_fam.sh
The warmed-up prompts will be stored in the pretrained_prompt/ folder.
We also provide pretrained prompt parameters in the pretrained_prompt/ folder to facilitate the reproduction of our results.
To test different methods on different datasets, run the corresponding script after setting the desired dataset entry in the file:
. scripts/misa.sh
This implementation builds on the source code of MVP.
If you find our code or paper useful, please consider giving us a star or citing our work.
@inproceedings{kang2025advancing,
title={Advancing Prompt-Based Methods for Replay-Independent General Continual Learning},
author={Zhiqi KANG and Liyuan Wang and Xingxing Zhang and Karteek Alahari},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025}
}
