tmlr-group/DecoupledVP
Understanding Model Reprogramming for CLIP via Decoupling Visual Prompts

License: MIT

Abstract: Model reprogramming adapts pretrained models to downstream tasks by modifying only the input and output spaces. Visual reprogramming (VR) is one instance for vision tasks that adds a trainable noise pattern (i.e., a visual prompt) to input images to facilitate downstream classification. The existing VR approaches for CLIP train a single visual prompt using all descriptions of different downstream classes. However, the limited learning capacity may result in (1) a failure to capture diverse aspects of the descriptions (e.g., shape, color, and texture), and (2) a possible bias toward less informative attributes that do not help distinguish between classes. In this paper, we introduce a decoupling-and-reweighting framework. Our decoupled visual prompts (DVP) are optimized using descriptions grouped by explicit causes (DVP-cse) or unsupervised clusters (DVP-cls). Then, we integrate the outputs of these visual prompts with a probabilistic reweighting matrix (PRM) that measures their contributions to each downstream class. Theoretically, DVP lowers the empirical risk bound. Experimentally, DVP outperforms baselines on average across 11 downstream datasets. Notably, the DVP-PRM integration enables insights into how individual visual prompts influence classification decisions, providing a probabilistic framework for understanding reprogramming.
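The integration step described above can be illustrated with a minimal sketch. This is not the repository's implementation; it only shows the idea of combining per-prompt class scores through a probabilistic reweighting matrix (PRM), with hypothetical array shapes: `K` decoupled visual prompts each produce similarity scores over `C` downstream classes, and each PRM column weights the prompts' contributions to one class.

```python
import numpy as np

def integrate_prompt_logits(logits_per_prompt, prm):
    """Combine per-prompt class scores via a probabilistic reweighting matrix.

    logits_per_prompt: (K, C) array -- scores from K visual prompts over
                       C downstream classes (hypothetical shapes).
    prm: (K, C) array -- non-negative weights; column c gives each prompt's
         contribution to class c.
    """
    prm = prm / prm.sum(axis=0, keepdims=True)   # normalize each class column
    return (logits_per_prompt * prm).sum(axis=0)  # (C,) integrated scores

# Toy example: 2 prompts, 3 classes
logits = np.array([[1.0, 0.0, 2.0],
                   [0.0, 2.0, 1.0]])
prm = np.array([[0.5, 0.25, 1.0],
                [0.5, 0.75, 0.0]])
scores = integrate_prompt_logits(logits, prm)  # -> [0.5, 1.5, 2.0]
```

Because each PRM column is a probability distribution over prompts, inspecting a column shows which visual prompt drives the decision for that class, which is the interpretability angle mentioned in the abstract.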

This repository is the official PyTorch implementation of the ICML 2025 paper: Understanding Model Reprogramming for CLIP via Decoupling Visual Prompts, authored by Chengyi Cai, Zesheng Ye, Lei Feng, Jianzhong Qi, and Feng Liu.

Framework

Environment

  • Python (3.10.0)
  • PyTorch (2.0.1)
  • TorchVision (0.15.2)

Installation

conda create -n reprogram python=3.10
conda activate reprogram
pip install -r requirement.txt

Dataset Preparation

Step 1: Downloading Images

To reproduce the results, please follow CoOp to download the datasets, then set `DOWNSTREAM_PATH = ""` in `cfg.py` of this repository to your data root. Resisc45 can be prepared following BlackVIP.
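For concreteness, a hypothetical `cfg.py` edit is shown below; the path and the dataset folder names are illustrative placeholders, assuming a CoOp-style layout where each dataset sits in its own subdirectory of the data root:

```python
# cfg.py -- point the repo at the root folder that holds the downloaded datasets
# (hypothetical path; e.g. the root might contain oxford_pets/, eurosat/, resisc45/, ...)
DOWNSTREAM_PATH = "/data/downstream"
```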

Step 2: Preparing Descriptions

  • Use the Generated Descriptions in Our Paper: Download the .json files provided by us in attributes/ and causes/.
  • Generate Your Customized Causes: Please first enter your API key in generate_causes.py, then run: python generate_causes.py. You can modify the prompt according to your needs.

Running the Code for DVP-cls & DVP-cse

python experiments/fs_dvp_cls.py --dataset [dataset]
python experiments/fs_dvp_cse.py --dataset [dataset]
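DVP-cls groups descriptions by unsupervised clustering of their text embeddings (the actual pipeline lives in experiments/fs_dvp_cls.py). A minimal, self-contained sketch of that grouping idea, assuming the description embeddings are precomputed and using a naive k-means in place of the repository's clustering:

```python
import numpy as np

def cluster_descriptions(embeddings, k, iters=20, seed=0):
    """Assign each description embedding to one of k clusters (naive k-means).

    embeddings: (N, D) array of text-encoder outputs (assumed precomputed).
    Returns an (N,) array of cluster ids; in DVP-cls, each cluster of
    descriptions would then train its own visual prompt.
    """
    rng = np.random.default_rng(seed)
    centers = embeddings[rng.choice(len(embeddings), k, replace=False)]
    for _ in range(iters):
        # squared Euclidean distance from every point to every center
        dists = ((embeddings[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):  # recompute each non-empty cluster's center
            if (labels == j).any():
                centers[j] = embeddings[labels == j].mean(axis=0)
    return labels

# Toy example: two well-separated groups of "description embeddings"
emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = cluster_descriptions(emb, k=2)
```

On this toy input the two nearby pairs end up in the same cluster and the distant pairs in different ones, mirroring how semantically similar descriptions (e.g. all shape-related ones) would share a prompt.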

Acknowledgements

This repo is built upon these previous works:

Citation

@inproceedings{cai2025understanding,
    title={Understanding Model Reprogramming for CLIP via Decoupling Visual Prompts},
    author={Chengyi Cai and Zesheng Ye and Lei Feng and Jianzhong Qi and Feng Liu},
    booktitle={International Conference on Machine Learning},
    year={2025}
}
