# SAH-SCI: Self-Supervised Adapter for Efficient Hyperspectral Snapshot Compressive Imaging (ECCV 2024)
Haijin Zeng\*, Yuxi Liu\*, Yongyong Chen, Youfa Liu, Chong Peng, Jingyong Su

IMEC-Ghent University, Belgium; Harbin Institute of Technology (Shenzhen)
*Visual comparison of reconstructions: HDNET | HDNET + SAH | MST++ | MST++ + SAH (images not included here).*
## Abstract
Hyperspectral image (HSI) reconstruction is vital for recovering spatial-spectral information from compressed measurements in coded aperture snapshot spectral imaging (CASSI) systems. Despite the effectiveness of end-to-end and deep unfolding methods, their reliance on substantial training data poses challenges, notably the scarcity of labeled HSIs. Existing approaches often train on limited datasets, such as KAIST and CAVE, leading to biased models with poor generalization capabilities. Addressing these challenges, we propose a universal Self-Supervised Adapter for Hyperspectral Snapshot Compressive Imaging (SAH-SCI). Unlike full fine-tuning or linear probing, SAH-SCI enhances model generalization by training a lightweight adapter while preserving the original model's parameters. We propose a novel approach that combines spectral and spatial adaptation to enhance an image model's capacity for spatial-spectral reasoning. Additionally, we introduce a customized adapter self-supervised loss function that captures the consistency, group invariance, and image uncertainty of CASSI imaging. This approach effectively reduces the solution space for ill-posed HSI reconstruction. Experimental results demonstrate SAH's superiority over previous methods with fewer parameters, offering simplicity and adaptability to any end-to-end or unfolding method. Our approach paves the way for leveraging more robust image foundation models in future hyperspectral imaging tasks.
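As a rough illustration of the adapter recipe described above (pretrained base weights frozen, only the lightweight adapter trained), the parameter split could look like the following sketch. The `"adapter"` name filter is an assumption for illustration only, not the repository's actual parameter-naming convention.

```python
# Hypothetical sketch of adapter-style tuning: keep the pretrained base model
# frozen and mark only adapter parameters as trainable. The "adapter" substring
# filter below is an assumption for illustration.

def split_trainable(param_names, adapter_key="adapter"):
    """Partition parameter names into (frozen, trainable) for adapter tuning."""
    frozen, trainable = [], []
    for name in param_names:
        (trainable if adapter_key in name else frozen).append(name)
    return frozen, trainable

# Example: only the adapter branch would receive gradient updates.
names = ["backbone.conv1.weight", "adapter.down.weight", "adapter.up.weight"]
frozen, trainable = split_trainable(names)
print(frozen)     # ['backbone.conv1.weight']
print(trainable)  # ['adapter.down.weight', 'adapter.up.weight']
```

In an actual PyTorch training loop, the frozen names would have `requires_grad` set to `False`, and only the trainable subset would be passed to the optimizer.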
## Preparation

Download the pretrained model zoo from (Google Drive / Baidu Disk, code: mst1) or use your own pretrained model, and place it in ./model_zoo/. Prepare your dataset and place it in ./datasets/your dataset, and put the CASSI mask in ./datasets/mask/. We use ICVL, Harvard, and NTIRE 2022 to validate our approach.

This code is based on HDNET trained on ICVL. To use your own dataset and pretrained model, please modify the dataset-loading method in utils.py and the pretrained-model-loading method in train.py.
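If you swap in your own data, the loader in utils.py might be replaced with something along these lines. The `.npy` layout and the `HSIFolderDataset` name are assumptions for illustration; the shipped datasets may store cubes in a different format (e.g. `.mat`).

```python
# Hypothetical sketch: a minimal folder-based HSI loader, assuming each
# spectral cube is stored as a .npy array of shape (H, W, bands).
# The class name and file layout are illustrative, not the repo's actual API.
import os
import numpy as np

class HSIFolderDataset:
    """Lists spectral cubes in a folder and loads them lazily by index."""

    def __init__(self, root, ext=".npy"):
        self.paths = sorted(
            os.path.join(root, f) for f in os.listdir(root) if f.endswith(ext)
        )

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # Load one cube and cast to float32, the usual dtype for training.
        return np.load(self.paths[idx]).astype(np.float32)
```

A class with this `__len__`/`__getitem__` interface can be wrapped by a standard PyTorch `DataLoader` with minimal changes.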
## Installation

```shell
pip install -r requirements.txt
```
## Training

```shell
python train.py \
    --gpu_id 0 \
    --dataset_path ./datasets/ICVL/ \
    --mask_path ./datasets/mask/ \
    --model_path ./checkpoint
```
The trained SAH model should be placed in ./checkpoint.
## Testing

```shell
python test.py \
    --gpu_id 0 \
    --dataset_path ./datasets/ICVL/ \
    --checkpoint_path ./checkpoint/(trained SAH.pth)
```
## Citation

If you find the code helpful in your research or work, please cite the following paper:

```bibtex
@InProceedings{liu2024sahsci,
    title     = {SAH-SCI: Self-Supervised Adapter for Efficient Hyperspectral Snapshot Compressive Imaging},
    author    = {Haijin Zeng and Yuxi Liu and Yongyong Chen and Youfa Liu and Chong Peng and Jingyong Su},
    booktitle = {ECCV},
    year      = {2024},
}
```
## Acknowledgements

This implementation is based on and inspired by https://github.com/caiyuanhao1998/MST. Thanks for their generous open-source release.