Official repository for the paper "LMM-driven Semantic Image-Text Coding for Ultra Low-bitrate Image Compression" (IEEE VCIP 2024)
Check out our presentation poster!
This is the official repository for the paper "LMM-driven Semantic Image-Text Coding for Ultra Low-bitrate Image Compression". The full paper is available on arXiv.
Please feel free to contact Murai (octachoron(at)suou.waseda.jp) or Sun Heming, or to post an issue, if you have any questions.
We provide demo inference code for Google Colaboratory, so you can try the inference without any environment setup. Just click the 'Open in Colab' button above.
Python 3.10 and some other packages are needed. Please refer to the How to Use section below.
Our experiments and verification are conducted on Linux (Ubuntu 22.04) in a Docker container with CUDA 12.1 and PyTorch 2.1.
- First, clone this repository.
git clone https://github.com/tokkiwa/TextImageCoding/
cd TextImageCoding
- Download the DiffBIR weights and our pre-trained weights to the /weights folder and the /lic-weights/cheng folder, respectively.
The weights for DiffBIR are available at https://github.com/XPixelGroup/DiffBIR. We use the 'v1_general' weights throughout our experiments.
Our pre-trained weights are available at this link.
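The expected folder layout can be prepared with a short shell snippet (folder names are taken from the instructions above; the checkpoints themselves must be downloaded manually from the links above):

```shell
# Create the weight folders expected by the inference scripts.
mkdir -p weights lic-weights/cheng

# Place the DiffBIR 'v1_general' checkpoint into ./weights and the
# pre-trained LIC weights into ./lic-weights/cheng, then verify:
ls weights lic-weights/cheng
```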
- Install requirements (using virtual environment is recommended).
pip install -r requirements.txt
Code for caption generation and compression can be found in llavanextCaption_Compression.ipynb.
We provide text captions for the Kodak image dataset. Please download the Kodak dataset, place it in the ImageTextCoding/kodak folder, and run
bash run.sh
with the necessary specifications.
For other datasets, please generate and compress the captions by running llavanextCaption_Compression.ipynb, place the output CSV in the df folder, and specify the dataset in run.sh.
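To see why text captions fit an ultra-low-bitrate budget, here is a rough back-of-the-envelope sketch using zlib as a stand-in text compressor (the actual pipeline uses an LM-based coder; the caption text and the 768×512 Kodak resolution below are only illustrative):

```python
import zlib

def caption_bpp(caption: str, width: int = 768, height: int = 512) -> float:
    """Bits-per-pixel of a zlib-compressed caption over a Kodak-size image."""
    compressed = zlib.compress(caption.encode("utf-8"), level=9)
    return len(compressed) * 8 / (width * height)

# A one-sentence caption costs on the order of 0.001 bpp for a Kodak image.
bpp = caption_bpp("A red barn beside a calm lake under a cloudy sky.")
```

An LM-based compressor such as the one used in this pipeline compresses natural-language captions considerably better than zlib, pushing the rate even lower.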
Our training code is based on CompressAI. Please run lic/train.sh, specifying the models, datasets, and parameters.
Our code is based on MISC, CompressAI, GPTZip, and DiffBIR. We thank the authors for releasing their excellent work.
