We provide experimental training code to help developers train their own compression models for a specific bit-rate range. Please check the train/ sub-directory.
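Models in this family are trained with the usual rate-distortion objective, L = R + λ·D, where λ selects the operating bit-rate. The following is only a minimal sketch of one such training step; `model`, `loader`, and the `lmbda` value are hypothetical placeholders and do not reflect the actual interfaces of the train/ scripts.

```python
import torch

# Minimal rate-distortion training step (illustrative only).
# `model` is assumed to return (reconstruction, symbol likelihoods).
def train_step(model, x, optimizer, lmbda=0.01):
    x_hat, likelihoods = model(x)
    # Rate: expected code length, in bits per pixel.
    num_pixels = x.numel() / x.shape[1]          # N * H * W for NCHW input
    bpp = -torch.log2(likelihoods).sum() / num_pixels
    # Distortion: mean squared error of the reconstruction.
    mse = torch.mean((x_hat - x) ** 2)
    loss = bpp + lmbda * mse                     # rate-distortion trade-off
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Sweeping λ over a range of values yields models covering the corresponding range of bit-rates.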
This is the implementation of the paper:
Yueyu Hu, Wenhan Yang, Jiaying Liu, Coarse-to-Fine Hyper-Prior Modeling for Learned Image Compression, AAAI Conference on Artificial Intelligence (AAAI), 2020
and also the journal version,
Yueyu Hu, Wenhan Yang, Zhan Ma, Jiaying Liu, Learning End-to-End Lossy Image Compression: A Benchmark, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021.
This is the PyTorch version of the Coarse-to-Fine Hyper-Prior model. The code loads and converts weights trained with TensorFlow, originally provided at Coarse2Fine-ImaComp. In addition, this version contains several improvements over the original:
- We have a brand-new arithmetic coder implementation (in C++), which makes encoding and decoding significantly faster (roughly 10x or more).
- We now fully support GPU-accelerated encoding and decoding, which can be toggled with "--device cuda".
- Partitioning is implemented, enabling images to be compressed and decompressed on GPUs with limited memory (a conceptual sketch follows below).
These new features are still being tested. If you encounter any problems, please feel free to contact me.
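Conceptually, partitioning splits the input into tiles small enough to fit in GPU memory and processes them one at a time. The sketch below only illustrates the idea; the function, tile size, and boundary handling are hypothetical and simplified (the actual logic lives in the repository's codec code).

```python
import torch

# Hypothetical tile-based processing for limited GPU memory (sketch only).
# Boundary padding and tile overlap, which a real codec must handle, are
# omitted for brevity; `fn` is assumed to preserve the spatial shape.
def process_in_tiles(x, fn, tile=512):
    _, _, h, w = x.shape
    out = torch.empty_like(x)
    for top in range(0, h, tile):
        for left in range(0, w, tile):
            patch = x[:, :, top:top + tile, left:left + tile]
            out[:, :, top:top + tile, left:left + tile] = fn(patch)
    return out
```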
Before running the Python script, you need to compile the arithmetic coder with:
```
g++ module_arithmeticcoding.cpp -o module_arithmeticcoding
```
You may first download the trained weights from Google Drive and place the .pk files under the models folder (that is, so that './models/model0_qp1.pk' exists).
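Before encoding, you can quickly verify that a weight file is in place; this minimal check uses the path from the example above (adjust the file name for other QP and model-type settings):

```python
import os

# Sanity check that a downloaded weight file is where AppEncDec.py expects it.
path = "./models/model0_qp1.pk"
if not os.path.isfile(path):
    raise FileNotFoundError(f"Missing weight file: {path} - download it first.")
```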
Example usage:
```
python AppEncDec.py -h
python AppEncDec.py compress example.png example.bin --qp 1 --model_type 0 --device cuda
python AppEncDec.py decompress example.bin example_dec.png --device cuda
```
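To compress many images in one go, the CLI can be driven from a short script. The snippet below only uses the options shown above; the `images/` and `compressed/` directory names are placeholders.

```python
import pathlib
import subprocess

# Batch-compress every PNG in a folder using the CLI shown above.
out_dir = pathlib.Path("compressed")               # placeholder output directory
out_dir.mkdir(exist_ok=True)
for png in pathlib.Path("images").glob("*.png"):   # placeholder input directory
    bin_path = out_dir / (png.stem + ".bin")
    subprocess.run(
        ["python", "AppEncDec.py", "compress", str(png), str(bin_path),
         "--qp", "1", "--model_type", "0", "--device", "cuda"],
        check=True,
    )
```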
Detailed command-line options are documented in the app's help mode.