
LLF-LUT (Lookup Table meets Local Laplacian Filter)

The implementation of the NeurIPS 2023 paper "Lookup Table meets Local Laplacian Filter: Pyramid Reconstruction Network for Tone Mapping" and of its journal (TPAMI) version "High-resolution Photo Enhancement in Real-time: A Laplacian Pyramid Network".

✨ News

  • 2025/10/13: Released the training and testing code for the TPAMI version.
  • 2025/10/12: Released our pretrained models for the TPAMI version at GoogleDrive and Baidudisk (code: fegh).
  • 2025/10/12: The extended version of this work was accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).

Highlights


🚀🚀 Welcome to the repo of LLF-LUT 🚀🚀

LLF-LUT is an effective end-to-end framework for HDR image tone mapping that performs global tone manipulation while preserving local edge details. Specifically, we build a lightweight transformer weight predictor on the bottom (low-frequency) level of the Laplacian pyramid to predict pixel-level, content-dependent weight maps. The input HDR image is transformed by trilinear interpolation through the basis 3D LUTs, and the results are multiplied by the weight maps to generate a coarse LDR image. To preserve local edge details and faithfully reconstruct the image from the Laplacian pyramid, we propose an image-adaptive learnable local Laplacian filter (LLF) that refines the high-frequency components while minimizing the use of computationally expensive convolutions on the high-resolution components for efficiency.
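For intuition, here is a minimal PyTorch sketch of the weighted fusion step described above. It assumes the K basis 3D LUTs have already been applied to the input via trilinear interpolation; the function and tensor names are ours for illustration and do not mirror the repo's API.

import torch

def fuse_lut_outputs(lut_outputs, weight_maps):
    # lut_outputs: (N, K, 3, H, W) -- the input image passed through each of the
    #              K basis 3D LUTs via trilinear interpolation.
    # weight_maps: (N, K, H, W)    -- pixel-level, content-dependent weights predicted
    #              by the lightweight transformer weight predictor.
    # Returns the coarse LDR image of shape (N, 3, H, W).
    return (weight_maps.unsqueeze(2) * lut_outputs).sum(dim=1)

# Shape check with random tensors and K = 3 basis LUTs.
coarse_ldr = fuse_lut_outputs(torch.rand(1, 3, 3, 256, 256), torch.rand(1, 3, 256, 256))
print(coarse_ldr.shape)  # torch.Size([1, 3, 256, 256])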

🛄🛄 Disclaimer 🛄🛄

"The disparities observed between the results of CLUT in our study and the original research can be attributed to differences in the fundamental tasks. Specifically, our study focuses on the transformation of 16-bit High Dynamic Range (HDR) images into 8-bit Low Dynamic Range (LDR) images. In contrast, the original paper primarily addressed 8-bit to 8-bit image enhancement. Furthermore, CLUT's parameter count stands at 952K in our paper, a result of the utilization of sLUT as the backbone for CLUT. Notably, when the backbone is modified to LUT, the parameter count is reduced to 292K."

🌟 Structure

The model architecture of LLF-LUT is shown below. Given an input 16-bit HDR image, we first decompose it into an adaptive Laplacian pyramid, yielding a set of high-frequency components and a low-frequency image. The adaptive Laplacian pyramid dynamically adjusts the number of decomposition levels to the resolution of the input image so that the low-frequency image ends up close to 64 × 64. The decomposition is invertible: the original image can be reconstructed by successively upsampling and adding the components back.

(Figure: overall model architecture of LLF-LUT)
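For reference, below is a minimal PyTorch sketch of such an adaptive, invertible pyramid; it uses plain bilinear resampling instead of the Gaussian filtering of a classical Laplacian pyramid and is meant to illustrate the idea, not to reproduce the repo's implementation. The decomposition depth is chosen so the low-frequency residual lands near the target 64 × 64 size.

import math
import torch
import torch.nn.functional as F

def build_pyramid(img, target=64):
    # img: (N, 3, H, W). The number of levels is chosen adaptively so that the
    # low-frequency residual is close to target x target.
    levels = max(1, int(round(math.log2(min(img.shape[-2:]) / target))))
    highs, cur = [], img
    for _ in range(levels):
        down = F.interpolate(cur, scale_factor=0.5, mode='bilinear', align_corners=False)
        up = F.interpolate(down, size=cur.shape[-2:], mode='bilinear', align_corners=False)
        highs.append(cur - up)   # high-frequency component at this level
        cur = down
    return highs, cur            # cur is the low-frequency image

def reconstruct(highs, low):
    # Invert the decomposition by successive upsampling and addition.
    cur = low
    for high in reversed(highs):
        cur = F.interpolate(cur, size=high.shape[-2:], mode='bilinear', align_corners=False) + high
    return cur

x = torch.rand(1, 3, 1024, 1536)
highs, low = build_pyramid(x)
print(low.shape)                                               # low-frequency image near 64 x 96
print(torch.allclose(reconstruct(highs, low), x, atol=1e-5))   # True: the decomposition is invertible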

📑Installation

Download the HDR+ and MIT-Adobe FiveK datasets from the links below:

HDR+ (original size, 4K): download (37 GB), Baiduyun (code: vcha); 480p: download (1.38 GB)

MIT-Adobe FiveK (original size, 4K): download (50 GB), Baidudisk (code: a9av); 480p: download (12.51 GB)

  • Install the conda environment
conda create -n llf-lut python=3.8.16
conda activate llf-lut
  • Install PyTorch
### example: python 3.8 + pytorch 1.10.0 + cuda 11.3
conda install pytorch==1.10.0 torchvision==0.11.0 torchaudio==0.10.0 cudatoolkit=11.3 -c pytorch -c conda-forge
  • Install trilinear_cpp
cd trilinear_cpp
sh setup.sh    # modify setup.sh to match your CUDA version

Alternatively, you can replace the compiled trilinear interpolation with torch.nn.functional.grid_sample; please refer to Image-Adaptive-3DLUT for details.
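If you take the grid_sample route, the sampling step looks roughly like the sketch below (our own minimal example under common conventions, not the Image-Adaptive-3DLUT code). Note that grid_sample reads the last grid dimension as (x, y, z), so the LUT volume axes must be ordered to match the channel order you feed in.

import torch
import torch.nn.functional as F

def apply_3d_lut(lut, img):
    # lut: (3, M, M, M) volume holding the output RGB value for each input colour bin.
    # img: (N, 3, H, W) image with values in [0, 1].
    n = img.shape[0]
    # grid_sample expects coordinates in [-1, 1] and a grid of shape (N, D_out, H_out, W_out, 3),
    # whose last dimension is read as (x, y, z), i.e. it indexes the LUT's (W, H, D) axes.
    grid = img.permute(0, 2, 3, 1).unsqueeze(1) * 2.0 - 1.0        # (N, 1, H, W, 3)
    vol = lut.unsqueeze(0).expand(n, -1, -1, -1, -1)               # (N, 3, M, M, M)
    out = F.grid_sample(vol, grid, mode='bilinear',                # trilinear for 5-D inputs
                        padding_mode='border', align_corners=True)
    return out.squeeze(2)                                          # (N, 3, H, W)

# Sanity check: an identity LUT should reproduce the input image.
lin = torch.linspace(0, 1, 17)
identity_lut = torch.stack(torch.meshgrid(lin, lin, lin, indexing='ij')).flip(0)
x = torch.rand(1, 3, 64, 64)
print(torch.allclose(apply_3d_lut(identity_lut, x), x, atol=1e-5))  # True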

✔️Pretrained Models

We release our pretrained models for the TPAMI version at GoogleDrive and Baidudisk (code: fegh). Due to company policy, we regret that we cannot release the code and pretrained models for the NeurIPS version.

🚗Run

  1. evaluate
# Modify the data path and checkpoint path in the configuration file config/eval/xxx.yml.
# Paired data evaluate
python3 eval.py
# Single data evaluate
python3 eval_single.py
  2. train
# Modify the data path in the configuration file config/train/xxx.yml.
python3 train.py

🤝 Acknowledgments

📖 Citation

If you find our LLF-LUT model useful, please consider citing 📣

@article{zhang2023lookup,
  title={Lookup table meets local Laplacian filter: pyramid reconstruction network for tone mapping},
  author={Zhang, Feng and Tian, Ming and Li, Zhiqiang and Xu, Bin and Lu, Qingbo and Gao, Changxin and Sang, Nong},
  journal={Advances in Neural Information Processing Systems},
  volume={36},
  pages={57558--57569},
  year={2023}
}

@ARTICLE{11204685,
  author={Zhang, Feng and Deng, Haoyou and Li, Zhiqiang and Li, Lida and Xu, Bin and Lu, Qingbo and Cao, Zisheng and Wei, Minchen and Gao, Changxin and Sang, Nong and Bai, Xiang},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 
  title={High-resolution Photo Enhancement in Real-time: A Laplacian Pyramid Network}, 
  year={2025},
  volume={},
  number={},
  pages={1-15},
  doi={10.1109/TPAMI.2025.3622041}}

📧Contact

If you have any questions, feel free to email fengzhangaia@gmail.com.
