Hongbi Zhou and Zhangkai Ni
Tongji University
This repository provides the official PyTorch implementation for the paper "Perceptual-GS: Scene-adaptive Perceptual Densification for Gaussian Splatting," ICML 2025. [Paper]
3D Gaussian Splatting (3DGS) has emerged as a powerful technique for novel view synthesis. However, existing methods struggle to adaptively optimize the distribution of Gaussian primitives based on scene characteristics, making it challenging to balance reconstruction quality and efficiency. Inspired by human perception, we propose scene-adaptive perceptual densification for Gaussian Splatting (Perceptual-GS), a novel framework that integrates perceptual sensitivity into the 3DGS training process to address this challenge. We first introduce a perception-aware representation that models human visual sensitivity while constraining the number of Gaussian primitives. Building on this foundation, we develop a perceptual sensitivity-adaptive distribution to allocate finer Gaussian granularity to visually critical regions, enhancing reconstruction quality and robustness. Extensive evaluations on multiple datasets, including BungeeNeRF for large-scale scenes, demonstrate that Perceptual-GS achieves state-of-the-art performance in reconstruction quality, efficiency, and robustness.
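As a rough illustration of what a perceptual sensitivity map captures (a minimal sketch only, assuming a simple gradient-based contrast measure rather than the paper's actual formulation), the hypothetical helper below marks high-contrast regions of a view as visually critical:

```python
# Minimal sketch (assumption): a perceptual-sensitivity proxy built from
# Sobel gradient magnitude, so high-contrast regions score close to 1.
# This is NOT the paper's formulation, only an illustration of the idea.
import torch
import torch.nn.functional as F

def sensitivity_map(image: torch.Tensor) -> torch.Tensor:
    """image: (3, H, W) tensor in [0, 1]; returns an (H, W) map in [0, 1]."""
    gray = image.mean(dim=0, keepdim=True).unsqueeze(0)        # (1, 1, H, W)
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]]).view(1, 1, 3, 3)        # Sobel x
    ky = kx.transpose(2, 3)                                    # Sobel y
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    mag = torch.sqrt(gx ** 2 + gy ** 2).squeeze(0).squeeze(0)  # gradient magnitude
    return mag / (mag.max() + 1e-8)                            # normalize to [0, 1]

if __name__ == "__main__":
    view = torch.rand(3, 256, 256)          # stand-in for a training view
    print(sensitivity_map(view).shape)      # torch.Size([256, 256])
```

In the actual framework, such per-view sensitivity maps guide where Gaussian primitives are densified, so finer granularity is allocated to the visually critical regions.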
- Quantitative results on reconstruction quality, comparing our method with state-of-the-art methods in terms of PSNR↑, SSIM↑, and LPIPS↓. The best, second-best, and third-best results are highlighted.
- Quantitative results on reconstruction efficiency, comparing our method with state-of-the-art methods in terms of the number of Gaussian primitives (#G)↓ and rendering speed (FPS)↑.
- Quantitative results of the proposed method applied to different base models on Mip-NeRF 360, Tanks & Temples, and Deep Blending. Metrics are averaged across the scenes; improvements and reductions are highlighted.
- Quantitative results of the proposed method applied to different base models on BungeeNeRF. We report metrics averaged over the dataset and for three individual scenes.
- Quantitative results of the proposed method applied to CoR-GS on the 24-view Mip-NeRF 360 setting. Metrics are averaged across the scenes.
To start, we recommend creating the environment with conda:
```bash
conda create -n perceptual_gs python=3.7
conda activate perceptual_gs
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
```
Getting the data (we follow the suggestions from HAC):
- The Mip-NeRF 360 scenes are provided by the paper authors here. We test on all 9 scenes: `bicycle, bonsai, counter, garden, kitchen, room, stump, flowers, treehill`.
- The SfM datasets for Tanks & Temples and Deep Blending are hosted by 3D-Gaussian-Splatting here. Download and uncompress them into the `data/` folder (see the layout sketch below).
- The BungeeNeRF dataset is available via Google Drive / Baidu Netdisk [extraction code: 4whv].
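After downloading, the expected layout under `data/` follows the usual COLMAP/3DGS-style convention (the tree below is an assumption for illustration; adjust scene names to the datasets you actually use):

```
data/
├── bicycle/              # one folder per scene
│   ├── images/           # input views
│   └── sparse/0/         # COLMAP SfM output (cameras, points)
├── truck/
│   └── ...
└── ...
```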
Before training, you should run `preprocess.py` to generate perceptual sensitivity maps:

```bash
python preprocess.py
```
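As a quick sanity check after preprocessing, one generated map can be loaded and inspected; the snippet below is hypothetical and assumes the maps are saved as per-view grayscale images (the actual output path and format are defined by `preprocess.py`):

```python
# Hypothetical check (assumed path and format): load one generated
# sensitivity map and confirm it is a single-channel map in [0, 1].
import numpy as np
from PIL import Image

mask = Image.open("data/bicycle/sensitivity/00001.png").convert("L")  # assumed location
mask = np.asarray(mask, dtype=np.float32) / 255.0
print(mask.shape, float(mask.min()), float(mask.max()))               # (H, W), ~[0, 1]
```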
Then, all scenes can be trained, rendered, and evaluated with the following command:

```bash
python run.py
```
This project is built upon 3DGS; please follow the 3DGS license. We thank all the authors for their great work and repositories.
Thanks for your attention! If you have any suggestions or questions, feel free to leave a message here or contact Dr. Zhangkai Ni (eezkni@gmail.com).