Zhibing Li1,
Tong Wu1 †,
Jing Tan1,
Mengchen Zhang2,3,
Jiaqi Wang3,
Dahua Lin1,3 †
1The Chinese University of Hong Kong
2Zhejiang University
3Shanghai AI Laboratory
†: Corresponding Authors
- Release inference code and pretrained checkpoints.
- Release training code.
Our environment has been tested on CUDA 11.8 with an A100 GPU.
git clone git@github.com:Lizb6626/IDArb.git && cd IDArb
conda create -n idarb python==3.8 -y
conda activate idarb
conda install pytorch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install -r requirements.txt
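After installation, it can be worth confirming that the environment is usable before running inference. The snippet below is a hypothetical sanity check, not part of the repo: it just verifies that PyTorch imports and reports whether CUDA is visible.

```shell
# Hypothetical sanity check (not part of IDArb): verify torch imports and CUDA is visible.
python - <<'EOF'
try:
    import torch
    print("torch", torch.__version__, "| cuda available:", torch.cuda.is_available())
except ImportError:
    print("torch not installed -- did you activate the idarb env?")
EOF
```

If CUDA reports as unavailable, double-check that the `pytorch-cuda=11.8` package matches your driver version.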
python main.py --data_dir example/single --output_dir output/single --input_type single
For multi-view intrinsic decomposition, camera poses can be incorporated by enabling the `--cam` option.
# --num_views: number of input views
# Without camera pose information
python main.py --data_dir example/multi --output_dir output/multi --input_type multi --num_views 4
# With camera pose information
python main.py --data_dir example/multi --output_dir output/multi --input_type multi --num_views 4 --cam
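To process both bundled examples in one pass, the commands above can be wrapped in a small loop. This is a sketch that assumes the `example/single` and `example/multi` layouts shipped with the repo; it prints each command via `echo` so you can inspect (or pipe to `bash`) before running.

```shell
# Sketch: print the inference commands for both example modes (paths assumed from this README).
for mode in single multi; do
  extra=""
  # The multi-view example in this README uses 4 input views.
  if [ "$mode" = "multi" ]; then
    extra="--num_views 4"
  fi
  echo python main.py --data_dir "example/$mode" --output_dir "output/$mode" --input_type "$mode" $extra
done
```

Append `--cam` to the multi-view command if your data includes camera poses.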
This project relies on many amazing repositories. Thanks to the authors for sharing their code and data.
@article{li2024idarb,
  author  = {Li, Zhibing and Wu, Tong and Tan, Jing and Zhang, Mengchen and Wang, Jiaqi and Lin, Dahua},
  title   = {IDArb: Intrinsic Decomposition for Arbitrary Number of Input Views and Illuminations},
  journal = {arXiv preprint arXiv:2412.12083},
  year    = {2024},
}