This repository contains the implementation of the paper "MAIR: Multi-view Attention Inverse Rendering with 3D Spatially-Varying Lighting Estimation" presented at CVPR 2023.
- 2024-06-03: MAIR initial commit (test code only)
- 2024-09-12: Object insertion script update
Our extended work, MAIR++, is currently under review; the code for MAIR++ will be released soon.
This code has been verified with CUDA 11.8, but these exact PyTorch and CUDA versions are not strictly required.
- Create and activate the conda environment:
  ```
  conda create -n MAIR python=3.9
  conda activate MAIR
  ```
- Install PyTorch and CUDA:
  ```
  conda install pytorch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 pytorch-cuda=11.8 -c pytorch -c nvidia
  ```
- Install additional Python packages:
  ```
  pip install tqdm termcolor scikit-image imageio nvidia-ml-py3 h5py wandb opencv-python trimesh[easy] einops
  ```
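To verify the installation, a quick sanity check (a minimal sketch using only standard PyTorch calls) confirms that the installed build matches the CUDA toolkit and sees a GPU:

```python
import torch

# Check that the PyTorch/CUDA pairing from the install step is active.
print(torch.__version__)          # expected: 2.2.0
print(torch.version.cuda)         # expected: 11.8
print(torch.cuda.is_available())  # should print True on a working GPU setup
```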
To insert objects into a scene, modify the input and output directories in the provided scripts (a hypothetical sketch of such paths follows below). Example real-world data from IBRNet is included in `Examples/`.
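The snippet below is a purely hypothetical illustration of the kind of path variables to look for near the top of each script; the real variable names differ per script:

```python
# Hypothetical example -- the actual variable names differ per script.
input_dir = "Examples/my_scene"       # multi-view input images
output_dir = "Examples/my_scene/out"  # poses, depth, material, and lighting are written here
```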
Run the following script to get the camera poses:
```
python cds-mvsnet/img2poses.py
```
Note: You need to install COLMAP and ImageMagick to extract camera poses and resize images.
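If you want to sanity-check the recovered poses, the sketch below parses COLMAP's standard text export into world-to-camera `[R|t]` matrices. The `images.txt` location is an assumption (COLMAP usually writes it under `sparse/0/`); img2poses.py may store poses in a different format:

```python
import numpy as np

def qvec2rotmat(q):
    """Convert a COLMAP quaternion (qw, qx, qy, qz) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*z*w,     2*x*z + 2*y*w],
        [2*x*y + 2*z*w,     1 - 2*x*x - 2*z*z, 2*y*z - 2*x*w],
        [2*x*z - 2*y*w,     2*y*z + 2*x*w,     1 - 2*x*x - 2*y*y],
    ])

def read_colmap_poses(path="sparse/0/images.txt"):
    """Parse COLMAP's images.txt into {image_name: 3x4 world-to-camera [R|t]}."""
    with open(path) as f:
        lines = [l.strip() for l in f if l.strip() and not l.startswith("#")]
    poses = {}
    # images.txt stores two lines per image; the first line holds the pose:
    # IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME
    for line in lines[::2]:
        elems = line.split()
        qvec = np.array(list(map(float, elems[1:5])))
        tvec = np.array(list(map(float, elems[5:8])))
        poses[elems[9]] = np.concatenate([qvec2rotmat(qvec), tvec[:, None]], axis=1)
    return poses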
Run the following script to predict depth and confidence maps:
```
python cds-mvsnet/img2depth.py
```
We use CDS-MVSNet. Special thanks to the authors for sharing the code.
Download the pretrained model and place it in `cds-mvsnet/pretrained/`.
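After running img2depth.py, you can sanity-check a depth/confidence pair. The sketch below assumes `.npy` outputs with illustrative file names; adjust both to whatever img2depth.py actually writes:

```python
import numpy as np
import cv2

# Assumed file names and format -- adapt to the actual outputs of img2depth.py.
depth = np.load("Examples/my_scene/depth/0000.npy")  # HxW depth map
conf = np.load("Examples/my_scene/conf/0000.npy")    # HxW confidence in [0, 1]

# Keep only pixels the network is confident about.
valid = conf > 0.5
depth_vis = np.where(valid, depth, 0.0)

# Normalize for viewing and save a quick visualization.
depth_vis = depth_vis / max(depth_vis.max(), 1e-8)
cv2.imwrite("depth_vis.png", (depth_vis * 255).astype(np.uint8))
```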
Run the following script to predict geometry, materials, and the 3D spatially-varying lighting volume:
```
python test.py
```
Download the pretrained model and place it in `pretrained/MAIR/`.
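Once test.py finishes, you can inspect the predicted material maps. The sketch below assumes linear-RGB `.npy` outputs with illustrative file names, not the repository's actual ones:

```python
import numpy as np
import cv2

# Illustrative file names -- check the actual output directory of test.py.
albedo = np.load("out/albedo.npy")        # HxWx3, linear RGB
roughness = np.load("out/roughness.npy")  # HxW in [0, 1]

# Gamma-encode the linear albedo for display (simple 1/2.2 tone mapping),
# and convert RGB to BGR for OpenCV before saving.
albedo_srgb = np.clip(albedo, 0.0, 1.0) ** (1.0 / 2.2)
cv2.imwrite("albedo_vis.png", (albedo_srgb[..., ::-1] * 255).astype(np.uint8))
cv2.imwrite("roughness_vis.png", (np.clip(roughness, 0, 1) * 255).astype(np.uint8))
```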
Run the following script to insert an object into the scene:
```
python ObjectInsertion/oi_main.py
```
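Object insertion pipelines of this kind typically blend the rendered object into the photograph with Debevec-style differential rendering. The sketch below shows that general composite; it is not necessarily the exact logic of oi_main.py:

```python
import numpy as np

def differential_composite(background, render_with_obj, render_empty, obj_mask):
    """Debevec-style differential rendering composite.

    background:      original photo (HxWx3, linear RGB)
    render_with_obj: scene rendered with the inserted object
    render_empty:    scene rendered without the object
    obj_mask:        HxW, 1 where the inserted object is visible, else 0
    """
    obj_mask = obj_mask[..., None].astype(np.float32)
    # Inside the mask: take the rendered object directly. Outside: add the
    # object's effect on the scene (shadows, interreflections) to the photo.
    return (obj_mask * render_with_obj
            + (1.0 - obj_mask) * (background + render_with_obj - render_empty))
```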