Relighting Humans in the Wild: Monocular Full-Body Human Relighting with Domain Adaptation

(teaser figure)

This code is our implementation of the following paper:

Daichi Tajima, Yoshihiro Kanamori, Yuki Endo: "Relighting Humans in the Wild: Monocular Full-Body Human Relighting with Domain Adaptation," Computer Graphics Forum (Proc. of Pacific Graphics 2021), 2021. [Project][PDF]

Prerequisites

Run the following command to install all required pip packages.

pip3 install -r requirements.txt
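
If you plan to run the demos on a GPU, a quick sanity check like the following (a minimal sketch; it assumes PyTorch is among the installed requirements) confirms the install and whether CUDA is visible:

import torch

# Print the installed PyTorch version and whether a CUDA device is visible.
print('torch', torch.__version__)
print('CUDA available:', torch.cuda.is_available())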

Demo

  1. Make a "trained_models" directory in the parent directory.
  2. Download our two pre-trained models and put "model_1st.pth" and "model_2nd.pth" into the "trained_models" directory.

Applying to images

To relight the images under ./data/sample_images, run the following command:

sh ./scripts/demo_image.sh ./data/sample_images

The relighting results will be saved in ./demo/relighting_image/2nd. NOTE: To change the light used for relighting, edit the script directly.
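
The light itself is an SH coefficient file like the "YYY.npy" lights used for training. As one way to prepare a custom light, the sketch below builds and saves such a file; the file name new_light.npy and the 9x3 layout (second-order SH, one column per RGB channel) are assumptions, so check the provided lights for the exact format the scripts expect:

import numpy as np

# Hypothetical example: a 9x3 array of second-order SH coefficients
# (9 basis functions, one column per RGB channel).
sh = np.zeros((9, 3), dtype=np.float32)
sh[0] = [0.8, 0.8, 0.8]   # DC term: overall ambient intensity
sh[1] = [0.3, 0.25, 0.2]  # a linear term: warm light from one direction
np.save('./data/new_light.npy', sh)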

Applying to videos

To relight the video frames under ./data/test_video/sample_frames, run the following command:

sh ./scripts/demo_video.sh ./data/test_video/sample_frames

You can check the output video for each epoch in the ./demo/relighting_video/flicker_reduction directory. Please stop the training manually (with Ctrl-C) before noise appears in the result; for the test video, we stopped at epoch 11 to create our result. NOTE: To change the light used for relighting, edit the script directly.
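
To try the demo on your own footage, the clip first has to be split into individual frame images like those in sample_frames. Below is a minimal sketch (assuming OpenCV is installed; the video path, output directory, and %05d.png naming are placeholders, so match them to the sample frames):

import os
import cv2

video_path = './data/test_video/my_video.mp4'   # hypothetical input clip
out_dir = './data/test_video/my_frames'         # hypothetical frame directory
os.makedirs(out_dir, exist_ok=True)

# Decode the video and write each frame as a numbered PNG.
cap = cv2.VideoCapture(video_path)
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(os.path.join(out_dir, '%05d.png' % idx), frame)
    idx += 1
cap.release()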

Training

1st stage network

  1. Prepare the following datasets.
  • Put the binary masks ("XXX_mask.png"), albedo maps ("XXX_tex.png"), transport maps ("XXX_transport.npz"), and skin masks ("XXX_parsing.png") from 3D models in ./data/train_human_1st and ./data/test_human_1st (see the data-format sketch after the command below).
  • Put the SH lights ("YYY.npy") from environment maps in ./data/train_light_1st and ./data/test_light_1st.
  2. Run train_1st.py.
python3 train_1st.py --train_dir ./data/train_human_1st --test_dir ./data/test_human_1st --train_light_dir ./data/train_light_1st --test_light_dir ./data/test_light_1st --out_dir ./result/output_1st
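
To make the data formats above concrete, the sketch below combines a transport map with an SH light into a shading image; the .npz key 'T', the HxWx9 transport shape, and the 9x3 light shape are assumptions about the file layout, so verify them against the dataset code:

import numpy as np

# Per-pixel shading as the dot product of a 9-dim light-transport vector
# with 9x3 SH lighting coefficients (one column per RGB channel).
transport = np.load('./data/train_human_1st/XXX_transport.npz')['T']  # assumed key and shape (H, W, 9)
light = np.load('./data/train_light_1st/YYY.npy')                     # assumed shape (9, 3)

h, w = transport.shape[:2]
shading = (transport.reshape(-1, 9) @ light).reshape(h, w, 3)
# A Lambertian composite would then be relit = albedo * shading,
# with the albedo read from the corresponding 'XXX_tex.png'.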

2nd stage network

  1. Reconstruct the real-photo dataset with a trained 1st-stage model.
python3 make_dataset_2nd.py --in_dir ./data/real_photo_dataset --out_dir_train ./data/train_human_2nd --out_dir_test ./data/test_human_2nd --model_path ./trained_models/model_1st.pth

NOTE: The real-photo dataset will be published soon.

  2. Run train_2nd.py.
python3 train_2nd.py --train_dir ./data/train_human_2nd --test_dir ./data/test_human_2nd --out_dir ./result/output_2nd

Citation

Please cite our paper if you find the code useful:

@article{tajimaPG21,
  author    = {Daichi Tajima and
               Yoshihiro Kanamori and
               Yuki Endo},
  title     = {Relighting Humans in the Wild: Monocular Full-Body Human Relighting with Domain Adaptation},
  journal   = {Computer Graphics Forum (Proc. of Pacific Graphics 2021)},
  volume    = {40},
  number    = {7},
  pages     = {205--216},
  year      = {2021}
}

LICENSE

We distribute our source code and pre-trained models for research purposes only under the CC BY-NC-SA 4.0 license. Commercial use without our permission is prohibited.
