arXiv Paper | CVPR Paper | Poster | BibTeX
Learning Pyramid-Context Encoder Network for High-Quality Image Inpainting
Yanhong Zeng, Jianlong Fu, Hongyang Chao, and Baining Guo.
In CVPR 2019.
Existing inpainting works fill missing regions either by copying fine-grained image patches or by generating semantically reasonable content (with a CNN) from the region context, neglecting the fact that both visual and semantic plausibility are demanded.
Our proposal combines these two mechanisms:
- Cross-Layer Attention Transfer (ATN). We use the region affinity learned from high-level feature maps to guide feature transfer in adjacent low-level layers of the encoder (see the sketch after this list).
- Pyramid Filling. We fill holes multiple times (depending on the depth of the encoder) by applying ATNs from deep layers to shallow ones.
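Below is a minimal PyTorch sketch of the cross-layer attention transfer idea: region affinity is computed on the high-level features and reused to recombine patches of the adjacent lower-level features. All tensor names are illustrative, and the real ATN differs in details (patch-based affinity, learnable filters, multi-scale rates), so treat this as an assumption-laden sketch rather than the repo's implementation.

```python
import torch
import torch.nn.functional as F

def attention_transfer(high_feat, low_feat, mask):
    """Sketch of cross-layer attention transfer (illustrative names).

    high_feat: (B, C, H, W)    high-level (semantic) features
    low_feat:  (B, C2, 2H, 2W) adjacent lower-level (textural) features
    mask:      (B, 1, H, W)    1 inside holes, 0 in known context
    """
    B, C, H, W = high_feat.shape
    # Region affinity: cosine similarity between all pairs of locations
    # in the high-level feature map.
    flat = F.normalize(high_feat.view(B, C, H * W), dim=1)   # (B, C, HW)
    affinity = torch.bmm(flat.transpose(1, 2), flat)         # (B, HW, HW)
    # Attend only to known (context) locations.
    context = (1.0 - mask).view(B, 1, H * W)
    affinity = affinity.masked_fill(context == 0, -1e9)
    attn = F.softmax(affinity, dim=-1)

    # Reuse the affinity to recombine 2x2 patches of the lower-level
    # features (which are twice the spatial size of high_feat).
    patches = F.unfold(low_feat, kernel_size=2, stride=2)    # (B, 4*C2, HW)
    filled = torch.bmm(patches, attn.transpose(1, 2))        # (B, 4*C2, HW)
    filled = F.fold(filled, output_size=low_feat.shape[-2:],
                    kernel_size=2, stride=2)                 # (B, C2, 2H, 2W)
    # Keep known pixels, fill only the holes.
    up_mask = F.interpolate(mask, scale_factor=2, mode='nearest')
    return low_feat * (1.0 - up_mask) + filled * up_mask
```

Pyramid filling applies this operation repeatedly, starting from the deepest features and moving toward shallower ones, so each level is filled under the guidance of the level above it.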
We re-implement PEN-Net in PyTorch for faster speed; it differs slightly from the original TensorFlow version used in our paper. Each triad below shows the original image, the masked input, and our result.
- Requirements:
- Install Python 3.6
- Install PyTorch (tested on release 1.1.0)
- Training:
- Prepare the training image filelist [our split]
- Modify configs/celebahq.json to set the data path, the number of iterations, and other parameters.
- Our code is built on PyTorch distributed training (a generic initialization sketch follows the example commands below).
- Run
python train.py -c [config_file] -n [model_name] -m [mask_type] -s [image_size]
For example,
python train.py -c configs/celebahq.json -n pennet -m square -s 256
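Since training is distributed, each worker process initializes a PyTorch process group before training starts. Below is a generic, illustrative sketch of such initialization (the address, port, and backend are assumptions); train.py's actual setup may differ.

```python
import torch
import torch.distributed as dist

def setup_worker(rank, world_size):
    # Generic PyTorch distributed initialization (illustrative only;
    # the rendezvous address and NCCL backend here are assumptions).
    dist.init_process_group(backend='nccl',
                            init_method='tcp://127.0.0.1:23456',
                            world_size=world_size, rank=rank)
    torch.cuda.set_device(rank)
```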
- Resume training:
- Run
python train.py -n pennet -m square -s 256
- Testing:
- Run
python test.py -c [config_file] -n [model_name] -m [mask_type] -s [image_size]
For example,
python test.py -c configs/celebahq.json -n pennet -m square -s 256
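Here -m square requests square hole masks. As an illustration, a centered square mask and the corresponding masked input can be built as below; the helper name and hole size are hypothetical, not the repo's actual mask generator.

```python
import torch

def center_square_mask(size=256, hole=128):
    # Hypothetical helper: centered square hole (1 = missing, 0 = known).
    mask = torch.zeros(1, 1, size, size)
    t = (size - hole) // 2
    mask[:, :, t:t + hole, t:t + hole] = 1.0
    return mask

image = torch.rand(1, 3, 256, 256)               # stand-in for a real image
masked_input = image * (1.0 - center_square_mask())
```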
- Evaluating:
- Run
python eval.py -r [result_path]
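eval.py compares inpainted results with the originals under [result_path]. For reference, PSNR, one metric commonly reported for inpainting, can be computed as in this sketch (illustrative only, not the script's exact implementation).

```python
import numpy as np

def psnr(result, target, peak=255.0):
    # Peak signal-to-noise ratio between two equally-sized uint8 images.
    mse = np.mean((result.astype(np.float64) - target.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```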
- Pretrained models:
CELEBA-HQ | DTD | Facade | Places2
Download the model directories and put them under release_model/
TensorBoard visualization of training is supported.
Run tensorboard --logdir release_model --port 6006
to view training progress.
If any part of our paper or code is helpful to your work, please cite:
@inproceedings{yan2019PENnet,
author = {Zeng, Yanhong and Fu, Jianlong and Chao, Hongyang and Guo, Baining},
title = {Learning Pyramid-Context Encoder Network for High-Quality Image Inpainting},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
pages = {1486--1494},
year = {2019}
}
Licensed under the MIT License.