Official implementation of "Global and local attention-based free-form image inpainting"

I am currently refactoring the code for the latest version of PyTorch. I will update the code and upload the pretrained models (e.g., Places and CelebA) soon. Apologies for the inconvenience.

Please check out "sensor_version" for the Places2 weights. Please let me know if you face any issues.

This is the official implementation of the paper "Global and Local Attention-Based Free-Form Image Inpainting", published in Sensors (paper). We are currently reformatting the code and will upload the pretrained models soon.

Prerequisite

  • Python3
  • PyTorch 1.0+ (the code works up to PyTorch 1.4; there seems to be an autograd problem with PyTorch 1.5, and I will update the code for PyTorch 1.5 after finding the underlying issue. A quick version check is sketched after this list.)
  • Torchvision 0.2+
  • PyYaml
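
The version constraints above can be checked at runtime. Below is a minimal sketch; the bounds simply mirror the note above, and nothing here is specific to this repository.

```python
# Minimal environment check mirroring the version notes above (sketch only).
import torch
import torchvision
import yaml  # confirms PyYaml is installed

major, minor = (int(v) for v in torch.__version__.split(".")[:2])
assert (major, minor) >= (1, 0), "PyTorch 1.0+ is required."
if (major, minor) >= (1, 5):
    print("Warning: PyTorch 1.5+ reportedly triggers an autograd issue with this code; "
          "1.0-1.4 is the tested range.")
print("torch", torch.__version__, "| torchvision", torchvision.__version__)
```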

Citation

If you find our paper and code beneficial for your work, please consider citing us!

@article{uddin2020global,
  title={Global and Local Attention-Based Free-Form Image Inpainting},
  author={Uddin, SM and Jung, Yong Ju},
  journal={Sensors},
  volume={20},
  number={11},
  pages={3204},
  year={2020},
  publisher={Multidisciplinary Digital Publishing Institute}
}

How to train

  • Set the dataset directory path in "configs/config.yaml" (a sketch of reading these options follows this list).
    -- Set the dataset name, if needed.
    -- If the dataset has subfolders, set "data_with_subfolder" to "True".
  • Run python train.py --config configs/config.yaml
  • To resume training, set "resume" to True in "configs/config.yaml". Note that checkpoints are currently overwritten; the updated code will keep a list of checkpoints instead.
  • To view training, run
    tensorboard --logdir checkpoints/DATASET_NAME/hole_benchmark
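
The exact schema of "configs/config.yaml" is defined in the repository itself; the sketch below only illustrates, using PyYaml, how the options named above could be inspected. Only "data_with_subfolder" and "resume" are taken from this README; the other key names are hypothetical placeholders.

```python
# Hedged sketch: inspect configs/config.yaml with PyYaml. Only
# "data_with_subfolder" and "resume" are named in this README; the other
# key names below are hypothetical placeholders.
import yaml

with open("configs/config.yaml", "r") as f:
    config = yaml.safe_load(f)

print("dataset path:", config.get("train_data_path"))             # hypothetical key name
print("dataset name:", config.get("dataset_name"))                # hypothetical key name
print("data_with_subfolder:", config.get("data_with_subfolder"))  # named above
print("resume:", config.get("resume"))                            # named above
```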

How to test

  • Modify "test_single.py" as needed and run it (a hedged inference sketch is shown after this list).
  • Bulk testing code will be uploaded soon.
  • Pretrained models will be uploaded soon.
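
Until the bulk-testing script and pretrained models are released, single-image inference follows the usual free-form inpainting pattern: feed the masked image and the binary mask to the generator, then composite the prediction back into the hole. The sketch below only illustrates that pattern; the generator import, its call signature, and the file paths are placeholders, not the repository's actual API (see "test_single.py" for the real entry point).

```python
# Hedged sketch of single-image inpainting inference. The generator import,
# its call signature, and the paths are placeholders; see test_single.py in
# this repo for the actual interface.
import torch
import torchvision.transforms.functional as TF
from PIL import Image

# from model.networks import Generator   # placeholder import; adjust to the repo
# netG = Generator(...); netG.load_state_dict(torch.load("checkpoint.pt")); netG.eval()

def inpaint_single(netG, image_path, mask_path, device="cuda"):
    image = TF.to_tensor(Image.open(image_path).convert("RGB")).unsqueeze(0).to(device)
    mask = TF.to_tensor(Image.open(mask_path).convert("L")).unsqueeze(0).to(device)
    mask = (mask > 0.5).float()              # 1 inside the hole, 0 elsewhere
    masked = image * (1.0 - mask)            # remove the hole region from the input
    with torch.no_grad():
        prediction = netG(masked, mask)      # placeholder call signature
    # keep the known pixels, take the prediction only inside the hole
    completed = image * (1.0 - mask) + prediction * mask
    return completed.clamp(0, 1).cpu()
```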

Some Results

  • Places dataset (result figure)
  • ImageNet dataset (result figure)
  • CelebA dataset (result figure)
  • Ablation study of the modules (result figure)

Acknowledgement

  • Code base: This code is heavily based on this repo. Kudos!
