
TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up

Code used for TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up.

Implementation

  • Gradient checkpointing via torch.utils.checkpoint
  • 16-bit (mixed-precision) training
  • Distributed training (faster!)
  • IS/FID evaluation
  • Gradient accumulation

Minimal PyTorch sketches of these features follow below.
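
A minimal sketch of the gradient-checkpointing item. The `Block` and `CheckpointedStack` classes are illustrative stand-ins, not the repo's actual modules:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Block(nn.Module):
    """Stand-in block; TransGAN's real transformer blocks add self-attention."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.net(x)

class CheckpointedStack(nn.Module):
    """Trades compute for memory: activations are recomputed in the backward pass."""
    def __init__(self, dim, depth, use_checkpoint=True):
        super().__init__()
        self.blocks = nn.ModuleList(Block(dim) for _ in range(depth))
        self.use_checkpoint = use_checkpoint

    def forward(self, x):
        for block in self.blocks:
            if self.use_checkpoint and self.training and x.requires_grad:
                # Only the block's inputs are stored; its activations are
                # rebuilt during backward, lowering peak memory.
                x = checkpoint(block, x)
            else:
                x = block(x)
        return x

stack = CheckpointedStack(dim=64, depth=4)
x = torch.randn(2, 16, 64, requires_grad=True)
stack(x).sum().backward()  # backward works as usual, with lower peak memory
```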
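A sketch combining 16-bit (AMP) training with gradient accumulation. The model, optimizer, and batches here are dummies, and a CUDA device is assumed:

```python
import torch
import torch.nn as nn
from torch.cuda.amp import GradScaler, autocast

model = nn.Linear(128, 1).cuda()                 # stand-in for G or D
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = GradScaler()                            # scales losses to avoid fp16 underflow
accum_steps = 4                                  # effective batch = micro-batch * 4

optimizer.zero_grad()
for step in range(100):
    x = torch.randn(8, 128, device="cuda")       # dummy micro-batch
    with autocast():                             # 16-bit forward/loss
        loss = model(x).pow(2).mean() / accum_steps  # average across micro-batches
    scaler.scale(loss).backward()                # gradients accumulate across steps
    if (step + 1) % accum_steps == 0:
        scaler.step(optimizer)                   # unscales grads, then steps
        scaler.update()
        optimizer.zero_grad()
```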
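A minimal DistributedDataParallel sketch for the distributed-training item; launch with e.g. `torchrun --nproc_per_node=4 train.py` (the script name is illustrative):

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")        # torchrun sets rank/world size
    local_rank = int(os.environ["LOCAL_RANK"])     # one process per GPU
    torch.cuda.set_device(local_rank)

    model = DDP(nn.Linear(128, 1).cuda(), device_ids=[local_rank])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    x = torch.randn(8, 128, device="cuda")
    loss = model(x).pow(2).mean()
    loss.backward()                                # gradients all-reduced across ranks
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```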
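The repo implements its own IS/FID evaluation; for illustration only, the same metrics can be computed with torchmetrics (a swapped-in library, not what this repo uses):

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.inception import InceptionScore

# Both metrics expect uint8 images in [0, 255], shape (N, 3, H, W);
# random tensors here stand in for real and generated samples.
real = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
fake = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)

fid = FrechetInceptionDistance(feature=2048)
fid.update(real, real=True)
fid.update(fake, real=False)
print("FID:", fid.compute().item())

inception = InceptionScore()
inception.update(fake)
mean, std = inception.compute()
print("IS:", mean.item(), "+/-", std.item())
```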

Main Pipeline


Representative Visual Results

(Figures: CIFAR-10 visual results.)

The README is still being updated.

Acknowledgement

The codebase is built on AutoGAN and pytorch-image-models.

Citation

If you find this repo helpful, please cite:

@article{jiang2021transgan,
  title={TransGAN: Two Transformers Can Make One Strong GAN},
  author={Jiang, Yifan and Chang, Shiyu and Wang, Zhangyang},
  journal={arXiv preprint arXiv:2102.07074},
  year={2021}
}
