Code for virtual try-on with high-fidelity details. The code was developed and tested with PyTorch 0.4.1.
- Clone this repo
git clone https://github.com/AIprogrammer/Detailed-virtual-try-on.git
cd Detailed-virtual-try-on
- Download our pretrained models from Google Drive, and put them in "./pretrained_checkpoint".
- We provide a demo model, as well as some samples in "./dataset/images". Triplets consisting of a source image, a target pose, and a target cloth are provided in "./demo/demo.txt".
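The exact column layout of "demo.txt" is defined by the repo; as a rough sketch, assuming each line simply lists the three file names separated by whitespace, the triplets could be read like this:

```python
# Hypothetical sketch: assumes each line of demo/demo.txt holds whitespace-separated
# paths for (source image, target pose, target cloth); adjust to the actual format.
with open('demo/demo.txt') as f:
    triplets = [line.split()[:3] for line in f if line.strip()]
for source_img, target_pose, target_cloth in triplets:
    print(source_img, target_pose, target_cloth)
```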
- Quickly test the demo and check the result in "./demo/forward/0.jpg" by running
sh demo.sh
- Download the MPV dataset from Image-based Multi-pose Virtual Try On and put the dataset under "./dataset/images/".
- Select positive perspective (front-view) images, create the dataset split file "data_pair.txt", and put it under "./dataset/".
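The loader's expected layout for "data_pair.txt" is not spelled out here; the snippet below is only a hypothetical sketch that writes one whitespace-separated pair per line, with placeholder file names you would replace by your own front-view selections:

```python
# Hypothetical sketch: one "person_image cloth_image" pair per line, whitespace-separated.
# Both the pairing rule and the two-column layout are assumptions; adapt them to the
# format the training code actually reads.
pairs = [
    ('front_view_person.jpg', 'in_shop_cloth.jpg'),  # placeholder names
]
with open('dataset/data_pair.txt', 'w') as f:
    for person, cloth in pairs:
        f.write('{} {}\n'.format(person, cloth))
```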
- Pose keypoints. Use OpenPose to extract pose keypoints, and put the keypoint files in "./dataset/pose_coco".
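The folder name "pose_coco" suggests COCO-format (18-point) body keypoints. A minimal loading sketch, assuming OpenPose wrote one JSON file per image with its standard "people"/"pose_keypoints" layout:

```python
import json
import numpy as np

def load_openpose_keypoints(json_path, num_points=18):
    """Return an (N, 3) array of (x, y, confidence) keypoints for the first detected person."""
    with open(json_path) as f:
        data = json.load(f)
    if not data.get('people'):
        return np.zeros((num_points, 3))  # no person detected in the image
    person = data['people'][0]
    # Older OpenPose releases use 'pose_keypoints'; newer ones use 'pose_keypoints_2d'.
    flat = person.get('pose_keypoints', person.get('pose_keypoints_2d'))
    return np.array(flat, dtype=np.float32).reshape(-1, 3)
```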
- Semantic parsing. Use CIHP_PGN to obtain human parsing results, and put them in "./dataset/parse_cihp".
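CIHP_PGN typically saves per-pixel label maps as single-channel PNGs. A small sketch for reading one and pulling out a clothing mask; the label index 5 for upper clothes follows the common CIHP/LIP convention and is an assumption you may need to adjust:

```python
import numpy as np
from PIL import Image

def load_parsing(png_path, upper_clothes_label=5):
    """Load a parsing label map and derive a binary upper-clothes mask."""
    parse = np.array(Image.open(png_path))                    # (H, W) integer labels
    clothes_mask = (parse == upper_clothes_label).astype(np.uint8)
    return parse, clothes_mask
```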
- Cloth mask. Use the GrabCut method to extract the cloth mask, and put the masks in "./dataset/cloth_mask".
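This is not the authors' exact preprocessing script, but a minimal OpenCV GrabCut sketch that works reasonably for in-shop cloth images on a plain background; the border margin and iteration count are assumptions to tune:

```python
import cv2
import numpy as np

def cloth_mask_grabcut(cloth_path, margin=10, iters=5):
    """Segment the cloth from its background with GrabCut, initialised by a rectangle."""
    img = cv2.imread(cloth_path)
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    # Everything outside this rectangle is treated as definite background.
    rect = (margin, margin, img.shape[1] - 2 * margin, img.shape[0] - 2 * margin)
    cv2.grabCut(img, mask, rect, bgd_model, fgd_model, iters, cv2.GC_INIT_WITH_RECT)
    # Keep definite and probable foreground pixels as the cloth mask.
    cloth_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
    return cloth_mask.astype(np.uint8)
```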
- Download the VGG19 pretrained checkpoint
cd vgg_model/
wget https://download.pytorch.org/models/vgg19-dcbb9e9d.pth
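This is the standard torchvision VGG19 checkpoint, so it can be loaded offline into the default torchvision architecture; such a checkpoint is typically used as a frozen feature extractor for a perceptual (VGG) loss. A minimal sketch, assuming it is run from the repository root with the file kept under "./vgg_model/":

```python
import torch
import torchvision.models as models

# Load the downloaded checkpoint without any network access.
vgg19 = models.vgg19()
vgg19.load_state_dict(torch.load('vgg_model/vgg19-dcbb9e9d.pth'))
vgg19.eval()
for p in vgg19.parameters():
    p.requires_grad = False  # VGG is only used to compute features for the loss
```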
- Set the desired configuration in "config.py". Then run
sh train.sh
If you find this code helpful, please cite our paper:
@article{detail2019,
  title={Down to the Last Detail: Virtual Try-on with Detail Carving},
  author={Wang, Jiahang and Zhang, Wei and Liu, Weizhong and Mei, Tao},
  journal={arXiv preprint arXiv:1912.06324},
  year={2019}
}