PyTorch implementation for multimodal comic-to-manga translation.
Note: the current software works well with PyTorch 1.6.0+.

## Prerequisites

- Linux
- Python 3
- CPU or NVIDIA GPU + CUDA CuDNN
## Installation

- Clone this repo:

  ```bash
  git clone https://github.com/msxie/ScreenStyle.git
  cd ScreenStyle/MangaScreening
  ```

- Install PyTorch and its dependencies from http://pytorch.org (a quick environment check follows this list).
- Install the tensorboardX library.
- Install the remaining Python libraries. For pip users:

  ```bash
  pip install -r requirements.txt
  ```
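To confirm the environment matches the PyTorch 1.6.0+ note above, a quick sanity check can be run. This snippet is a suggested check, not part of the repository:

```python
# Quick environment sanity check (not part of the repo).
import torch

print(f"PyTorch version: {torch.__version__}")          # expect 1.6.0 or newer
print(f"CUDA available:  {torch.cuda.is_available()}")  # False is fine for CPU-only runs
if torch.cuda.is_available():
    print(f"CUDA build:     {torch.version.cuda}")
    print(f"GPU:            {torch.cuda.get_device_name(0)}")
```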
## Data preparation

Training requires paired data: manga images, western (color) images, and their corresponding line drawings. The line drawings can be extracted using MangaLineExtraction. Arrange the data as follows (a loader sketch follows the layout):
```
${DATASET}
|-- color2manga
|   |-- val
|   |   |-- ${FOLDER}
|   |   |   |-- imgs
|   |   |   |   |-- 0001.png
|   |   |   |   |-- ...
|   |   |   |-- line
|   |   |   |   |-- 0001.png
|   |   |   |   |-- ...
```
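For reference, here is a minimal sketch of how such paired folders could be loaded in PyTorch. The class name and the PIL/torchvision usage are illustrative assumptions, not code from this repository:

```python
# Minimal sketch of a paired loader for the layout above (hypothetical,
# not part of this repo). Pairs share a file name across imgs/ and line/.
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms.functional as TF


class PairedMangaDataset(Dataset):
    """Loads (image, line drawing) pairs that share a file name."""

    def __init__(self, root):
        folder = Path(root)  # e.g. ${DATASET}/color2manga/val/${FOLDER}
        self.img_paths = sorted((folder / "imgs").glob("*.png"))

    def __len__(self):
        return len(self.img_paths)

    def __getitem__(self, idx):
        img_path = self.img_paths[idx]
        line_path = img_path.parent.parent / "line" / img_path.name
        img = TF.to_tensor(Image.open(img_path).convert("RGB"))
        line = TF.to_tensor(Image.open(line_path).convert("L"))
        return {"image": img, "line": line}
```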
## Testing

- Download the pre-trained ScreenVAE model and place it under the `checkpoints/ScreenVAE/` folder.
- Download the pre-trained color2manga model and place it under the `checkpoints/color2manga/` folder.
- Generate results with the model (a checkpoint-layout check follows this list):

  ```bash
  bash ./scripts/test_western2manga.sh
  ```
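Before running the script, it may help to verify that the checkpoints are where it expects them. This is an optional helper sketch; it makes no assumption about the file names inside each folder:

```python
# Optional helper (not part of the repo): verify the checkpoint folders
# exist and are non-empty before running the test script.
from pathlib import Path

for folder in ("checkpoints/ScreenVAE", "checkpoints/color2manga"):
    path = Path(folder)
    files = list(path.glob("*")) if path.is_dir() else []
    status = f"{len(files)} file(s)" if files else "MISSING OR EMPTY"
    print(f"{folder}: {status}")
```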
## License

The code is available for both academic and commercial use; see the LICENSE file for the exact terms.
## Citation

If you find the code helpful in your research or work, please cite the following paper:
```bibtex
@article{xie-2020-manga,
  author  = {Minshan Xie and Chengze Li and Xueting Liu and Tien-Tsin Wong},
  title   = {Manga Filling Style Conversion with Screentone Variational Autoencoder},
  journal = {ACM Transactions on Graphics (SIGGRAPH Asia 2020 issue)},
  month   = {December},
  year    = {2020},
  volume  = {39},
  number  = {6},
  pages   = {226:1--226:15}
}
```
## Acknowledgments

This code borrows heavily from the pytorch-CycleGAN-and-pix2pix repository.