add pix2pix and cyclegan docs (PaddlePaddle#45)
* add docs

* update
ceci3 authored Oct 27, 2020
1 parent 942bb21 commit b4339a5
Showing 5 changed files with 195 additions and 1 deletion.
Binary file added docs/imgs/cyclegan.png
Binary file added docs/imgs/horse2zebra.png
Binary file added docs/imgs/pix2pix.png
99 changes: 98 additions & 1 deletion docs/tutorials/pix2pix_cyclegan.md
# 1 Pix2pix

## 1.1 Principle

Pix2pix performs image-to-image translation with paired images: the input and the supervision signal are two different styles of the same image, so it can be used for style transfer. Pix2pix is inspired by cGAN. Where cGAN feeds the generator a noise image together with a condition as supervision, pix2pix feeds in the image in the other style as the supervision signal. The generated fake image is therefore tied to that other-style input, which realizes image translation.

![](../imgs/pix2pix.png)
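The pix2pix paper realizes this supervision by combining an adversarial term with an L1 term that pulls the generated image toward the paired target. A minimal pure-Python sketch of that generator objective (the function name, inputs, and the λ=100 default are illustrative, not this repository's API):

```python
import math

def pix2pix_generator_loss(d_fake_logit, fake_pixels, target_pixels, lam=100.0):
    """Illustrative pix2pix generator objective: fool the discriminator on
    the (condition, fake) pair, plus an L1 term toward the paired target."""
    # Adversarial term: -log(sigmoid(logit)) = log(1 + exp(-logit))
    adv = math.log1p(math.exp(-d_fake_logit))
    # L1 reconstruction toward the paired image in the other style
    l1 = sum(abs(f - t) for f, t in zip(fake_pixels, target_pixels)) / len(fake_pixels)
    return adv + lam * l1

# A discriminator logit of 0 ("undecided") and an all-wrong fake image:
loss = pix2pix_generator_loss(0.0, [0.0] * 4, [1.0] * 4)
print(loss)  # log(2) + 100 * 1.0, roughly 100.693
```

The large λ reflects that for paired data the reconstruction term, not the adversarial term, does most of the work.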

## 1.2 How to use

### 1.2.1 Prepare Datasets

Paired datasets used by pix2pix can be downloaded from [here](http://efrosgans.eecs.berkeley.edu/pix2pix/datasets/).
For example, the structure of facades is as follows:
```
facades
├── test
├── train
└── val
```
You can also download a dataset with wget; for example, for facades:
```
wget http://efrosgans.eecs.berkeley.edu/pix2pix/datasets/facades.tar.gz --no-check-certificate
```
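After downloading and extracting, it can help to sanity-check that the split layout above is in place. The helper below is a hypothetical sketch, not part of this repository's tooling; here it is exercised against a simulated layout in a temporary directory:

```python
import os
import tempfile

def check_paired_dataset(root, expected=("train", "val", "test")):
    """Return the expected split folders missing under root (empty list = OK)."""
    return [d for d in expected if not os.path.isdir(os.path.join(root, d))]

# Simulate a facades-style layout in a temporary directory and verify it.
root = tempfile.mkdtemp()
for split in ("train", "val", "test"):
    os.makedirs(os.path.join(root, split))
print(check_paired_dataset(root))  # []
```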

### 1.2.2 Train/Test

The dataset used in this example is facades; you can switch to your own dataset in the config file.

Train a model:
```
python -u tools/main.py --config-file configs/pix2pix_facades.yaml
```

Test the model:
```
python tools/main.py --config-file configs/pix2pix_facades.yaml --evaluate-only --load ${PATH_OF_WEIGHT}
```

## 1.3 Results

![](../imgs/horse2zebra.png)

[model download](TODO)



# 2 CycleGAN

## 2.1 Principle

CycleGAN performs image-to-image translation with unpaired images: the inputs are different images in two different styles, and style transfer happens automatically. CycleGAN consists of two generators and two discriminators: generator A takes images of style A and outputs images of style B, while generator B takes images of style B and outputs images of style A. The biggest difference from pix2pix is that CycleGAN realizes image translation without a one-to-one mapping between the source domain and the target domain.

![](../imgs/cyclegan.png)
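What removes the need for a one-to-one mapping is the cycle-consistency loss from the CycleGAN paper: translating an image to the other style and back should reproduce the original. A toy sketch (the helper and "generator" definitions are illustrative only):

```python
def l1(xs, ys):
    """Mean absolute difference between two equal-length pixel lists."""
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

def cycle_consistency_loss(a, b, g_ab, g_ba):
    """Translating A->B->A (and B->A->B) should reproduce the input,
    which is what makes paired training data unnecessary."""
    return l1(g_ba(g_ab(a)), a) + l1(g_ab(g_ba(b)), b)

# Toy "generators": a style is modeled as a constant brightness offset.
g_ab = lambda img: [p + 0.5 for p in img]  # style A -> style B
g_ba = lambda img: [p - 0.5 for p in img]  # style B -> style A
loss = cycle_consistency_loss([0.1, 0.2], [0.7, 0.8], g_ab, g_ba)
print(loss)  # ~0: the toy generators are exact inverses
```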

## 2.2 How to use

### 2.2.1 Prepare Datasets

Unpaired datasets used by CycleGAN can be downloaded from [here](https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/).
For example, the structure of cityscapes is as follows:
```
cityscapes
├── test
├── testA
├── testB
├── train
├── trainA
└── trainB
```
You can also download a dataset with wget; for example, for facades:
```
wget https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/facades.zip --no-check-certificate
```
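Because no pairing is required, each training step simply draws independent images from the trainA and trainB folders shown above. A minimal sketch (the file names and sampler are illustrative, not this repository's actual data loader):

```python
import random

def sample_unpaired_batch(files_a, files_b, batch_size, rng=random):
    """Draw independent samples from each domain -- CycleGAN needs no
    one-to-one pairing between trainA and trainB images."""
    return rng.sample(files_a, batch_size), rng.sample(files_b, batch_size)

# Hypothetical file lists; the two domains may have different sizes.
horses = [f"trainA/horse_{i}.jpg" for i in range(5)]
zebras = [f"trainB/zebra_{i}.jpg" for i in range(7)]
batch_a, batch_b = sample_unpaired_batch(horses, zebras, 2)
print(batch_a, batch_b)
```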

### 2.2.2 Train/Test

The dataset used in this example is cityscapes; you can switch to your own dataset in the config file.

Train a model:
```
python -u tools/main.py --config-file configs/cyclegan_cityscapes.yaml
```

Test the model:
```
python tools/main.py --config-file configs/cyclegan_cityscapes.yaml --evaluate-only --load ${PATH_OF_WEIGHT}
```

## 2.3 Results

![](../imgs/A2B.png)

[model download](TODO)


# References

1. [Image-to-Image Translation with Conditional Adversarial Networks](https://arxiv.org/abs/1611.07004)
2. [Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks](https://arxiv.org/abs/1703.10593)
97 changes: 97 additions & 0 deletions docs/tutorials/pix2pix_cyclegan_cn.md
# 1 Pix2pix

## 1.1 Principle

Pix2pix performs image-to-image translation with paired images: the input and the supervision signal are two different styles of the same image, so it can be used for style transfer. Pix2pix improves on cGAN: the cGAN generator takes a noise image together with a condition as supervision, whereas pix2pix feeds the image in the other style into the generator as the supervision signal, so the generated fake image is tied to that other-style image, realizing image translation.
![](../imgs/pix2pix.png)

## 1.2 How to use

### 1.2.1 Prepare Datasets

Pix2pix trains on paired data, which can be downloaded from [here](http://efrosgans.eecs.berkeley.edu/pix2pix/datasets/).
For example, the structure of the facades data used by pix2pix is as follows:
```
facades
├── test
├── train
└── val
```

You can also download a dataset with wget; for example, for facades:
```
wget http://efrosgans.eecs.berkeley.edu/pix2pix/datasets/facades.tar.gz --no-check-certificate
```

### 1.2.2 Train/Test

This example uses the facades data. To use your own dataset, change the dataset in the config file.

Train a model:
```
python -u tools/main.py --config-file configs/pix2pix_facades.yaml
```

Test the model:
```
python tools/main.py --config-file configs/pix2pix_facades.yaml --evaluate-only --load ${PATH_OF_WEIGHT}
```

## 1.3 Results

![](../imgs/horse2zebra.png)

[model download](TODO)


# 2 CycleGAN

## 2.1 Principle

CycleGAN performs image-to-image translation with unpaired images: the inputs are different images in two different styles, and style transfer happens automatically. CycleGAN consists of two generators and two discriminators: generator A takes images of style A and outputs images of style B, while generator B takes images of style B and outputs images of style A. The biggest difference from pix2pix is that CycleGAN realizes image translation without a one-to-one mapping between the source domain and the target domain.

## 2.2 How to use

### 2.2.1 Prepare Datasets

CycleGAN uses unpaired data, which can be downloaded from [here](https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/).
For example, the structure of the cityscapes data used by CycleGAN is as follows:
```
cityscapes
├── test
├── testA
├── testB
├── train
├── trainA
└── trainB
```

You can also download a dataset with wget; for example, for facades:
```
wget https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/facades.zip --no-check-certificate
```

### 2.2.2 Train/Test

This example uses the cityscapes data. To use your own dataset, change the dataset in the config file.

Train a model:
```
python -u tools/main.py --config-file configs/cyclegan_cityscapes.yaml
```

Test the model:
```
python tools/main.py --config-file configs/cyclegan_cityscapes.yaml --evaluate-only --load ${PATH_OF_WEIGHT}
```

## 2.3 Results

![](../imgs/A2B.png)

[model download](TODO)


# References
1. [Image-to-Image Translation with Conditional Adversarial Networks](https://arxiv.org/abs/1611.07004)
2. [Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks](https://arxiv.org/abs/1703.10593)
