
Commit 6a91c2c

committed: added ref. no. in figure and some minor changes
1 parent: 68ac68e

File tree: 1 file changed (+11, −5 lines)


guide/14-deep-learning/How CycleGAN Works.ipynb

Lines changed: 11 additions & 5 deletions
@@ -12,7 +12,7 @@
  "metadata": {},
  "source": [
   "## Introduction\n",
-  "There are situations when we have two different domains of images, which are randomly organized or unpaired and we want to convert images from one domain to another just like Pix2Pix.\n",
+  "There are situations when we have two different domains of images, which are randomly organized or unpaired and we want to convert images from one domain to another just like [Pix2Pix](https://developers.arcgis.com/python/guide/how-pix2pix-works/).\n",
   "\n",
   "The Cycle Generative Adversarial Network, or CycleGAN, is an approach to training a deep convolutional neural network for image-to-image translation tasks. The Network learns mapping between input and output images using unpaired dataset. For Example: Generating RGB imagery from SAR, multispectral imagery from RGB, map routes from satellite imagery, etc.\n",
   "\n",
@@ -37,14 +37,14 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-  "<center>Figure 1. Overview of CycleGAN architecture: Translating from satellite image to map routes domain</center>"
+  "<center>Figure 1. Overview of CycleGAN architecture: Translating from satellite image to map routes domain [3]</center>"
  ]
 },
 {
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-  "To know about basics of GAN, you can refer to the Pix2Pix guide."
+  "To know about basics of GAN, you can refer to the [Pix2Pix guide](https://developers.arcgis.com/python/guide/how-pix2pix-works/)."
  ]
 },
 {
@@ -58,8 +58,7 @@
   "* Domain-B -> **Generator-A** -> Domain-A\n",
   "* Domain-A -> **Generator-B** -> Domain-B\n",
   "\n",
-  "Each generator has a corresponding discriminator model. The first discriminator model (Discriminator-A) takes real images from Domain-A and generated images from Generator-A and predicts whether they are real or fake. The second discriminator model (Discriminator-B) takes real images from Domain-B and generated images from Generator-B and predicts whether they are real or fake.\n",
-  "\n",
+  "Each generator has a corresponding discriminator model (Discriminator-A and Discriminator-B). The discriminator model takes real images from Domain and generated images from Generator to predict whether they are real or fake.\n",
   "\n",
   "\n",
   "* Domain-A -> **Discriminator-A** -> [Real/Fake]\n",
@@ -161,6 +160,13 @@
   "\n",
   "[3]. Kang, Yuhao, Song Gao, and Robert E. Roth. \"Transferring multiscale map styles using generative adversarial networks.\" International Journal of Cartography 5, no. 2-3 (2019): 115-141."
  ]
+ },
+ {
+  "cell_type": "code",
+  "execution_count": null,
+  "metadata": {},
+  "outputs": [],
+  "source": []
 }
 ],
 "metadata": {
