guide/14-deep-learning/How CycleGAN Works.ipynb
1 addition & 1 deletion
@@ -12,7 +12,7 @@
"metadata": {},
"source": [
"## Introduction\n",
- "There are situations when we have two different domains of images, which are randomly organized or unpaired and we want to convert images from one domain to another just like [Pix2Pix](https://developers.arcgis.com/python/guide/how-pix2pix-works/).\n",
+ "CycleGAN is an image-to-image translation model, just like [Pix2Pix](https://developers.arcgis.com/python/guide/how-pix2pix-works/). One of the challenges with Pix2Pix is that its training images must be paired. CycleGAN handles situations where we have two different domains of images that are randomly organized or unpaired, and we want to convert images from one domain to another.\n",
"\n",
"The Cycle Generative Adversarial Network, or CycleGAN, is an approach to training a deep convolutional neural network for image-to-image translation tasks. The Network learns mapping between input and output images using unpaired dataset. For Example: Generating RGB imagery from SAR, multispectral imagery from RGB, map routes from satellite imagery, etc.\n",