|
129 | 129 | "\n", |
130 | 130 | "The `export_training_data()` method generates training samples for training deep learning models, given the input imagery, along with labeled vector data or classified images. Deep learning training samples are small subimages, called image chips, that contain the feature or class of interest. This tool creates folders containing image chips for training the model, along with labels and metadata files, and stores them in the raster store of your enterprise GIS. The image chips are often small (e.g. 256x256), unless the training sample size is large. These training samples support model training workflows using the `arcgis.learn` package as well as third-party deep learning libraries, such as TensorFlow or PyTorch. The supported models in `arcgis.learn` accept the **[PASCAL_VOC_rectangles](http://host.robots.ox.ac.uk/pascal/VOC/databases.html)** format for object detection models, a standardized image dataset for object class recognition. The label files are XML files containing information about image name, class value, and bounding boxes.\n", |
131 | 131 | "\n", |
132 | | - "In order to take advantage of pretrained models that have been trained on large image collections (e.g. ImageNet), we have to pick 3 bands from a multispectral imagery as those pretrained models are trained with images that only 3 RGB channels. The `extract_bands()` method can be used to specify which 3 bands should be extracted for fine tuning the models:" |
| 132 | + "In order to take advantage of pretrained models that have been trained on large image collections (e.g. ImageNet), we have to pick 3 bands from multispectral imagery, as those pretrained models are trained with images that have only 3 RGB channels. The `extract_bands()` method can be used to specify which 3 bands should be extracted for fine-tuning the models:" |
133 | 133 | ] |
134 | 134 | }, |
135 | 135 | { |
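The band-selection idea can be illustrated with a minimal, pure-Python sketch. Note this is not the server-side `extract_bands()` call itself, and the band ordering shown is hypothetical; it only demonstrates picking 3 bands out of a multispectral stack so a pretrained RGB backbone can consume the result:

```python
# Minimal sketch of picking 3 bands from a multispectral image for a
# pretrained RGB backbone. The image is modelled as a list of bands,
# each band a 2x2 grid of pixel values. Band ordering is hypothetical.
multispectral = [
    [[10, 11], [12, 13]],  # band 0: coastal aerosol
    [[20, 21], [22, 23]],  # band 1: blue
    [[30, 31], [32, 33]],  # band 2: green
    [[40, 41], [42, 43]],  # band 3: red
    [[50, 51], [52, 53]],  # band 4: near infrared
]

def extract_bands(image, band_indices):
    """Return only the requested bands, in the requested order."""
    return [image[i] for i in band_indices]

# Keep red, green, blue (indices 3, 2, 1) to mimic a natural-colour composite.
rgb = extract_bands(multispectral, [3, 2, 1])
print(len(rgb))      # 3
print(rgb[0][0][0])  # 40 (first pixel of the red band)
```

In a real workflow the chosen indices would be passed to `extract_bands()` on the exported imagery; the slicing above is just the conceptual operation.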
|
248 | 248 | "source": [ |
249 | 249 | "### Find the good learning rate\n", |
250 | 250 | "\n", |
251 | | - "Now we have define a model architecture, we can start to train it. This process involves setting a good [learning rate](https://towardsdatascience.com/understanding-learning-rates-and-how-it-improves-performance-in-deep-learning-d0d4059c1c10). Picking a very small learning rate leads to very slow training of the model, while picking one that is too high can prevent the model from converging and 'overshoot' the minima where the loss (or error rate) is lowest. `arcgis.learn` includes fast.ai's learning rate finder, accessible through the model's `lr_find()` method, that helps in picking a good learning rate, without needing to experiment with several learning rates and picking from among them. " |
| 251 | + "Now that we have defined a model architecture, we can start training it. This process involves setting a good [learning rate](https://towardsdatascience.com/understanding-learning-rates-and-how-it-improves-performance-in-deep-learning-d0d4059c1c10). Picking a very small learning rate leads to very slow training of the model, while picking one that is too high can prevent the model from converging and 'overshoot' the minima where the loss (or error rate) is lowest. `arcgis.learn` includes fast.ai's learning rate finder, accessible through the model's `lr_find()` method, that helps in picking a good learning rate, without needing to experiment with several learning rates and picking from among them. " |
252 | 252 | ] |
253 | 253 | }, |
254 | 254 | { |
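The learning-rate range test behind `lr_find()` can be sketched in a few lines of plain Python. This is a toy stand-in, not the fast.ai implementation: it runs gradient descent on a simple quadratic loss while growing the learning rate exponentially, recording the loss at each rate. In a real run you would inspect the loss-vs-rate plot and pick a rate on the steep, still-decreasing part of the curve:

```python
# Toy learning-rate range test in the spirit of fast.ai's lr_find():
# take one gradient step per learning rate, growing the rate exponentially,
# and record the resulting loss at each rate.
def loss(w):
    return (w - 3.0) ** 2          # simple quadratic loss, minimum at w = 3

def grad(w):
    return 2.0 * (w - 3.0)         # its derivative

w, lr = 0.0, 1e-4
history = []                        # (learning rate, loss) pairs
while lr < 10.0:
    w = w - lr * grad(w)            # one gradient-descent step
    history.append((lr, loss(w)))
    lr *= 1.5                       # exponential learning-rate schedule

losses = [l for _, l in history]
print(min(losses) < losses[0])      # True: loss improved at moderate rates...
print(losses[-1] > min(losses))     # True: ...then blew up once lr got too high
```

The "good" learning rate is read off just before the loss curve bottoms out and turns upward, which is what the finder's plot makes visible.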
|
289 | 289 | "source": [ |
290 | 290 | "### Train the model\n", |
291 | 291 | "\n", |
292 | | - "As dicussed earlier, the idea of transfer learning is to fine-tune earlier layers of the pretrained model and focuses on training the newly added layers, meaning we need two different learning rates to better fit the model. We have already selected a good learning rate to train the later layers above (i.e. 0.02). An empirical value of lower learning rate for fine-tuning the ealier layers is usually one tenth of the higher rate. We choose 0.001 to be more careful not to disturb the weights of the pretrained backbone by too much. It can be adjusted depending upon how different the imagery is from natural images on which the backbone network is trained.\n", |
| 292 | + "As discussed earlier, the idea of transfer learning is to fine-tune earlier layers of the pretrained model and focus on training the newly added layers, meaning we need two different learning rates to better fit the model. We have already selected a good learning rate to train the later layers above (i.e. 0.02). An empirical value for the lower learning rate used to fine-tune the earlier layers is usually one tenth of the higher rate. We choose 0.001 to be more careful not to disturb the weights of the pretrained backbone by too much. It can be adjusted depending upon how different the imagery is from the natural images on which the backbone network was trained.\n", |
293 | 293 | "\n", |
294 | 294 | "Training the network is an iterative process. We can keep training the model using its `fit()` method as long as the validation loss (or error rate) continues to go down with each training pass, also known as an epoch. This is indicative of the model learning the task. " |
295 | 295 | ] |
|
611 | 611 | " conda install -c fastai fastai=1.0.39\n", |
612 | 612 | " conda install -c arcgis arcgis=1.6.0 --no-pin \n", |
613 | 613 | "\n", |
614 | | - "The code below shows how we can use distributed raster analytics to automate the detection of well pade for different dates, across a large geographical area and create a feature layer of well pad detections that can be used for further analysis within ArcGIS. " |
| 614 | + "The code below shows how we can use distributed raster analytics to automate the detection of well pads for different dates, across a large geographical area, and create a feature layer of well pad detections that can be used for further analysis within ArcGIS. " |
615 | 615 | ] |
616 | 616 | }, |
617 | 617 | { |
|
713 | 713 | "name": "python", |
714 | 714 | "nbconvert_exporter": "python", |
715 | 715 | "pygments_lexer": "ipython3", |
716 | | - "version": "3.7.2" |
| 716 | + "version": "3.6.7" |
717 | 717 | }, |
718 | 718 | "toc": { |
719 | 719 | "base_numbering": 1, |
|