
Commit 8b4b3cd

Sandeep Kumar authored
update plant species sample (Esri#831)
Co-authored-by: Sandeep Kumar <san10428@esri.com>
1 parent 33a8d73 commit 8b4b3cd

File tree

1 file changed: +51 -14 lines changed


samples/04_gis_analysts_data_scientists/train_a_tensorflow-lite_model_for_identifying_plant_species.ipynb

Lines changed: 51 additions & 14 deletions
@@ -16,6 +16,8 @@
 "* [Get the data for analysis](#Get-the-data-for-analysis)\n",
 "* [Train an image classification model](#Train-an-image-classification-model)\n",
 " * [Necessary imports](#Necessary-imports)\n",
+" * [Download Dataset](#Download-Dataset)\n",
+" * [Filter out non RGB Images](#Filter-out-non-RGB-Images)\n",
 " * [Prepare data](#Prepare-data)\n",
 " * [Visualize a few samples from your training data](#Visualize-a-few-samples-from-your-training-data)\n",
 " * [Load model architecture](#Load-model-architecture)\n",
@@ -149,19 +151,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Prepare data"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"We will now use the `prepare_data()` function to apply various types of transformations and augmentations on the training data. These augmentations enable us to train a better model with limited data and also prevent the model from overfitting. \n",
-"\n",
-"Here, we are passing 3 parameters to the `prepare_data()` function.\n",
-" - `path`: path of folder containing training data.\n",
-" - `chip_size`: Same as per specified while exporting training data.\n",
-" - `batch_size`: No. of images your model will train on each step inside an epoch, it directly depends on the memory of your graphic card and the type of model which you are working with. For this sample, a batch size of 64 worked for us on a GPU with 11GB memory."
+"### Download Dataset"
 ]
 },
 {
@@ -177,7 +167,7 @@
 "cell_type": "code",
 "execution_count": 3,
 "metadata": {
-"scrolled": true
+"scrolled": false
 },
 "outputs": [
 {
@@ -243,6 +233,53 @@
 "data_path = Path(os.path.join(filepath.split('.')[0]))"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Filter out non RGB Images"
+]
+},
+{
+"cell_type": "code",
+"execution_count": 7,
+"metadata": {},
+"outputs": [],
+"source": [
+"from glob import glob\n",
+"from PIL import Image"
+]
+},
+{
+"cell_type": "code",
+"execution_count": 8,
+"metadata": {},
+"outputs": [],
+"source": [
+"for image_filepath in glob(os.path.join(data_path, 'images', '**','*.jpg')):\n",
+" if Image.open(image_filepath).mode != 'RGB':\n",
+" os.remove(image_filepath)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Prepare data"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"We will now use the `prepare_data()` function to apply various types of transformations and augmentations to the training data. These augmentations enable us to train a better model with limited data and also prevent the model from overfitting.\n",
+"\n",
+"Here, we are passing 3 parameters to the `prepare_data()` function:\n",
+" - `path`: path of the folder containing the training data.\n",
+" - `chip_size`: same as specified while exporting the training data.\n",
+" - `batch_size`: the number of images the model trains on in each step of an epoch. It depends directly on the memory of your graphics card and the type of model you are working with. For this sample, a batch size of 64 worked for us on a GPU with 11 GB of memory."
+]
+},
 {
 "cell_type": "code",
 "execution_count": 3,

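For reference, the prepare_data() description that this commit moves below the new cells corresponds to a call along these lines. It uses the arcgis.learn API the sample is built on; batch_size=64 comes from the notebook text, while the chip_size value here is an illustrative assumption.

from arcgis.learn import prepare_data

# data_path is the Path created earlier in the notebook from the downloaded dataset.
data = prepare_data(
    path=data_path,   # folder containing the training data
    chip_size=224,    # assumption; must match the size used when exporting the data
    batch_size=64     # 64 worked on an 11 GB GPU, per the notebook text
)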