chore: Update documentation and link to the DataGenerator tutorial
pierluigiferrari committed Mar 31, 2018
1 parent 977e508 commit fdbaa02
Showing 7 changed files with 7 additions and 7 deletions.
2 changes: 1 addition & 1 deletion ssd300_evaluation_COCO.ipynb
Original file line number Diff line number Diff line change
@@ -176,7 +176,7 @@
"source": [
"## 2. Create a data generator for the evaluation dataset\n",
"\n",
"Instantiate a `BatchGenerator` that will serve the evaluation dataset during the prediction phase."
"Instantiate a `DataGenerator` that will serve the evaluation dataset during the prediction phase."
]
},
{
2 changes: 1 addition & 1 deletion ssd300_evaluation_Pascal_VOC.ipynb
@@ -163,7 +163,7 @@
"source": [
"## 2. Create a data generator for the evaluation dataset\n",
"\n",
"Instantiate a `BatchGenerator` that will serve the evaluation dataset during the prediction phase."
"Instantiate a `DataGenerator` that will serve the evaluation dataset during the prediction phase."
]
},
{
2 changes: 1 addition & 1 deletion ssd300_inference.ipynb
@@ -304,7 +304,7 @@
"source": [
"## 5. Make predictions on Pascal VOC 2007 Test\n",
"\n",
"Let's use the batch generator to make predictions on the Pascal VOC 2007 test dataset and visualize the predicted boxes alongside the ground truth boxes for comparison."
"Let's use a `DataGenerator` to make predictions on the Pascal VOC 2007 test dataset and visualize the predicted boxes alongside the ground truth boxes for comparison. Everything here is preset already, but if you'd like to learn more about the data generator and its capabilities, take a look at the detailed tutorial in [this](https://github.com/pierluigiferrari/data_generator_object_detection_2d) repository."
]
},
{
2 changes: 1 addition & 1 deletion ssd300_training.ipynb
@@ -215,7 +215,7 @@
"\n",
"The original implementation uses a batch size of 32 for training, but you might have to use a smaller batch size depending on your GPU memory.\n",
"\n",
"The `DataGenerator` itself is fairly generic. I doesn't contain any data augmentation or bounding box encoding logic. Instead, you pass a list of image transformations and an encoder for the bounding boxes in the `transformations` and `label_encoder` arguments of the data generator's `generate()` method, and the data generator will then apply those given transformations and the encoding to the data.\n",
"The `DataGenerator` itself is fairly generic. I doesn't contain any data augmentation or bounding box encoding logic. Instead, you pass a list of image transformations and an encoder for the bounding boxes in the `transformations` and `label_encoder` arguments of the data generator's `generate()` method, and the data generator will then apply those given transformations and the encoding to the data. Everything here is preset already, but if you'd like to learn more about the data generator and its data augmentation capabilities, take a look at the detailed tutorial in [this](https://github.com/pierluigiferrari/data_generator_object_detection_2d) repository.\n",
"\n",
"The data augmentation settings defined further down reproduce the data augmentation pipeline of the original training. The training generator receives an object `ssd_data_augmentation`, which is a transformation that is itself composed of a whole chain of transformations that replicate the data augmentation used to train the original Caffe implementation. The validation generator receives an object `resize`, which simply resizes the input images.\n",
"\n",
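The design described in the changed cell above — a generic generator that applies caller-supplied transformations and a label encoder inside `generate()` — can be sketched in a few lines. This is a simplified illustration of the pattern only, not the actual code from the ssd_keras or data_generator_object_detection_2d repositories; the class body, argument shapes, and label layout below are all assumptions made for the sketch.

```python
import numpy as np

class DataGenerator:
    """Toy sketch of the described pattern: the generator itself stays generic,
    while augmentation and box encoding are injected as callables."""

    def __init__(self, images, labels):
        self.images = images  # list of HxWxC arrays
        self.labels = labels  # list of (n_boxes, 5) arrays: class, xmin, ymin, xmax, ymax

    def generate(self, batch_size=2, transformations=None, label_encoder=None):
        transformations = transformations or []
        n = len(self.images)
        i = 0
        while True:  # endless generator, as expected by Keras-style training loops
            batch_X, batch_y = [], []
            for j in range(i, i + batch_size):
                img, lab = self.images[j % n], self.labels[j % n]
                # Each transformation maps (image, labels) -> (image, labels).
                for transform in transformations:
                    img, lab = transform(img, lab)
                batch_X.append(img)
                batch_y.append(lab)
            i = (i + batch_size) % n
            # The encoder (if any) converts raw boxes into the model's target format.
            y = label_encoder(batch_y) if label_encoder is not None else batch_y
            yield np.array(batch_X), y
```

In the real notebooks, `transformations` would hold objects like `ssd_data_augmentation` or `resize`, and `label_encoder` the SSD box encoder; here both are left generic to show only the division of responsibilities.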
2 changes: 1 addition & 1 deletion ssd512_inference.ipynb
@@ -305,7 +305,7 @@
"source": [
"## 5. Make predictions on Pascal VOC 2007 Test\n",
"\n",
"Let's use the batch generator to make predictions on the Pascal VOC 2007 test dataset and visualize the predicted boxes alongside the ground truth boxes for comparison."
"Let's use a `DataGenerator` to make predictions on the Pascal VOC 2007 test dataset and visualize the predicted boxes alongside the ground truth boxes for comparison. Everything here is preset already, but if you'd like to learn more about the data generator and its capabilities, take a look at the detailed tutorial in [this](https://github.com/pierluigiferrari/data_generator_object_detection_2d) repository."
]
},
{
2 changes: 1 addition & 1 deletion ssd7_training.ipynb
@@ -207,7 +207,7 @@
"\n",
"Set the batch size to to your preference and to what your GPU memory allows, it's not the most important hyperparameter. The Caffe implementation uses a batch size of 32, but smaller batch sizes work fine, too.\n",
"\n",
"The `DataGenerator` itself is fairly generic. I doesn't contain any data augmentation or bounding box encoding logic. Instead, you pass a list of image transformations and an encoder for the bounding boxes in the `transformations` and `label_encoder` arguments of the data generator's `generate()` method, and the data generator will then apply those given transformations and the encoding to the data.\n",
"The `DataGenerator` itself is fairly generic. I doesn't contain any data augmentation or bounding box encoding logic. Instead, you pass a list of image transformations and an encoder for the bounding boxes in the `transformations` and `label_encoder` arguments of the data generator's `generate()` method, and the data generator will then apply those given transformations and the encoding to the data. Everything here is preset already, but if you'd like to learn more about the data generator and its data augmentation capabilities, take a look at the detailed tutorial in [this](https://github.com/pierluigiferrari/data_generator_object_detection_2d) repository.\n",
"\n",
"The image processing chain defined further down in the object named `data_augmentation_chain` is just one possibility of what a data augmentation pipeline for unform-size images could look like. Feel free to put together other image processing chains, you can use the `DataAugmentationConstantInputSize` class as a template. Or you could use the original SSD data augmentation pipeline by instantiting an `SSDDataAugmentation` object and passing that to the generator instead. This procedure is not exactly efficient, but it evidently produces good results on multiple datasets.\n",
"\n",
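The cell above describes a processing chain like `data_augmentation_chain`: a single object composed of per-sample transforms, each mapping `(image, labels)` to `(image, labels)`. The sketch below shows that composition pattern generically; `TransformChain`, `horizontal_flip`, and `brightness_shift` are hypothetical stand-ins invented for this illustration, not the repository's actual classes.

```python
import numpy as np

class TransformChain:
    """Hypothetical chain object: applies transforms in sequence, each one
    mapping (image, labels) -> (image, labels)."""

    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, image, labels):
        for transform in self.transforms:
            image, labels = transform(image, labels)
        return image, labels

def horizontal_flip(image, labels):
    """Flip the image left-right and mirror the box x-coordinates.
    Labels are rows of (class, xmin, ymin, xmax, ymax)."""
    w = image.shape[1]
    flipped = image[:, ::-1, :]
    labels = labels.copy()
    labels[:, [1, 3]] = w - labels[:, [3, 1]]  # mirror and swap xmin/xmax
    return flipped, labels

def brightness_shift(image, labels, delta=10):
    """Add a constant brightness offset, clipped to the valid uint8 range."""
    shifted = np.clip(image.astype(int) + delta, 0, 255).astype(np.uint8)
    return shifted, labels
```

A chain such as `TransformChain([horizontal_flip, brightness_shift])` could then be handed to a generator's `transformations` argument, which is the role `data_augmentation_chain` plays in the notebook.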
2 changes: 1 addition & 1 deletion weight_sampling_tutorial.ipynb
@@ -543,7 +543,7 @@
"source": [
"### 5.3. Load some images to test our model on\n",
"\n",
"We sub-sampled some of the road traffic categories from the trained SSD300 MS COCO weights, so let's try out our model on a few road traffic images. The Udacity road traffic dataset linked to in the `ssd7_training.ipynb` notebook lends itself to this task. Let's instantiate a `BatchGenerator` and load the Udacity dataset."
"We sub-sampled some of the road traffic categories from the trained SSD300 MS COCO weights, so let's try out our model on a few road traffic images. The Udacity road traffic dataset linked to in the `ssd7_training.ipynb` notebook lends itself to this task. Let's instantiate a `DataGenerator` and load the Udacity dataset. Everything here is preset already, but if you'd like to learn more about the data generator and its capabilities, take a look at the detailed tutorial in [this](https://github.com/pierluigiferrari/data_generator_object_detection_2d) repository."
]
},
{
