forked from keras-team/keras-io
Commit
Introduce keras-io documentation for KerasCV (keras-team#852)
* begin creating KerasCV documentation
* Add citation blurb
* Add first guide
* Adds a first guide for KerasCV
* Add API Docs
* Update w/ Sayak Comments
* Add COCO metrics guide
* Update guides_master to include COCO metrics guide
* Fix imports
* Reformat -> fix lint errors
* Fix issue with batching
* Update to remove tf.function
* Remove file
* Make rescaling occur inside model
* Copyedits
* Re-add latest changes
* Update guides per comments
* Run black
* Update per matt comments
* Add COCO metrics guide
* Reformat -> fix lint errors
* Add an example on distilling ViTs through attention (keras-team#849)
* feat: add an example on distillation of vits.
* chore: wget and link on the same line.
* chore: applied black formatting.
* chore: removed trailing spaces.
* chore: pr feedback round I.
* chore: applied black formatting with ver 22.1.0.
* Minor copyedits
* chore: added the generated files.
* added object detection with vision transformer py file (keras-team#842)
* minor issue fixed, make sure paths are sorted while loading dataset, label iteration fixed while evaluation
* code refactored, compatible with colab, used utils.img_to_array, removed tensorboard to keep things simple, removed unused code.
* md link on single line
* Updated text.
* removed indentation from opening para
* removed indentation
* Some styling nits addressed, direct import for frequently used modules. Loc = 300.
* added link
* black test failing resolved
* Copyedits
* Added generated files.
* Update guide to add `augmentations_per_image` from `num_layers`
* Update ecosystem
* Update guide
* Add FourierMix
* Update cutmix and mixup per Elie's feedback
* Add repeat() calls
* Updates to RandAugment
* Fix lambda
* remove broken link
* Address Sayak comments
* Remove unneeded parens
* Uncapitolize
* Remove duplicate guide
* Grammar changes
* checkout origin/master for spurious changes
* Remove duplciated TF cloud section
* Add guides for KerasCV
* Update broken guide links
* Update broken guide link
* Fix broekn relation cells
* add genfiles
* Force introduce the templates/guides/keras_cv dir

Co-authored-by: François Chollet <francois.chollet@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Karan Dave <karandave10.kd@gmail.com>
1 parent a68570b · commit 2f5ac74
Showing 35 changed files with 4,606 additions and 9 deletions.
Binary files added:

* +174 KB ...es/img/cut_mix_mix_up_and_rand_augment/cut_mix_mix_up_and_rand_augment_15_0.png
* +194 KB ...es/img/cut_mix_mix_up_and_rand_augment/cut_mix_mix_up_and_rand_augment_18_0.png
* +174 KB ...s/img/cut_mix_mix_up_and_rand_augment/cut_mix_mix_up_and_rand_augment_30_12.png
* +159 KB ...es/img/cut_mix_mix_up_and_rand_augment/cut_mix_mix_up_and_rand_augment_34_0.png
* +192 KB ...s/img/cut_mix_mix_up_and_rand_augment/cut_mix_mix_up_and_rand_augment_37_12.png
* +193 KB guides/img/cut_mix_mix_up_and_rand_augment/cut_mix_mix_up_and_rand_augment_9_0.png
* +95 KB ...yer/writing_custom_image_augmentations_with_baseimageaugmentationlayer_11_1.png
* +95 KB ...yer/writing_custom_image_augmentations_with_baseimageaugmentationlayer_13_0.png
* +38.2 KB ...yer/writing_custom_image_augmentations_with_baseimageaugmentationlayer_17_0.png
* +36.9 KB ...yer/writing_custom_image_augmentations_with_baseimageaugmentationlayer_19_0.png
* +70 KB ...yer/writing_custom_image_augmentations_with_baseimageaugmentationlayer_27_2.png
* +96.3 KB ...yer/writing_custom_image_augmentations_with_baseimageaugmentationlayer_29_1.png
* +97 KB ...ayer/writing_custom_image_augmentations_with_baseimageaugmentationlayer_9_0.png
guides/ipynb/keras_cv/cut_mix_mix_up_and_rand_augment.ipynb (710 additions & 0 deletions)
...s/ipynb/keras_cv/writing_custom_image_augmentations_with_baseimageaugmentationlayer.ipynb (713 additions & 0 deletions)
""" | ||
Title: Using KerasCV COCO Metrics | ||
Author: [lukewood](https://lukewood.xyz) | ||
Date created: 2022/04/13 | ||
Last modified: 2022/04/13 | ||
Description: Use KerasCV COCO metrics to evaluate object detection models. | ||
""" | ||
|
||
""" | ||
## Overview | ||
With KerasCV's COCO metrics implementation, you can easily evaluate your object | ||
detection model's performance all from within the TensorFlow graph. This guide | ||
shows you how to use KerasCV's COCO metrics and integrate it into your own model | ||
evaluation pipeline. Historically, users have evaluated COCO metrics as a post training | ||
step. KerasCV offers an in graph implementation of COCO metrics, enabling users to | ||
evaluate COCO metrics *during* training! | ||
Let's get started using KerasCV's COCO metrics. | ||
""" | ||
|
||
""" | ||
## Input format | ||
KerasCV COCO metrics require a specific input format. | ||
The metrics expect `y_true` and be a `float` Tensor with the shape `[batch, | ||
num_images, num_boxes, 5]`. The final axis stores the locational and class | ||
information for each specific bounding box. The dimensions in order are: `[left, | ||
top, right, bottom, class]`. | ||
The metrics expect `y_pred` and be a `float` Tensor with the shape `[batch, | ||
num_images, num_boxes, 56]`. The final axis stores the locational and class | ||
information for each specific bounding box. The dimensions in order are: `[left, | ||
top, right, bottom, class, confidence]`. | ||
Due to the fact that each image may have a different number of bounding boxes, | ||
the `num_boxes` dimension may actually have a mismatching shape between images. | ||
KerasCV works around this by allowing you to either pass a `RaggedTensor` as an | ||
input to the KerasCV COCO metrics, or padding unused bounding boxes with `-1`. | ||
Utility functions to manipulate bounding boxes, transform between formats, and | ||
pad bounding box Tensors with `-1s` are available at | ||
[`keras_cv.utils.bounding_box`](https://github.com/keras-team/keras-cv/blob/master/keras_cv/utils/bounding_box.py). | ||
""" | ||
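The `-1` padding convention above can be illustrated without any KerasCV utilities. In this minimal pure-Python sketch (the helper name `pad_with_sentinels` is ours, not a KerasCV API), each image's box list is padded with sentinel rows of `-1` until every image has the same `num_boxes`:

```python
def pad_with_sentinels(images_boxes, pad_value=-1.0):
    """Pad per-image box lists to a common length with sentinel rows.

    Each box is [left, top, right, bottom, class] (or the 6-wide y_pred
    format); padded rows are filled entirely with `pad_value` so that
    metrics can recognize and ignore them. Assumes the first image has
    at least one box, from which the row width is inferred.
    """
    max_boxes = max(len(boxes) for boxes in images_boxes)
    width = len(images_boxes[0][0])  # 5 for y_true, 6 for y_pred
    padded = []
    for boxes in images_boxes:
        pad_rows = [[pad_value] * width] * (max_boxes - len(boxes))
        padded.append(list(boxes) + pad_rows)
    return padded


y_true_boxes = [
    [[0, 0, 10, 10, 1], [11, 12, 30, 30, 2]],  # image 1: two boxes
    [[0, 0, 10, 10, 1]],                       # image 2: one box
]
padded = pad_with_sentinels(y_true_boxes)
# Both images now have two rows; the second row of image 2 is all -1.
```

In practice you would perform the equivalent padding on Tensors (for example via the `keras_cv.utils.bounding_box` utilities linked above) rather than on Python lists.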
""" | ||
## Independent metric use | ||
The usage first pattern for KerasCV COCO metrics is to manually call | ||
`update_state()` and `result()` methods. This pattern is recommended for users | ||
who want finer grained control of their metric evaluation, or want to use a | ||
different format for `y_pred` in their model. | ||
Let's run through a quick code example. | ||
""" | ||
|
||
""" | ||
1.) First, we must construct our metric: | ||
""" | ||
|
||
import keras_cv | ||
|
||
# import all modules we will need in this example | ||
import tensorflow as tf | ||
from tensorflow import keras | ||
|
||
# only consider boxes with areas less than a 32x32 square. | ||
metric = keras_cv.metrics.COCORecall(class_ids=[1, 2, 3], area_range=(0, 32**2)) | ||
|
||
""" | ||
2.) Create Some Bounding Boxes: | ||
""" | ||
|
||
y_true = tf.ragged.stack( | ||
[ | ||
# image 1 | ||
tf.constant([[0, 0, 10, 10, 1], [11, 12, 30, 30, 2]], tf.float32), | ||
# image 2 | ||
tf.constant([[0, 0, 10, 10, 1]], tf.float32), | ||
] | ||
) | ||
y_pred = tf.ragged.stack( | ||
[ | ||
# predictions for image 1 | ||
tf.constant([[5, 5, 10, 10, 1, 0.9]], tf.float32), | ||
# predictions for image 2 | ||
tf.constant([[0, 0, 10, 10, 1, 1.0], [5, 5, 10, 10, 1, 0.9]], tf.float32), | ||
] | ||
) | ||
|
||
""" | ||
3.) Update metric state: | ||
""" | ||
|
||
metric.update_state(y_true, y_pred) | ||
|
||
""" | ||
4.) Evaluate the result: | ||
""" | ||
|
||
metric.result() | ||
|
||
""" | ||
Evaluating COCORecall for your object detection model is as simple as that! | ||
""" | ||
|
||
""" | ||
## Metric use in a model | ||
You can also leverage COCORecall in your model's training loop. Let's walk through this | ||
process. | ||
1.) Construct your the metric and a dummy model | ||
""" | ||
|
||
i = keras.layers.Input((None, None, 6)) | ||
model = keras.Model(i, i) | ||
|
||
""" | ||
2.) Create some fake bounding boxes: | ||
""" | ||
|
||
y_true = tf.constant([[[0, 0, 10, 10, 1], [5, 5, 10, 10, 1]]], tf.float32) | ||
y_pred = tf.constant([[[0, 0, 10, 10, 1, 1.0], [5, 5, 10, 10, 1, 0.9]]], tf.float32) | ||
|
||
""" | ||
3.) Create the metric and compile the model | ||
""" | ||
|
||
recall = keras_cv.metrics.COCORecall( | ||
max_detections=100, class_ids=[1], area_range=(0, 64**2), name="coco_recall" | ||
) | ||
model.compile(metrics=[recall]) | ||
|
||
""" | ||
4.) Use `model.evaluate()` to evaluate the metric | ||
""" | ||
|
||
model.evaluate(y_pred, y_true, return_dict=True) | ||
|
||
""" | ||
Looks great! That's all it takes to use KerasCV's COCO metrics to evaluate object | ||
detection models. | ||
""" | ||
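The `iou_thresholds` parameters described in the next section are cutoffs on intersection over union (IoU). Under the `[left, top, right, bottom]` corner convention used above, IoU can be computed as in this illustrative sketch (the `iou` helper is ours, not the KerasCV implementation):

```python
def iou(box_a, box_b):
    """Intersection over union of two [left, top, right, bottom] boxes."""
    # Corners of the intersection rectangle.
    left = max(box_a[0], box_b[0])
    top = max(box_a[1], box_b[1])
    right = min(box_a[2], box_b[2])
    bottom = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, right - left) * max(0.0, bottom - top)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


# The guide's image-1 prediction [5, 5, 10, 10] vs. ground truth [0, 0, 10, 10]:
# intersection is 5x5 = 25, union is 25 + 100 - 25 = 100, so IoU = 0.25,
# below the default minimum threshold of 0.5.
```

A detection whose best IoU against any ground truth box falls below every configured threshold is counted as a false positive rather than a true positive.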
""" | ||
## Supported constructor parameters | ||
KerasCV COCO Metrics are sufficiently parameterized to support all of the | ||
permutations evaluated in the original COCO challenge, all metrics evaluated in | ||
the accompanying `pycocotools` library, and more! | ||
### COCORecall | ||
The COCORecall constructor supports the following parameters | ||
| Name | Usage | | ||
| --------------- | ---------------------------------------------------------- | | ||
| iou\_thresholds | iou\_thresholds expects an iterable. This value is used as | | ||
: : a cutoff to determine the minimum intersection of unions : | ||
: : required for a classification sample to be considered a : | ||
: : true positive. If an iterable is passed, the result is the : | ||
: : average across IoU values passed in the : | ||
: : iterable.<br>Defaults to `range(0.5, 0.95, incr=0.05)` : | ||
| area\_range | area\_range specifies a range over which to evaluate the | | ||
: : metric. Only ground truth objects within the area\_range : | ||
: : are considered in the scoring.<br>Defaults to\: `\[0, : | ||
: : 1e5\*\*2\]` : | ||
| max\_detections | max\_detections is a value specifying the max number of | | ||
: : detections a model is allowed to make.<br>Defaults to\: : | ||
: : `100` : | ||
| class\_ids | When class\_ids is not None, the metric will only consider | | ||
: : boxes of the matching class label. This is useful when a : | ||
: : specific class is considered high priority. An example of : | ||
: : this would be providing the human and animal class indices : | ||
: : in the case of self driving cars.<br>To evaluate all : | ||
: : categories, users will pass `range(0, num\_classes)`. : | ||
### COCOMeanAveragePrecision | ||
The COCOMeanAveragePrecision constructor supports the following parameters | ||
| Name | Usage | | ||
| ------------------ | ------------------------------------------------------- | | ||
| \*\*kwargs | Passed to COCOBase.super() | | ||
| recall\_thresholds | recall\_thresholds is a list containing the | | ||
: : recall\_thresholds over which to consider in the : | ||
: : computation of MeanAveragePrecision. : | ||
| iou\_thresholds | iou\_thresholds expects an iterable. This value is used | | ||
: : as a cutoff to determine the minimum intersection of : | ||
: : unions required for a classification sample to be : | ||
: : considered a true positive. If an iterable is passed, : | ||
: : the result is the average across IoU values passed in : | ||
: : the iterable.<br>Defaults to `range(0.5, 0.95, : | ||
: : incr=0.05)` : | ||
| area\_range | area\_range specifies a range over which to evaluate | | ||
: : the metric. Only ground truth objects within the : | ||
: : area\_range are considered in the : | ||
: : scoring.<br><br>Defaults to\: `\[0, 1e5\*\*2\]` : | ||
| max\_detections | max\_detections is a value specifying the max number of | | ||
: : detections a model is allowed to make.<br><br>Defaults : | ||
: : to\: `100` : | ||
| class\_ids | When class\_ids is not None, the metric will only | | ||
: : consider boxes of the matching class label. This is : | ||
: : useful when a specific class is considered high : | ||
: : priority. An example of this would be providing the : | ||
: : human and animal class indices in the case of self : | ||
: : driving cars.<br>To evaluate all categories, users will : | ||
: : pass `range(0, num\_classes)`. : | ||
""" | ||
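To make the `area_range` behavior concrete: only ground truth boxes whose area falls within the configured range contribute to the score. A minimal pure-Python sketch of that filtering rule (the helper names `box_area` and `within_area_range` are ours, not KerasCV APIs):

```python
def box_area(box):
    """Area of a [left, top, right, bottom, ...] box."""
    return (box[2] - box[0]) * (box[3] - box[1])


def within_area_range(boxes, area_range):
    """Keep only boxes whose area falls inside [low, high]."""
    low, high = area_range
    return [b for b in boxes if low <= box_area(b) <= high]


boxes = [[0, 0, 10, 10, 1], [0, 0, 100, 100, 2]]
small = within_area_range(boxes, (0, 32**2))
# Only the 10x10 box (area 100) passes; the 100x100 box (area 10000) exceeds
# the 32**2 = 1024 upper bound.
```

This is why the first example above, constructed with `area_range=(0, 32**2)`, only scores objects that fit within a 32x32 square.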
""" | ||
## Conclusion & next steps | ||
KerasCV makes it easier than ever before to evaluate a Keras object detection model. | ||
Historically, users had to perform post training evaluation. With KerasCV, you can | ||
perform train time evaluation to see how these metrics evolve over time! | ||
As an additional exercise for readers, you can: | ||
- Configure `iou_thresholds`, `max_detections`, and `area_range` to reproduce the suite | ||
of metrics evaluted in `pycocotools` | ||
- Integrate COCO metrics into a RetinaNet using the | ||
[keras.io RetinaNet example](https://keras.io/examples/vision/retinanet/) | ||
""" |