Lint Keras.io Training & evaluation with the built-in methods (keras-…
8bitmp3 authored Apr 8, 2021
1 parent 7165ea1 commit 5442477
Showing 1 changed file with 35 additions and 32 deletions.
67 changes: 35 additions & 32 deletions guides/training_with_built_in_methods.py
@@ -19,12 +19,12 @@
## Introduction
This guide covers training, evaluation, and prediction (inference) models
-when using built-in APIs for training & validation (such as `model.fit()`,
-`model.evaluate()`, `model.predict()`).
+when using built-in APIs for training & validation (such as `Model.fit()`,
+`Model.evaluate()` and `Model.predict()`).
If you are interested in leveraging `fit()` while specifying your
-own training step function, see the guide
-["customizing what happens in `fit()`"](/guides/customizing_what_happens_in_fit/).
+own training step function, see the
+[Customizing what happens in `fit()` guide](/guides/customizing_what_happens_in_fit/).
If you are interested in writing your own training & evaluation loops from
scratch, see the guide
@@ -35,8 +35,8 @@
Sequential models, models built with the Functional API, and models written from
scratch via model subclassing.
-This guide doesn't cover distributed training. For distributed training, see
-our [guide to multi-gpu & distributed training](https://keras.io/guides/distributed_training/).
+This guide doesn't cover distributed training, which is covered in our
+[guide to multi-GPU & distributed training](https://keras.io/guides/distributed_training/).
"""

"""
@@ -97,8 +97,8 @@

"""
We call `fit()`, which will train the model by slicing the data into "batches" of size
"batch_size", and repeatedly iterating over the entire dataset for a given number of
"epochs".
`batch_size`, and repeatedly iterating over the entire dataset for a given number of
`epochs`.
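The batching arithmetic described above can be sketched in plain Python (this is an illustration of how data is sliced per epoch, not the Keras implementation; the helper name is hypothetical):

```python
import math

def iterate_in_batches(num_samples, batch_size, epochs):
    """Yield (epoch, start, end) index ranges the way fit() slices the data."""
    steps_per_epoch = math.ceil(num_samples / batch_size)
    for epoch in range(epochs):
        for step in range(steps_per_epoch):
            start = step * batch_size
            yield epoch, start, min(start + batch_size, num_samples)

# 10 samples with batch_size=4 -> 3 steps per epoch (slice sizes 4, 4, 2)
batches = list(iterate_in_batches(10, 4, 2))
```

Note that the final batch of each epoch may be smaller than `batch_size` when the dataset size is not an exact multiple of it.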
"""

print("Fit model on training data")
@@ -114,7 +114,7 @@
)

"""
The returned "history" object holds a record of the loss values and metric values
The returned `history` object holds a record of the loss values and metric values
during training:
"""

@@ -159,8 +159,8 @@
If your model has multiple outputs, you can specify different losses and metrics for
each output, and you can modulate the contribution of each output to the total loss of
-the model. You will find more details about this in the section **"Passing data to
-multi-input, multi-output models"**.
+the model. You will find more details about this in the **Passing data to multi-input,
+multi-output models** section.
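The idea of modulating each output's contribution to the total loss can be sketched with plain Python (the output names and weights here are hypothetical, chosen only for illustration):

```python
def total_loss(per_output_losses, loss_weights=None):
    """Combine per-output loss values into the single scalar that is minimized."""
    if loss_weights is None:
        # Unweighted: every output contributes equally.
        loss_weights = {name: 1.0 for name in per_output_losses}
    return sum(loss_weights[name] * value
               for name, value in per_output_losses.items())

# A hypothetical model with a "score" output and a "class" output.
loss = total_loss({"score": 0.25, "class": 1.5},
                  loss_weights={"score": 2.0, "class": 1.0})
```

Upweighting one output (here `"score"`) makes the optimizer prioritize reducing that output's error.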
Note that if you're satisfied with the default settings, in many cases the optimizer,
loss, and metrics can be specified via string identifiers as a shortcut:
@@ -200,8 +200,8 @@ def get_compiled_model():
"""
### Many built-in optimizers, losses, and metrics are available
-In general, you won't have to create from scratch your own losses, metrics, or
-optimizers, because what you need is likely already part of the Keras API:
+In general, you won't have to create your own losses, metrics, or optimizers
+from scratch, because what you need is likely to be already part of the Keras API:
Optimizers:
@@ -228,10 +228,11 @@ def get_compiled_model():
"""
### Custom losses
-There are two ways to provide custom losses with Keras. The first example creates a
-function that accepts inputs `y_true` and `y_pred`. The following example shows a loss
-function that computes the mean squared error between the real data and the
-predictions:
+If you need to create a custom loss, Keras provides two ways to do so.
+The first method involves creating a function that accepts inputs `y_true` and
+`y_pred`. The following example shows a loss function that computes the mean squared
+error between the real data and the predictions:
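The math behind such a loss can be illustrated with NumPy (a Keras loss would use TensorFlow ops on tensors; plain NumPy is used here only to show the computation):

```python
import numpy as np

def custom_mean_squared_error(y_true, y_pred):
    # Mean of squared differences over the last axis -> one loss per sample.
    return np.mean(np.square(y_true - y_pred), axis=-1)

y_true = np.array([[0.0, 1.0], [1.0, 1.0]])
y_pred = np.array([[0.0, 0.0], [1.0, 0.0]])
loss = custom_mean_squared_error(y_true, y_pred)
```

Reducing over the last axis, as Keras losses conventionally do, leaves one loss value per sample that is then averaged over the batch.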
"""


@@ -295,10 +296,10 @@ def call(self, y_true, y_pred):
- `reset_states(self)`, which reinitializes the state of the metric.
State update and results computation are kept separate (in `update_state()` and
-`result()`, respectively) because in some cases, results computation might be very
-expensive, and would only be done periodically.
+`result()`, respectively) because in some cases, the results computation might be very
+expensive and would only be done periodically.
-Here's a simple example showing how to implement a `CategoricalTruePositives` metric,
+Here's a simple example showing how to implement a `CategoricalTruePositives` metric
that counts how many samples were correctly classified as belonging to a given class:
"""

@@ -337,7 +338,7 @@ def reset_states(self):
### Handling losses and metrics that don't fit the standard signature
The overwhelming majority of losses and metrics can be computed from `y_true` and
-`y_pred`, where `y_pred` is an output of your model. But not all of them. For
+`y_pred`, where `y_pred` is an output of your model -- but not all of them. For
instance, a regularization loss may only require the activation of a layer (there are
no targets in this case), and this activation may not be a model output.
@@ -503,7 +504,7 @@ def call(self, targets, logits, sample_weights=None):
validation".
The way the validation is computed is by taking the last x% samples of the arrays
-received by the fit call, before any shuffling.
+received by the `fit()` call, before any shuffling.
Note that you can only use `validation_split` when training with NumPy data.
"""
@@ -516,7 +517,7 @@ def call(self, targets, logits, sample_weights=None):
In the past few paragraphs, you've seen how to handle losses, metrics, and optimizers,
and you've seen how to use the `validation_data` and `validation_split` arguments in
-fit, when your data is passed as NumPy arrays.
+`fit()`, when your data is passed as NumPy arrays.
Let's now take a look at the case where your data comes in the form of a
`tf.data.Dataset` object.
@@ -802,7 +803,7 @@ def __getitem__(self, idx):
about models that have multiple inputs or outputs?
Consider the following model, which has an image input of shape `(32, 32, 3)` (that's
-`(height, width, channels)`) and a timeseries input of shape `(None, 10)` (that's
+`(height, width, channels)`) and a time series input of shape `(None, 10)` (that's
`(timesteps, features)`). Our model will have two outputs computed from the
combination of these inputs: a "score" (of shape `(1,)`) and a probability
distribution over five classes (of shape `(5,)`).
@@ -907,8 +908,8 @@ def __getitem__(self, idx):
)

"""
-You could also chose not to compute a loss for certain outputs, if these outputs meant
-for prediction but not for training:
+You could also choose not to compute a loss for certain outputs, if these outputs are
+meant for prediction but not for training:
"""

# List loss version
@@ -924,7 +925,7 @@ def __getitem__(self, idx):
)

"""
-Passing data to a multi-input or multi-output model in fit works in a similar way as
+Passing data to a multi-input or multi-output model in `fit()` works in a similar way as
specifying a loss function in compile: you can pass **lists of NumPy arrays** (with
1:1 mapping to the outputs that received a loss function) or **dicts mapping output
names to NumPy arrays**.
@@ -971,8 +972,8 @@ def __getitem__(self, idx):
## Using callbacks
Callbacks in Keras are objects that are called at different points during training (at
-the start of an epoch, at the end of a batch, at the end of an epoch, etc.) and which
-can be used to implement behaviors such as:
+the start of an epoch, at the end of a batch, at the end of an epoch, etc.). They
+can be used to implement certain behaviors, such as:
- Doing validation at different points during training (beyond the built-in per-epoch
validation)
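The hook mechanism described above can be sketched as a minimal protocol in plain Python (a toy training loop stands in for `fit()`; the class and method names mirror the Keras convention but this is not the Keras implementation):

```python
class Callback:
    """Minimal sketch of the callback protocol invoked during training."""

    def on_epoch_begin(self, epoch, logs=None):
        pass

    def on_epoch_end(self, epoch, logs=None):
        pass

class EpochLogger(Callback):
    def __init__(self):
        self.seen = []

    def on_epoch_end(self, epoch, logs=None):
        self.seen.append((epoch, logs["loss"]))

def train(num_epochs, callbacks):
    for epoch in range(num_epochs):
        for cb in callbacks:
            cb.on_epoch_begin(epoch)
        logs = {"loss": 1.0 / (epoch + 1)}  # stand-in for a real training step
        for cb in callbacks:
            cb.on_epoch_end(epoch, logs)

logger = EpochLogger()
train(3, [logger])
```

The training loop drives the hooks and passes a `logs` dict of current metric values, which is how callbacks observe training without being part of the model.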
@@ -1012,6 +1013,8 @@ def __getitem__(self, idx):
"""
### Many built-in callbacks are available
+There are many built-in callbacks already available in Keras, such as:
- `ModelCheckpoint`: Periodically save the model.
- `EarlyStopping`: Stop training when training is no longer improving the validation
metrics.
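The patience-based logic behind `EarlyStopping` can be sketched in plain Python (a simplified illustration, not the Keras implementation, which also supports `min_delta`, baselines, and weight restoration):

```python
class EarlyStopping:
    """Stop when the monitored value has not improved for `patience` epochs."""

    def __init__(self, patience=2):
        self.patience = patience
        self.best = float("inf")
        self.wait = 0
        self.stopped_epoch = None

    def on_epoch_end(self, epoch, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.stopped_epoch = epoch
                return True  # signal the loop to stop training
        return False

stopper = EarlyStopping(patience=2)
for epoch, loss in enumerate([1.0, 0.8, 0.9, 0.85, 0.95]):
    if stopper.on_epoch_end(epoch, loss):
        break
```

With the hypothetical loss sequence above, the best value (0.8) at epoch 1 is never beaten, so training stops two epochs later.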
@@ -1145,7 +1148,7 @@ def make_or_restore_model():
### Using callbacks to implement a dynamic learning rate schedule
A dynamic learning rate schedule (for instance, decreasing the learning rate when the
-validation loss is no longer improving) cannot be achieved with these schedule objects
+validation loss is no longer improving) cannot be achieved with these schedule objects,
since the optimizer does not have access to validation metrics.
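The plateau rule that such a callback applies (Keras ships it as `ReduceLROnPlateau`) can be sketched in plain Python; this simplified version ignores `min_delta`, cooldown, and minimum-rate handling:

```python
def reduce_lr_on_plateau(lr, val_losses, factor=0.5, patience=2):
    """Shrink the learning rate if the last `patience` epochs show no improvement."""
    if len(val_losses) > patience:
        best_before = min(val_losses[:-patience])
        recent_best = min(val_losses[-patience:])
        if recent_best >= best_before:  # no new best recently -> plateau
            return lr * factor
    return lr

# Hypothetical validation losses: improvement stalls after epoch 1.
lr = reduce_lr_on_plateau(0.1, [1.0, 0.7, 0.75, 0.72])
```

Because the decision depends on validation losses gathered epoch by epoch, it has to live in a callback rather than in an optimizer-side schedule object.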
However, callbacks do have access to all metrics, including validation metrics! You can
@@ -1157,7 +1160,7 @@ def make_or_restore_model():
## Visualizing loss and metrics during training
The best way to keep an eye on your model during training is to use
-[TensorBoard](https://www.tensorflow.org/tensorboard), a browser-based application
+[TensorBoard](https://www.tensorflow.org/tensorboard) -- a browser-based application
that you can run locally that provides you with:
- Live plots of the loss and metrics for training and evaluation
@@ -1176,7 +1179,7 @@ def make_or_restore_model():
"""
### Using the TensorBoard callback
-The easiest way to use TensorBoard with a Keras model and the fit method is the
+The easiest way to use TensorBoard with a Keras model and the `fit()` method is the
`TensorBoard` callback.
In the simplest case, just specify where you want the callback to write logs, and
