update built in methods training guide
fchollet committed Apr 8, 2021
1 parent 5442477 commit 62f2149
Showing 3 changed files with 166 additions and 145 deletions.
67 changes: 35 additions & 32 deletions guides/ipynb/training_with_built_in_methods.ipynb
@@ -45,12 +45,12 @@
"## Introduction\n",
"\n",
"This guide covers training, evaluation, and prediction (inference) models\n",
"when using built-in APIs for training & validation (such as `model.fit()`,\n",
"`model.evaluate()`, `model.predict()`).\n",
"when using built-in APIs for training & validation (such as `Model.fit()`,\n",
"`Model.evaluate()` and `Model.predict()`).\n",
"\n",
"If you are interested in leveraging `fit()` while specifying your\n",
"own training step function, see the guide\n",
"[\"customizing what happens in `fit()`\"](/guides/customizing_what_happens_in_fit/).\n",
"own training step function, see the\n",
"[Customizing what happens in `fit()` guide](/guides/customizing_what_happens_in_fit/).\n",
"\n",
"If you are interested in writing your own training & evaluation loops from\n",
"scratch, see the guide\n",
@@ -61,8 +61,8 @@
"Sequential models, models built with the Functional API, and models written from\n",
"scratch via model subclassing.\n",
"\n",
"This guide doesn't cover distributed training. For distributed training, see\n",
"our [guide to multi-gpu & distributed training](/guides/distributed_training/)."
"This guide doesn't cover distributed training, which is covered in our\n",
"[guide to multi-GPU & distributed training](https://keras.io/guides/distributed_training/)."
]
},
{
@@ -170,8 +170,8 @@
},
"source": [
"We call `fit()`, which will train the model by slicing the data into \"batches\" of size\n",
"\"batch_size\", and repeatedly iterating over the entire dataset for a given number of\n",
"\"epochs\"."
"`batch_size`, and repeatedly iterating over the entire dataset for a given number of\n",
"`epochs`."
]
},
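As a quick illustration of the call described above, a minimal sketch, assuming an already-compiled `model` and NumPy arrays `x_train` / `y_train` (placeholder names, not defined in this excerpt):

```python
# Train for 2 epochs, iterating over the data in batches of 64 samples.
history = model.fit(x_train, y_train, batch_size=64, epochs=2)
```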
{
@@ -201,7 +201,7 @@
"colab_type": "text"
},
"source": [
"The returned \"history\" object holds a record of the loss values and metric values\n",
"The returned `history` object holds a record of the loss values and metric values\n",
"during training:"
]
},
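For orientation, a small sketch of inspecting that record; the exact keys depend on the loss and metrics passed to `compile()`:

```python
# `history.history` maps each loss/metric name to a list of per-epoch values,
# e.g. {"loss": [...], "sparse_categorical_accuracy": [...]}.
print(history.history.keys())
print(history.history["loss"])
```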
@@ -293,8 +293,8 @@
"\n",
"If your model has multiple outputs, you can specify different losses and metrics for\n",
"each output, and you can modulate the contribution of each output to the total loss of\n",
"the model. You will find more details about this in the section **\"Passing data to\n",
"multi-input, multi-output models\"**.\n",
"the model. You will find more details about this in the **Passing data to multi-input,\n",
"multi-output models** section.\n",
"\n",
"Note that if you're satisfied with the default settings, in many cases the optimizer,\n",
"loss, and metrics can be specified via string identifiers as a shortcut:"
@@ -362,8 +362,8 @@
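A sketch of the string-identifier shortcut mentioned just above; the particular optimizer, loss, and metric names are common examples, not the only valid ones:

```python
# Strings are resolved to the corresponding optimizer, loss, and metric objects.
model.compile(
    optimizer="rmsprop",
    loss="sparse_categorical_crossentropy",
    metrics=["sparse_categorical_accuracy"],
)
```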
"source": [
"### Many built-in optimizers, losses, and metrics are available\n",
"\n",
"In general, you won't have to create from scratch your own losses, metrics, or\n",
"optimizers, because what you need is likely already part of the Keras API:\n",
"In general, you won't have to create your own losses, metrics, or optimizers\n",
"from scratch, because what you need is likely to be already part of the Keras API:\n",
"\n",
"Optimizers:\n",
"\n",
@@ -395,10 +395,11 @@
"source": [
"### Custom losses\n",
"\n",
"There are two ways to provide custom losses with Keras. The first example creates a\n",
"function that accepts inputs `y_true` and `y_pred`. The following example shows a loss\n",
"function that computes the mean squared error between the real data and the\n",
"predictions:"
"If you need to create a custom loss, Keras provides two ways to do so.\n",
"\n",
"The first method involves creating a function that accepts inputs `y_true` and\n",
"`y_pred`. The following example shows a loss function that computes the mean squared\n",
"error between the real data and the predictions:"
]
},
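A minimal sketch of such a function-based custom loss; the function name is illustrative and `model` is assumed to exist already:

```python
import tensorflow as tf


def custom_mean_squared_error(y_true, y_pred):
    # Per-sample mean squared error between targets and predictions.
    return tf.math.reduce_mean(tf.square(y_true - y_pred), axis=-1)


# The function can then be passed directly to compile().
model.compile(optimizer="adam", loss=custom_mean_squared_error)
```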
{
@@ -490,10 +491,10 @@
"- `reset_states(self)`, which reinitializes the state of the metric.\n",
"\n",
"State update and results computation are kept separate (in `update_state()` and\n",
"`result()`, respectively) because in some cases, results computation might be very\n",
"expensive, and would only be done periodically.\n",
"`result()`, respectively) because in some cases, the results computation might be very\n",
"expensive and would only be done periodically.\n",
"\n",
"Here's a simple example showing how to implement a `CategoricalTruePositives` metric,\n",
"Here's a simple example showing how to implement a `CategoricalTruePositives` metric\n",
"that counts how many samples were correctly classified as belonging to a given class:"
]
},
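A sketch of what such a subclassed metric could look like, assuming integer class targets and `keras` imported from `tensorflow`:

```python
import tensorflow as tf
from tensorflow import keras


class CategoricalTruePositives(keras.metrics.Metric):
    def __init__(self, name="categorical_true_positives", **kwargs):
        super().__init__(name=name, **kwargs)
        # Scalar weight that accumulates the count across batches.
        self.true_positives = self.add_weight(name="ctp", initializer="zeros")

    def update_state(self, y_true, y_pred, sample_weight=None):
        # Turn class probabilities into predicted class indices and compare
        # them with the integer targets.
        y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1))
        values = tf.cast(y_true, "int32") == tf.cast(y_pred, "int32")
        values = tf.cast(values, "float32")
        if sample_weight is not None:
            sample_weight = tf.cast(sample_weight, "float32")
            values = tf.multiply(values, sample_weight)
        self.true_positives.assign_add(tf.reduce_sum(values))

    def result(self):
        return self.true_positives

    def reset_states(self):
        # Called at the start of each epoch to reinitialize the state.
        self.true_positives.assign(0.0)
```

An instance of such a metric could then be passed to `compile()` via `metrics=[CategoricalTruePositives()]`.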
@@ -546,7 +547,7 @@
"### Handling losses and metrics that don't fit the standard signature\n",
"\n",
"The overwhelming majority of losses and metrics can be computed from `y_true` and\n",
"`y_pred`, where `y_pred` is an output of your model. But not all of them. For\n",
"`y_pred`, where `y_pred` is an output of your model -- but not all of them. For\n",
"instance, a regularization loss may only require the activation of a layer (there are\n",
"no targets in this case), and this activation may not be a model output.\n",
"\n",
@@ -787,7 +788,7 @@
"validation\".\n",
"\n",
"The way the validation is computed is by taking the last x% samples of the arrays\n",
"received by the fit call, before any shuffling.\n",
"received by the `fit()` call, before any shuffling.\n",
"\n",
"Note that you can only use `validation_split` when training with NumPy data."
]
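A hedged sketch of the argument in use, with the same placeholder NumPy arrays as before:

```python
# Hold out the last 20% of x_train/y_train (taken before shuffling) for validation.
history = model.fit(x_train, y_train, batch_size=64, validation_split=0.2, epochs=1)
```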
@@ -814,7 +815,7 @@
"\n",
"In the past few paragraphs, you've seen how to handle losses, metrics, and optimizers,\n",
"and you've seen how to use the `validation_data` and `validation_split` arguments in\n",
"fit, when your data is passed as NumPy arrays.\n",
"`fit()`, when your data is passed as NumPy arrays.\n",
"\n",
"Let's now take a look at the case where your data comes in the form of a\n",
"`tf.data.Dataset` object.\n",
@@ -1217,7 +1218,7 @@
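As a minimal sketch of the `tf.data.Dataset` workflow introduced just above (placeholder arrays, arbitrary shuffle buffer and batch size):

```python
import tensorflow as tf

# Wrap in-memory NumPy arrays into a Dataset, then shuffle and batch it.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)

# The Dataset already yields batches, so no batch_size argument is passed.
model.fit(train_dataset, epochs=3)
```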
"about models that have multiple inputs or outputs?\n",
"\n",
"Consider the following model, which has an image input of shape `(32, 32, 3)` (that's\n",
"`(height, width, channels)`) and a timeseries input of shape `(None, 10)` (that's\n",
"`(height, width, channels)`) and a time series input of shape `(None, 10)` (that's\n",
"`(timesteps, features)`). Our model will have two outputs computed from the\n",
"combination of these inputs: a \"score\" (of shape `(1,)`) and a probability\n",
"distribution over five classes (of shape `(5,)`)."
@@ -1406,8 +1407,8 @@
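One possible sketch of a model with those inputs and outputs, built with the Functional API; the layer choices and the names `img_input`, `ts_input`, `score_output`, and `class_output` are illustrative assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

image_input = keras.Input(shape=(32, 32, 3), name="img_input")
timeseries_input = keras.Input(shape=(None, 10), name="ts_input")

# Reduce each input to a small feature vector.
x1 = layers.Conv2D(3, 3)(image_input)
x1 = layers.GlobalMaxPooling2D()(x1)
x2 = layers.Conv1D(3, 3)(timeseries_input)
x2 = layers.GlobalMaxPooling1D()(x2)

x = layers.concatenate([x1, x2])

# Two outputs: a scalar "score" and a distribution over five classes.
score_output = layers.Dense(1, name="score_output")(x)
class_output = layers.Dense(5, name="class_output")(x)

model = keras.Model(
    inputs=[image_input, timeseries_input], outputs=[score_output, class_output]
)
```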
"colab_type": "text"
},
"source": [
"You could also chose not to compute a loss for certain outputs, if these outputs meant\n",
"for prediction but not for training:"
"You could also choose not to compute a loss for certain outputs, if these outputs are\n",
"meant for prediction but not for training:"
]
},
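A sketch of that, reusing the illustrative output names from the model sketch above: only `class_output` receives a loss, so `score_output` is still predicted but does not drive training:

```python
from tensorflow import keras

# With the dict form, outputs that should not contribute to the total loss
# can simply be omitted.
model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss={"class_output": keras.losses.CategoricalCrossentropy()},
)
```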
{
@@ -1437,7 +1438,7 @@
"colab_type": "text"
},
"source": [
"Passing data to a multi-input or multi-output model in fit works in a similar way as\n",
"Passing data to a multi-input or multi-output model in `fit()` works in a similar way as\n",
"specifying a loss function in compile: you can pass **lists of NumPy arrays** (with\n",
"1:1 mapping to the outputs that received a loss function) or **dicts mapping output\n",
"names to NumPy arrays**."
@@ -1512,8 +1513,8 @@
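Continuing with the same illustrative names, and assuming the model was compiled with a loss for each output, a hedged sketch of the dict form:

```python
import numpy as np

# Placeholder random data matching the input/output names used above.
img_data = np.random.random_sample(size=(100, 32, 32, 3))
ts_data = np.random.random_sample(size=(100, 20, 10))
score_targets = np.random.random_sample(size=(100, 1))
class_targets = np.random.random_sample(size=(100, 5))

model.fit(
    {"img_input": img_data, "ts_input": ts_data},
    {"score_output": score_targets, "class_output": class_targets},
    batch_size=32,
    epochs=1,
)
```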
"## Using callbacks\n",
"\n",
"Callbacks in Keras are objects that are called at different points during training (at\n",
"the start of an epoch, at the end of a batch, at the end of an epoch, etc.) and which\n",
"can be used to implement behaviors such as:\n",
"the start of an epoch, at the end of a batch, at the end of an epoch, etc.). They\n",
"can be used to implement certain behaviors, such as:\n",
"\n",
"- Doing validation at different points during training (beyond the built-in per-epoch\n",
"validation)\n",
@@ -1567,6 +1568,8 @@
"source": [
"### Many built-in callbacks are available\n",
"\n",
"There are many built-in callbacks already available in Keras, such as:\n",
"\n",
"- `ModelCheckpoint`: Periodically save the model.\n",
"- `EarlyStopping`: Stop training when training is no longer improving the validation\n",
"metrics.\n",
@@ -1761,7 +1764,7 @@
"### Using callbacks to implement a dynamic learning rate schedule\n",
"\n",
"A dynamic learning rate schedule (for instance, decreasing the learning rate when the\n",
"validation loss is no longer improving) cannot be achieved with these schedule objects\n",
"validation loss is no longer improving) cannot be achieved with these schedule objects,\n",
"since the optimizer does not have access to validation metrics.\n",
"\n",
"However, callbacks do have access to all metrics, including validation metrics! You can\n",
@@ -1778,7 +1781,7 @@
"## Visualizing loss and metrics during training\n",
"\n",
"The best way to keep an eye on your model during training is to use\n",
"[TensorBoard](https://www.tensorflow.org/tensorboard), a browser-based application\n",
"[TensorBoard](https://www.tensorflow.org/tensorboard) -- a browser-based application\n",
"that you can run locally that provides you with:\n",
"\n",
"- Live plots of the loss and metrics for training and evaluation\n",
@@ -1802,7 +1805,7 @@
"source": [
"### Using the TensorBoard callback\n",
"\n",
"The easiest way to use TensorBoard with a Keras model and the fit method is the\n",
"The easiest way to use TensorBoard with a Keras model and the `fit()` method is the\n",
"`TensorBoard` callback.\n",
"\n",
"In the simplest case, just specify where you want the callback to write logs, and\n",