Add the ability to generate TF-format guides (keras-team#87)
* Add the ability to generate TF-format guides

* Add link conversion to tf notebooks

* Use fully-qualified links
fchollet authored Jun 12, 2020
1 parent 6d27739 commit 979ac97
Showing 38 changed files with 12,237 additions and 1,186 deletions.
41 changes: 21 additions & 20 deletions guides/ipynb/customizing_what_happens_in_fit.ipynb
@@ -47,7 +47,7 @@
"API. You can do this whether you're building `Sequential` models, Functional API\n",
"models, or subclassed models.\n",
"\n",
"Let's see how that works.\n"
"Let's see how that works."
]
},
{
@@ -56,7 +56,8 @@
"colab_type": "text"
},
"source": [
"## Setup\n"
"## Setup\n",
"Requires TensorFlow 2.2 or later."
]
},
{
@@ -68,7 +69,7 @@
"outputs": [],
"source": [
"import tensorflow as tf\n",
"from tensorflow import keras\n"
"from tensorflow import keras"
]
},
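Context (not part of the diff): the new "Requires TensorFlow 2.2 or later" note above reflects that overridable `train_step` landed in TF 2.2. A minimal, hypothetical version guard one could place next to these imports, shown as a sketch only:

    import tensorflow as tf

    # Hypothetical guard for the "TensorFlow 2.2 or later" requirement noted above.
    major, minor = (int(part) for part in tf.__version__.split(".")[:2])
    assert (major, minor) >= (2, 2), "This guide requires TensorFlow 2.2+"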
{
@@ -100,7 +101,7 @@
"\n",
"Similarly, we call `self.compiled_metrics.update_state(y, y_pred)` to update the state\n",
"of the metrics that were passed in `compile()`, and we query results from\n",
"`self.metrics` at the end to retrieve their current value.\n"
"`self.metrics` at the end to retrieve their current value."
]
},
{
@@ -133,7 +134,7 @@
" self.compiled_metrics.update_state(y, y_pred)\n",
" # Return a dict mapping metric names to current value\n",
" return {m.name: m.result() for m in self.metrics}\n",
"\n"
""
]
},
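Context (not part of the diff): the hunk above shows only the tail of the overridden `train_step`. A sketch of the full method as the collapsed cell and the prose at @@ -100 describe it, with the class name `CustomModel` assumed:

    import tensorflow as tf
    from tensorflow import keras


    class CustomModel(keras.Model):
        def train_step(self, data):
            # Unpack the data passed to `fit()`.
            x, y = data
            with tf.GradientTape() as tape:
                y_pred = self(x, training=True)  # Forward pass
                # Loss function configured in `compile()`.
                loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
            # Compute gradients and update weights.
            trainable_vars = self.trainable_variables
            gradients = tape.gradient(loss, trainable_vars)
            self.optimizer.apply_gradients(zip(gradients, trainable_vars))
            # Update the metrics configured in `compile()`.
            self.compiled_metrics.update_state(y, y_pred)
            # Return a dict mapping metric names to current value.
            return {m.name: m.result() for m in self.metrics}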
{
@@ -142,7 +143,7 @@
"colab_type": "text"
},
"source": [
"Let's try this out:\n"
"Let's try this out:"
]
},
{
@@ -164,7 +165,7 @@
"# Just use `fit` as usual\n",
"x = np.random.random((1000, 32))\n",
"y = np.random.random((1000, 1))\n",
"model.fit(x, y, epochs=3)\n"
"model.fit(x, y, epochs=3)"
]
},
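Context (not part of the diff): the collapsed lines above this hunk construct and compile the model before the `fit()` call shown. A plausible reconstruction, with layer sizes assumed to match the random `(1000, 32)` data:

    import numpy as np
    from tensorflow import keras

    # Build and compile an instance of CustomModel (sketch; sizes assumed).
    inputs = keras.Input(shape=(32,))
    outputs = keras.layers.Dense(1)(inputs)
    model = CustomModel(inputs, outputs)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])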
{
@@ -177,7 +178,7 @@
"\n",
"Naturally, you could just skip passing a loss function in `compile()`, and instead do\n",
"everything *manually* in `train_step`. Likewise for metrics. Here's a lower-level\n",
"example, that only uses `compile()` to configure the optimizer:\n"
"example, that only uses `compile()` to configure the optimizer:"
]
},
{
@@ -225,7 +226,7 @@
"# Just use `fit` as usual -- you can use callbacks, etc.\n",
"x = np.random.random((1000, 32))\n",
"y = np.random.random((1000, 1))\n",
"model.fit(x, y, epochs=3)\n"
"model.fit(x, y, epochs=3)"
]
},
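Context (not part of the diff): the hunk shows only the `fit()` call. A sketch of the kind of lower-level `train_step` the prose describes, computing the loss and tracking metrics manually so that `compile()` only supplies the optimizer (the loss and metric choices here are illustrative):

    import tensorflow as tf
    from tensorflow import keras

    # Manually-managed trackers; `compile()` is only used for the optimizer.
    loss_tracker = keras.metrics.Mean(name="loss")
    mae_metric = keras.metrics.MeanAbsoluteError(name="mae")


    class CustomModel(keras.Model):
        def train_step(self, data):
            x, y = data
            with tf.GradientTape() as tape:
                y_pred = self(x, training=True)
                # Compute our own loss instead of `self.compiled_loss`.
                loss = keras.losses.mean_squared_error(y, y_pred)
            trainable_vars = self.trainable_variables
            gradients = tape.gradient(loss, trainable_vars)
            self.optimizer.apply_gradients(zip(gradients, trainable_vars))
            # Compute our own metrics.
            loss_tracker.update_state(loss)
            mae_metric.update_state(y, y_pred)
            return {"loss": loss_tracker.result(), "mae": mae_metric.result()}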
{
@@ -243,7 +244,7 @@
"- Unpack `sample_weight` from the `data` argument\n",
"- Pass it to `compiled_loss` & `compiled_metrics` (of course, you could also just apply\n",
"it manually if you don't rely on `compile()` for losses & metrics)\n",
"- That's it. That's the list.\n"
"- That's it. That's the list."
]
},
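Context (not part of the diff): a sketch of the `train_step` the list above describes, unpacking `sample_weight` from `data` and forwarding it to `compiled_loss` and `compiled_metrics`:

    import tensorflow as tf
    from tensorflow import keras


    class CustomModel(keras.Model):
        def train_step(self, data):
            # `data` may be (x, y) or (x, y, sample_weight).
            if len(data) == 3:
                x, y, sample_weight = data
            else:
                sample_weight = None
                x, y = data
            with tf.GradientTape() as tape:
                y_pred = self(x, training=True)
                # Forward the sample weights to the compiled loss.
                loss = self.compiled_loss(
                    y, y_pred, sample_weight=sample_weight,
                    regularization_losses=self.losses,
                )
            trainable_vars = self.trainable_variables
            gradients = tape.gradient(loss, trainable_vars)
            self.optimizer.apply_gradients(zip(gradients, trainable_vars))
            # Forward the sample weights to the compiled metrics as well.
            self.compiled_metrics.update_state(y, y_pred, sample_weight=sample_weight)
            return {m.name: m.result() for m in self.metrics}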
{
@@ -301,7 +302,7 @@
"x = np.random.random((1000, 32))\n",
"y = np.random.random((1000, 1))\n",
"sw = np.random.random((1000, 1))\n",
"model.fit(x, y, sample_weight=sw, epochs=3)\n"
"model.fit(x, y, sample_weight=sw, epochs=3)"
]
},
{
@@ -313,7 +314,7 @@
"## Providing your own evaluation step\n",
"\n",
"What if you want to do the same for calls to `model.evaluate()`? Then you would\n",
"override `test_step` in exactly the same way. Here's what it looks like:\n"
"override `test_step` in exactly the same way. Here's what it looks like:"
]
},
{
@@ -349,7 +350,7 @@
"# Evaluate with our custom test_step\n",
"x = np.random.random((1000, 32))\n",
"y = np.random.random((1000, 1))\n",
"model.evaluate(x, y)\n"
"model.evaluate(x, y)"
]
},
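Context (not part of the diff): the collapsed cell above this hunk overrides `test_step`; a minimal sketch consistent with the prose:

    import tensorflow as tf
    from tensorflow import keras


    class CustomModel(keras.Model):
        def test_step(self, data):
            x, y = data
            y_pred = self(x, training=False)  # Forward pass in inference mode
            # Update the loss and metrics configured in `compile()`.
            self.compiled_loss(y, y_pred, regularization_losses=self.losses)
            self.compiled_metrics.update_state(y, y_pred)
            return {m.name: m.result() for m in self.metrics}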
{
Expand All @@ -369,7 +370,7 @@
"\"real\").\n",
"- One optimizer for each.\n",
"- A loss function to train the discriminator.\n",
"\n"
""
]
},
{
@@ -412,7 +413,7 @@
" layers.Conv2D(1, (7, 7), padding=\"same\", activation=\"sigmoid\"),\n",
" ],\n",
" name=\"generator\",\n",
")\n"
")"
]
},
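Context (not part of the diff): the hunk shows only the generator's tail; the matching discriminator from the same collapsed cell plausibly looks like the following sketch (filter counts assumed):

    from tensorflow import keras
    from tensorflow.keras import layers

    # Sketch of the discriminator: a plain ConvNet classifier on 28x28x1 images.
    discriminator = keras.Sequential(
        [
            keras.Input(shape=(28, 28, 1)),
            layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"),
            layers.LeakyReLU(alpha=0.2),
            layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"),
            layers.LeakyReLU(alpha=0.2),
            layers.GlobalMaxPooling2D(),
            layers.Dense(1),  # Unnormalized "real vs. fake" score (a logit)
        ],
        name="discriminator",
    )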
{
@@ -422,7 +423,7 @@
},
"source": [
"Here's a feature-complete GAN class, overriding `compile()` to use its own signature,\n",
"and implementing the entire GAN algorithm in 17 lines in `train_step`:\n"
"and implementing the entire GAN algorithm in 17 lines in `train_step`:"
]
},
{
@@ -490,7 +491,7 @@
" grads = tape.gradient(g_loss, self.generator.trainable_weights)\n",
" self.g_optimizer.apply_gradients(zip(grads, self.generator.trainable_weights))\n",
" return {\"d_loss\": d_loss, \"g_loss\": g_loss}\n",
"\n"
""
]
},
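Context (not part of the diff): the hunk shows the last lines of `GAN.train_step` (the generator update). A sketch of the whole class the cell describes, with `compile()` overridden to take two optimizers and a loss function:

    import tensorflow as tf
    from tensorflow import keras


    class GAN(keras.Model):
        def __init__(self, discriminator, generator, latent_dim):
            super().__init__()
            self.discriminator = discriminator
            self.generator = generator
            self.latent_dim = latent_dim

        def compile(self, d_optimizer, g_optimizer, loss_fn):
            # Override `compile()` with a GAN-specific signature.
            super().compile()
            self.d_optimizer = d_optimizer
            self.g_optimizer = g_optimizer
            self.loss_fn = loss_fn

        def train_step(self, real_images):
            if isinstance(real_images, tuple):
                real_images = real_images[0]
            batch_size = tf.shape(real_images)[0]

            # Decode random latent vectors into fake images.
            random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))
            generated_images = self.generator(random_latent_vectors)

            # Label generated images 1 and real images 0.
            combined_images = tf.concat([generated_images, real_images], axis=0)
            labels = tf.concat(
                [tf.ones((batch_size, 1)), tf.zeros((batch_size, 1))], axis=0
            )

            # Train the discriminator.
            with tf.GradientTape() as tape:
                predictions = self.discriminator(combined_images)
                d_loss = self.loss_fn(labels, predictions)
            grads = tape.gradient(d_loss, self.discriminator.trainable_weights)
            self.d_optimizer.apply_gradients(
                zip(grads, self.discriminator.trainable_weights)
            )

            # Train the generator: it wins when the discriminator calls fakes "real".
            random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))
            misleading_labels = tf.zeros((batch_size, 1))
            with tf.GradientTape() as tape:
                predictions = self.discriminator(self.generator(random_latent_vectors))
                g_loss = self.loss_fn(misleading_labels, predictions)
            grads = tape.gradient(g_loss, self.generator.trainable_weights)
            self.g_optimizer.apply_gradients(zip(grads, self.generator.trainable_weights))
            return {"d_loss": d_loss, "g_loss": g_loss}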
{
@@ -499,7 +500,7 @@
"colab_type": "text"
},
"source": [
"Let's test-drive it:\n"
"Let's test-drive it:"
]
},
{
@@ -528,7 +529,7 @@
"\n",
"# To limit execution time, we only train on 100 batches. You can train on\n",
"# the entire dataset. You will need about 20 epochs to get nice results.\n",
"gan.fit(dataset.take(100), epochs=1)\n"
"gan.fit(dataset.take(100), epochs=1)"
]
},
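Context (not part of the diff): the collapsed lines above prepare the training data and instantiate the GAN before the `fit()` call shown. A plausible reconstruction, with the batch size, learning rates, and `latent_dim` assumed:

    import numpy as np
    import tensorflow as tf
    from tensorflow import keras

    batch_size = 64
    latent_dim = 128

    # MNIST digits, scaled to [0, 1] and shaped (28, 28, 1).
    (x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
    all_digits = np.concatenate([x_train, x_test]).astype("float32") / 255.0
    all_digits = np.reshape(all_digits, (-1, 28, 28, 1))
    dataset = tf.data.Dataset.from_tensor_slices(all_digits)
    dataset = dataset.shuffle(buffer_size=1024).batch(batch_size)

    gan = GAN(discriminator=discriminator, generator=generator, latent_dim=latent_dim)
    gan.compile(
        d_optimizer=keras.optimizers.Adam(learning_rate=0.0003),
        g_optimizer=keras.optimizers.Adam(learning_rate=0.0003),
        loss_fn=keras.losses.BinaryCrossentropy(from_logits=True),
    )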
{
@@ -537,7 +538,7 @@
"colab_type": "text"
},
"source": [
"The idea behind deep learning are simple, so why should their implementation be painful?\n"
"The idea behind deep learning are simple, so why should their implementation be painful?"
]
}
],
