Commit 3d28583: Update generated files

fchollet committed Aug 31, 2020
1 parent e5f9c43 commit 3d28583
Showing 22 changed files with 375 additions and 363 deletions.
examples/keras_recipes/md/tfrecord.md (2 changes: 1 addition & 1 deletion)
@@ -5,7 +5,7 @@
**Last modified:** 2020/08/07<br>


- <img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/keras_recipesipynb/tfrecord.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/keras_recipestfrecord.py)
+ <img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/keras_recipes/ipynb/tfrecord.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/keras_recipes/tfrecord.py)


**Description:** Loading TFRecords for computer vision models.
examples/nlp/ipynb/pretrained_word_embeddings.ipynb (2 changes: 1 addition & 1 deletion)
@@ -308,7 +308,7 @@
"outputs": [],
"source": [
"voc = vectorizer.get_vocabulary()\n",
"word_index = dict(zip(voc, range(2, len(voc))))"
"word_index = dict(zip(voc, range(len(voc))))"
]
},
{
examples/nlp/md/pretrained_word_embeddings.md (50 changes: 26 additions & 24 deletions)
@@ -6,7 +6,7 @@
**Description:** Text classification on the Newsgroup20 dataset using pre-trained GloVe word embeddings.


- <img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/nlpipynb/pretrained_word_embeddings.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/nlppretrained_word_embeddings.py)
+ <img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/nlp/ipynb/pretrained_word_embeddings.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/nlp/pretrained_word_embeddings.py)



@@ -257,7 +257,7 @@ Here's a dict mapping words to their indices:

```python
voc = vectorizer.get_vocabulary()
- word_index = dict(zip(voc, range(2, len(voc))))
+ word_index = dict(zip(voc, range(len(voc))))
```

As you can see, we obtain the same encoding as above for our test sentence:
@@ -273,7 +273,7 @@ test = ["the", "cat", "sat", "on", "the", "mat"]

<div class="k-default-codeblock">
```
- [4, 3699, 1688, 17, 4, 5945]
+ [2, 3697, 1686, 15, 2, 5943]
```
</div>
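The removed `+2` offset reflects a behavior change in `TextVectorization`: `get_vocabulary()` now returns the full vocabulary, with the padding token `""` at index 0 and the OOV token `"[UNK]"` at index 1, so `range(len(voc))` already lines up with the indices the layer emits (which is also why every token id in the output above shifts down by 2). A minimal sketch of the new behavior, assuming TF 2.3 semantics:

```python
import tensorflow as tf
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization

vectorizer = TextVectorization(max_tokens=20000, output_sequence_length=200)
corpus = tf.data.Dataset.from_tensor_slices(["the cat sat on the mat"]).batch(1)
vectorizer.adapt(corpus)

voc = vectorizer.get_vocabulary()
print(voc[:2])  # ['', '[UNK]'] -- special tokens now occupy indices 0 and 1

# Hence no manual offset: zipping against range(len(voc)) matches the layer.
word_index = dict(zip(voc, range(len(voc))))
assert word_index["the"] == 2  # "the" is the most frequent real token here
```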
@@ -344,7 +344,7 @@ print("Converted %d words (%d misses)" % (hits, misses))

<div class="k-default-codeblock">
```
- Converted 17997 words (2001 misses)
+ Converted 17999 words (2001 misses)
```
</div>
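The two extra hits follow directly from the `word_index` fix: `zip(voc, range(2, len(voc)))` stopped at the shorter iterable and silently dropped the last two vocabulary entries, both of which happen to have GloVe vectors (17997 + 2001 = 19998 mapped words before, 17999 + 2001 = 20000 after). For context, a sketch of the conversion loop that prints this line; `voc`, `word_index`, and `embeddings_index` (word to GloVe vector, parsed from `glove.6B.100d.txt`) are assumed from the surrounding example:

```python
import numpy as np

embedding_dim = 100        # matches glove.6B.100d
num_tokens = len(voc) + 2  # hypothetical headroom for padding/OOV rows

hits, misses = 0, 0
embedding_matrix = np.zeros((num_tokens, embedding_dim))
for word, i in word_index.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embedding_matrix[i] = embedding_vector  # copy the pretrained vector
        hits += 1
    else:
        misses += 1  # word not in GloVe; its row stays all-zeros
print("Converted %d words (%d misses)" % (hits, misses))
```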
@@ -453,45 +453,47 @@ model.fit(x_train, y_train, batch_size=128, epochs=20, validation_data=(x_val, y
<div class="k-default-codeblock">
```
Epoch 1/20
- 125/125 [==============================] - 8s 59ms/step - loss: 3.0124 - acc: 0.0669 - val_loss: 2.9522 - val_acc: 0.0725
+ 125/125 [==============================] - 8s 57ms/step - loss: 2.8766 - acc: 0.0945 - val_loss: 2.0770 - val_acc: 0.2956
Epoch 2/20
- 125/125 [==============================] - 7s 58ms/step - loss: 2.9439 - acc: 0.0888 - val_loss: 2.9447 - val_acc: 0.0818
+ 125/125 [==============================] - 7s 58ms/step - loss: 2.0792 - acc: 0.2887 - val_loss: 1.6626 - val_acc: 0.4076
Epoch 3/20
- 125/125 [==============================] - 8s 60ms/step - loss: 2.8670 - acc: 0.1072 - val_loss: 2.8869 - val_acc: 0.1015
+ 125/125 [==============================] - 7s 60ms/step - loss: 1.5632 - acc: 0.4527 - val_loss: 1.3000 - val_acc: 0.5609
Epoch 4/20
- 125/125 [==============================] - 8s 61ms/step - loss: 2.7579 - acc: 0.1387 - val_loss: 2.7014 - val_acc: 0.1370
+ 125/125 [==============================] - 8s 60ms/step - loss: 1.2945 - acc: 0.5612 - val_loss: 1.2282 - val_acc: 0.5944
Epoch 5/20
- 125/125 [==============================] - 8s 61ms/step - loss: 2.6353 - acc: 0.1703 - val_loss: 2.5607 - val_acc: 0.1870
+ 125/125 [==============================] - 8s 61ms/step - loss: 1.1137 - acc: 0.6209 - val_loss: 1.0695 - val_acc: 0.6409
Epoch 6/20
- 125/125 [==============================] - 8s 63ms/step - loss: 2.4557 - acc: 0.2215 - val_loss: 2.4721 - val_acc: 0.2081
+ 125/125 [==============================] - 8s 61ms/step - loss: 0.9556 - acc: 0.6718 - val_loss: 1.1743 - val_acc: 0.6124
Epoch 7/20
- 125/125 [==============================] - 8s 64ms/step - loss: 2.3065 - acc: 0.2597 - val_loss: 2.3387 - val_acc: 0.2478
+ 125/125 [==============================] - 8s 61ms/step - loss: 0.8235 - acc: 0.7172 - val_loss: 1.0126 - val_acc: 0.6602
Epoch 8/20
- 125/125 [==============================] - 8s 65ms/step - loss: 2.1402 - acc: 0.3081 - val_loss: 2.5133 - val_acc: 0.2226
+ 125/125 [==============================] - 8s 65ms/step - loss: 0.7268 - acc: 0.7475 - val_loss: 1.0608 - val_acc: 0.6632
Epoch 9/20
- 125/125 [==============================] - 8s 64ms/step - loss: 1.9627 - acc: 0.3493 - val_loss: 2.2369 - val_acc: 0.2806
+ 125/125 [==============================] - 8s 63ms/step - loss: 0.6441 - acc: 0.7759 - val_loss: 1.0606 - val_acc: 0.6664
Epoch 10/20
- 125/125 [==============================] - 8s 66ms/step - loss: 1.8266 - acc: 0.3889 - val_loss: 2.2180 - val_acc: 0.3103
+ 125/125 [==============================] - 8s 63ms/step - loss: 0.5409 - acc: 0.8120 - val_loss: 1.0380 - val_acc: 0.6884
Epoch 11/20
- 125/125 [==============================] - 8s 67ms/step - loss: 1.6598 - acc: 0.4411 - val_loss: 2.2910 - val_acc: 0.2988
+ 125/125 [==============================] - 8s 65ms/step - loss: 0.4846 - acc: 0.8273 - val_loss: 1.1073 - val_acc: 0.6729
Epoch 12/20
- 125/125 [==============================] - 8s 68ms/step - loss: 1.5119 - acc: 0.4961 - val_loss: 2.3962 - val_acc: 0.3083
+ 125/125 [==============================] - 8s 62ms/step - loss: 0.4173 - acc: 0.8553 - val_loss: 1.1256 - val_acc: 0.6864
Epoch 13/20
- 125/125 [==============================] - 8s 68ms/step - loss: 1.3293 - acc: 0.5510 - val_loss: 2.6619 - val_acc: 0.2941
+ 125/125 [==============================] - 8s 63ms/step - loss: 0.3419 - acc: 0.8808 - val_loss: 1.1576 - val_acc: 0.6979
Epoch 14/20
- 125/125 [==============================] - 9s 70ms/step - loss: 1.2461 - acc: 0.5798 - val_loss: 2.5586 - val_acc: 0.2968
+ 125/125 [==============================] - 8s 68ms/step - loss: 0.2869 - acc: 0.9053 - val_loss: 1.1381 - val_acc: 0.6974
Epoch 15/20
- 125/125 [==============================] - 10s 76ms/step - loss: 1.0760 - acc: 0.6296 - val_loss: 2.4376 - val_acc: 0.3443
+ 125/125 [==============================] - 8s 67ms/step - loss: 0.2617 - acc: 0.9118 - val_loss: 1.3850 - val_acc: 0.6747
Epoch 16/20
- 125/125 [==============================] - 10s 78ms/step - loss: 0.9680 - acc: 0.6772 - val_loss: 2.9347 - val_acc: 0.3233
+ 125/125 [==============================] - 8s 67ms/step - loss: 0.2543 - acc: 0.9152 - val_loss: 1.3119 - val_acc: 0.6972
Epoch 17/20
- 125/125 [==============================] - 10s 79ms/step - loss: 0.8464 - acc: 0.7164 - val_loss: 2.4003 - val_acc: 0.3903
+ 125/125 [==============================] - 8s 66ms/step - loss: 0.2109 - acc: 0.9267 - val_loss: 1.3145 - val_acc: 0.6954
Epoch 18/20
- 125/125 [==============================] - 9s 74ms/step - loss: 0.7512 - acc: 0.7500 - val_loss: 2.8509 - val_acc: 0.3223
+ 125/125 [==============================] - 8s 64ms/step - loss: 0.1939 - acc: 0.9364 - val_loss: 1.4054 - val_acc: 0.7009
Epoch 19/20
- 125/125 [==============================] - 8s 68ms/step - loss: 0.6477 - acc: 0.7852 - val_loss: 3.6992 - val_acc: 0.3081
+ 125/125 [==============================] - 8s 67ms/step - loss: 0.1873 - acc: 0.9379 - val_loss: 1.7441 - val_acc: 0.6667
Epoch 20/20
- 71/125 [================>.............] - ETA: 3s - loss: 0.5732 - acc: 0.8188
+ 125/125 [==============================] - 9s 70ms/step - loss: 0.1762 - acc: 0.9420 - val_loss: 1.5269 - val_acc: 0.6927
+ <tensorflow.python.keras.callbacks.History at 0x157134890>
```
</div>
examples/nlp/md/text_classification_from_scratch.md (20 changes: 10 additions & 10 deletions)
@@ -6,7 +6,7 @@
**Description:** Text sentiment classification starting from raw text files.


- <img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/nlpipynb/text_classification_from_scratch.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/nlptext_classification_from_scratch.py)
+ <img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/nlp/ipynb/text_classification_from_scratch.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/nlp/text_classification_from_scratch.py)



@@ -42,7 +42,7 @@ Let's download the data and inspect its structure.
```
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
- 100 80.2M  100 80.2M    0     0  51.5M      0  0:00:01  0:00:01 --:--:-- 51.5M
+ 100 80.2M  100 80.2M    0     0  45.3M      0  0:00:01  0:00:01 --:--:-- 45.3M
```
</div>
@@ -336,13 +336,13 @@ model.fit(train_ds, validation_data=val_ds, epochs=epochs)
<div class="k-default-codeblock">
```
Epoch 1/3
- 625/625 [==============================] - 33s 52ms/step - loss: 0.6070 - accuracy: 0.6100 - val_loss: 0.3287 - val_accuracy: 0.8582
+ 625/625 [==============================] - 32s 51ms/step - loss: 0.6288 - accuracy: 0.5835 - val_loss: 0.3283 - val_accuracy: 0.8610
Epoch 2/3
- 625/625 [==============================] - 34s 54ms/step - loss: 0.2665 - accuracy: 0.8930 - val_loss: 0.3320 - val_accuracy: 0.8676
+ 625/625 [==============================] - 31s 50ms/step - loss: 0.2808 - accuracy: 0.8859 - val_loss: 0.3005 - val_accuracy: 0.8796
Epoch 3/3
- 625/625 [==============================] - 34s 55ms/step - loss: 0.1360 - accuracy: 0.9505 - val_loss: 0.3847 - val_accuracy: 0.8672
+ 625/625 [==============================] - 31s 50ms/step - loss: 0.1450 - accuracy: 0.9467 - val_loss: 0.3795 - val_accuracy: 0.8726
- <tensorflow.python.keras.callbacks.History at 0x14bd13c50>
+ <tensorflow.python.keras.callbacks.History at 0x137444c90>
```
</div>
@@ -356,9 +356,9 @@ model.evaluate(test_ds)

<div class="k-default-codeblock">
```
- 782/782 [==============================] - 8s 10ms/step - loss: 0.3904 - accuracy: 0.8614
+ 782/782 [==============================] - 7s 9ms/step - loss: 0.3999 - accuracy: 0.8650
- [0.3903675377368927, 0.8613600134849548]
+ [0.39986345171928406, 0.8649600148200989]
```
</div>
@@ -389,9 +389,9 @@ end_to_end_model.evaluate(raw_test_ds)

<div class="k-default-codeblock">
```
- 782/782 [==============================] - 13s 16ms/step - loss: 0.3903 - accuracy: 0.8608
+ 782/782 [==============================] - 11s 13ms/step - loss: 0.4062 - accuracy: 0.8630
- [0.39036768674850464, 0.8613600134849548]
+ [0.3998638987541199, 0.8649600148200989]
```
</div>
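For context, the end-to-end model evaluated here is the trained model with the `TextVectorization` step folded in, so it scores raw strings directly and reproduces the base model's test accuracy. A sketch of that pattern; `vectorize_layer`, `model`, and `raw_test_ds` are assumed from the surrounding example:

```python
from tensorflow import keras

# Accept raw strings, vectorize inside the graph, then reuse the trained model.
inputs = keras.Input(shape=(1,), dtype="string")
indices = vectorize_layer(inputs)  # strings -> padded integer token ids
outputs = model(indices)           # trained weights, unchanged
end_to_end_model = keras.Model(inputs, outputs)

end_to_end_model.compile(
    loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"]
)
end_to_end_model.evaluate(raw_test_ds)  # ~0.86, matching the base model above
```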
examples/nlp/pretrained_word_embeddings.py (2 changes: 1 addition & 1 deletion)
@@ -151,7 +151,7 @@
"""

voc = vectorizer.get_vocabulary()
- word_index = dict(zip(voc, range(2, len(voc))))
+ word_index = dict(zip(voc, range(len(voc))))

"""
As you can see, we obtain the same encoding as above for our test sentence:
examples/vision/md/super_resolution_sub_pixel.md (2 changes: 1 addition & 1 deletion)
@@ -6,7 +6,7 @@
**Description:** Implementing Super-Resolution using Efficient sub-pixel model on BSDS500.


- <img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/visionipynb/super_resolution_sub_pixel.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/visionsuper_resolution_sub_pixel.py)
+ <img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/super_resolution_sub_pixel.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/vision/super_resolution_sub_pixel.py)



Binary file modified guides/img/functional_api/functional_api_22_0.png
Binary file modified guides/img/functional_api/functional_api_40_0.png
Binary file modified guides/img/functional_api/functional_api_51_0.png
guides/md/distributed_training.md (31 changes: 16 additions & 15 deletions)
@@ -6,7 +6,7 @@
**Description:** Guide to multi-GPU & distributed training for Keras models.


- <img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/guidesipynb/distributed_training.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/guidesdistributed_training.py)
+ <img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/guides/ipynb/distributed_training.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/guides/distributed_training.py)



@@ -181,21 +181,16 @@ model.evaluate(test_dataset)
<div class="k-default-codeblock">
```
WARNING: Logging before flag parsing goes to stderr.
- W0814 09:55:55.262259 4750663104 cross_device_ops.py:1202] There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.
+ W0829 16:54:57.025418 4592479680 cross_device_ops.py:1115] There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.
Number of devices: 1
- W0814 09:55:55.961071 4750663104 deprecation.py:323] From /usr/local/lib/python3.7/site-packages/tensorflow/python/data/ops/multi_device_iterator_ops.py:601: get_next_as_optional (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
- Instructions for updating:
- Use `tf.data.Iterator.get_next_as_optional()` instead.
Epoch 1/2
- 1563/1563 [==============================] - 3s 2ms/step - loss: 0.2290 - sparse_categorical_accuracy: 0.9323 - val_loss: 0.1623 - val_sparse_categorical_accuracy: 0.9498
+ 1563/1563 [==============================] - 3s 2ms/step - loss: 0.3767 - sparse_categorical_accuracy: 0.8889 - val_loss: 0.1257 - val_sparse_categorical_accuracy: 0.9623
Epoch 2/2
- 1563/1563 [==============================] - 2s 1ms/step - loss: 0.0946 - sparse_categorical_accuracy: 0.9709 - val_loss: 0.1025 - val_sparse_categorical_accuracy: 0.9683
- 313/313 [==============================] - 0s 802us/step - loss: 0.1029 - sparse_categorical_accuracy: 0.9677
+ 1563/1563 [==============================] - 2s 2ms/step - loss: 0.1053 - sparse_categorical_accuracy: 0.9678 - val_loss: 0.0944 - val_sparse_categorical_accuracy: 0.9710
+ 313/313 [==============================] - 0s 779us/step - loss: 0.0900 - sparse_categorical_accuracy: 0.9723
- [0.1029156744480133, 0.9677000045776367]
+ [0.08995261788368225, 0.9722999930381775]
```
</div>
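For context, the log above comes from the guide's single-host, multi-GPU pattern: everything that creates variables (model construction and `compile()`) must run inside the strategy's scope, while `fit()` and `evaluate()` are called as usual. A sketch, with `get_compiled_model()` and the dataset variables assumed from the surrounding guide:

```python
import tensorflow as tf

# On a machine with no GPUs, MirroredStrategy falls back to one CPU device,
# which is what triggers the "non-GPU devices ... not using nccl" warning.
strategy = tf.distribute.MirroredStrategy()
print("Number of devices: %d" % strategy.num_replicas_in_sync)

with strategy.scope():
    # Variable creation must happen under the scope.
    model = get_compiled_model()

model.fit(train_dataset, epochs=2, validation_data=val_dataset)
model.evaluate(test_dataset)
```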
@@ -266,15 +261,21 @@ run_training(epochs=1)

<div class="k-default-codeblock">
```
- W0814 09:56:01.662528 4750663104 cross_device_ops.py:1202] There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.
+ W0829 16:55:03.609519 4592479680 cross_device_ops.py:1115] There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.
Creating a new model
- 1563/1563 - 3s - loss: 0.2276 - sparse_categorical_accuracy: 0.9320 - val_loss: 0.1319 - val_sparse_categorical_accuracy: 0.9579
- W0814 09:56:04.756444 4750663104 cross_device_ops.py:1202] There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.
+ W0829 16:55:03.708506 4592479680 callbacks.py:1270] Automatic model reloading for interrupted job was removed from the `ModelCheckpoint` callback in multi-worker mode, please use the `keras.callbacks.experimental.BackupAndRestore` callback instead. See this tutorial for details: https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#backupandrestore_callback.
+ 1563/1563 - 4s - loss: 0.2242 - sparse_categorical_accuracy: 0.9321 - val_loss: 0.1243 - val_sparse_categorical_accuracy: 0.9647
+ W0829 16:55:07.981292 4592479680 cross_device_ops.py:1115] There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.
Restoring from ./ckpt/ckpt-1
- 1563/1563 - 3s - loss: 0.0976 - sparse_categorical_accuracy: 0.9698 - val_loss: 0.1005 - val_sparse_categorical_accuracy: 0.9690
+ W0829 16:55:08.245935 4592479680 callbacks.py:1270] Automatic model reloading for interrupted job was removed from the `ModelCheckpoint` callback in multi-worker mode, please use the `keras.callbacks.experimental.BackupAndRestore` callback instead. See this tutorial for details: https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#backupandrestore_callback.
+ 1563/1563 - 4s - loss: 0.0948 - sparse_categorical_accuracy: 0.9709 - val_loss: 0.1006 - val_sparse_categorical_accuracy: 0.9699
```
</div>
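The two added warnings note that `ModelCheckpoint` no longer auto-reloads interrupted multi-worker jobs; `keras.callbacks.experimental.BackupAndRestore` covers that case instead. The "Creating a new model" / "Restoring from ./ckpt/ckpt-1" lines come from the guide's fault-tolerance helper, sketched below (`get_compiled_model()` and the checkpoint layout are assumed from the surrounding guide):

```python
import os
from tensorflow import keras

checkpoint_dir = "./ckpt"

def make_or_restore_model():
    # Resume from the most recent checkpoint if one exists;
    # otherwise build a fresh model.
    checkpoints = [
        os.path.join(checkpoint_dir, name) for name in os.listdir(checkpoint_dir)
    ]
    if checkpoints:
        latest = max(checkpoints, key=os.path.getctime)
        print("Restoring from", latest)
        return keras.models.load_model(latest)
    print("Creating a new model")
    return get_compiled_model()
```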
