Commit 64a4f24

fayeshine authored and Vijay Vasudevan committed
Add footnote about dropout in MNIST tutorial for expert
* Update index.md
* Update index.md
* Update index.md
1 parent b0774e1 commit 64a4f24

File tree

1 file changed: +3 −1 lines changed

  • tensorflow/g3doc/tutorials/mnist/pros

tensorflow/g3doc/tutorials/mnist/pros/index.md  (+3 −1)
@@ -340,7 +340,7 @@ We create a `placeholder` for the probability that a neuron's output is kept
 during dropout. This allows us to turn dropout on during training, and turn it
 off during testing.
 TensorFlow's `tf.nn.dropout` op automatically handles scaling neuron outputs in
-addition to masking them, so dropout just works without any additional scaling.
+addition to masking them, so dropout just works without any additional scaling.<sup id="a1">[1](#f1)</sup>
 
 ```python
 keep_prob = tf.placeholder(tf.float32)
@@ -391,3 +391,5 @@ The final test set accuracy after running this code should be approximately 99.2
 
 We have learned how to quickly and easily build, train, and evaluate a
 fairly sophisticated deep learning model using TensorFlow.
+
+<b id="f1">1</b>: For this small convolutional network, performance is actually nearly identical with and without dropout. Dropout is often very effective at reducing overfitting, but it is most useful when training very large neural networks. [](#a1)
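For readers skimming this diff, the sentence being footnoted describes how `tf.nn.dropout` rescales the activations it keeps, so no manual correction is needed when dropout is switched off at test time. Below is a minimal sketch of that usage, assuming the TensorFlow 1.x API of the tutorial's era; the tensor name `h_fc1` is a stand-in for the fully connected layer output, which is not part of this diff.

```python
import tensorflow as tf

# Placeholder for the keep probability, as in the tutorial snippet above.
keep_prob = tf.placeholder(tf.float32)

# `h_fc1` is a hypothetical stand-in for the fully connected layer output
# referenced by the tutorial (not shown in this diff).
h_fc1 = tf.placeholder(tf.float32, shape=[None, 1024])

# tf.nn.dropout zeroes each element with probability (1 - keep_prob) and
# scales the surviving elements by 1 / keep_prob, so the expected activation
# is unchanged and no extra rescaling is needed elsewhere.
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# A training step would feed keep_prob=0.5; evaluation feeds keep_prob=1.0,
# which effectively turns dropout off, matching the text above.
```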
