
Commit ef206f9

Merge pull request #7 from Ahmkel/master
Fixed typos in lab 1
2 parents 06ffaca + 8efd0d1 commit ef206f9

File tree

1 file changed: +2 -2 lines changed


lab1_FFN/lab1_FFN.ipynb

Lines changed: 2 additions & 2 deletions
@@ -65,7 +65,7 @@
 "metadata": {},
 "source": [
 "# Neural networks 101\n",
-"In this notebook you will implement a simple neural network in TensorFlow utilizing the graph building and automatic differentiation engine of TensorFlow. We assume that you are already familiar with backpropation (if not please see [Andrej Karpathy](http://cs.stanford.edu/people/karpathy/) or [Michal Nielsen](http://neuralnetworksanddeeplearning.com/chap2.html).\n",
+"In this notebook you will implement a simple neural network in TensorFlow utilizing the graph building and automatic differentiation engine of TensorFlow. We assume that you are already familiar with backpropagation (if not please see [Andrej Karpathy](http://cs.stanford.edu/people/karpathy/) or [Michal Nielsen](http://neuralnetworksanddeeplearning.com/chap2.html).\n",
 "We'll not spend much time on how TensorFlow works, but you can refer to [this short tutorial](https://www.tensorflow.org/versions/r0.10/get_started/basic_usage.html) if you are interested, or [the python documentation](https://www.tensorflow.org/versions/r0.10/api_docs/index.html).\n",
 "\n",
 "(Additionally, for the ambitious people we have previously made an assignment where you will implement both the forward and backpropagation in a neural network by hand, https://github.com/DTU-deeplearning/day1-NN/blob/master/exercises_1.ipynb)(Ole, skal jeg også implementere det?)\n",
@@ -284,7 +284,7 @@
 "source": [
 "To train our neural network we need to update the parameters in direction of the negative gradient w.r.t the cost function we defined earlier.\n",
 "We can use `tf.train.Optimizer` to get the gradients (using `compute_gradients`) for all parameters in the network w.r.t ``cost_train``.\n",
-"Imaggine that `cost_train` is a function and we want to go downhill. We go downhill by changing the value of the paramters in direction of the negative gradient. \n",
+"Imagine that `cost_train` is a function and we want to go downhill. We go downhill by changing the value of the paramters in direction of the negative gradient. \n",
 "\n",
 "Finally we can use the built-in `minimize` to calculate the stochastic gradient descent (SGD) update rule for each paramter in the network.\n",
 "\n",
