Merge pull request microsoft#239 from tsreaper/lesson-3-fix
Fix incorrect formula in Perceptron.ipynb of lesson 3
BethanyJep authored Oct 27, 2023
2 parents 4bc6213 + ef6570a commit 4df6476
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions lessons/3-NeuralNetworks/03-Perceptron/Perceptron.ipynb
@@ -166,7 +166,7 @@
" \\end{cases} \\\\\n",
"$$\n",
"\n",
- "However, a generic linear model should also have a bias, i.e. ideally we should compute $y$ as $y=f(\\mathbf{w}^{\\mathrm{T}}\\mathbf{x})+\\mathbf{b}$. To simplify our model, we can get rid of this bias term by adding one more dimension to our input features, which always equals to 1:"
+ "However, a generic linear model should also have a bias, i.e. ideally we should compute $y$ as $y=f(\\mathbf{w}^{\\mathrm{T}}\\mathbf{x}+\\mathbf{b})$. To simplify our model, we can get rid of this bias term by adding one more dimension to our input features, which always equals to 1:"
]
},
{
@@ -215,7 +215,7 @@
" \n",
"We will use the process of **gradient descent**. Starting with some initial random weights $\\mathbf{w}^{(0)}$, we will adjust weights on each step of the training using the gradient of $E$:\n",
"\n",
- "$$\\mathbf{w}^{\\tau + 1}=\\mathbf{w}^{\\tau} - \\eta \\nabla E(\\mathbf{w}) = \\mathbf{w}^{\\tau} + \\eta \\mathbf{x}_{n} t_{n}$$\n",
+ "$$\\mathbf{w}^{\\tau + 1}=\\mathbf{w}^{\\tau} - \\eta \\nabla E(\\mathbf{w}) = \\mathbf{w}^{\\tau} + \\eta\\sum_{n \\in \\mathcal{M}}\\mathbf{x}_{n} t_{n}$$\n",
"\n",
"where $\\eta$ is the **learning rate**, and $\\tau\\in\\mathbb{N}$ is the iteration number.\n",
"\n",
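The corrected update rule in this commit, $\mathbf{w}^{\tau+1}=\mathbf{w}^{\tau}+\eta\sum_{n\in\mathcal{M}}\mathbf{x}_{n}t_{n}$ with $\mathcal{M}$ the set of misclassified samples, together with the bias trick from the first changed cell, can be sketched in NumPy. The toy data, variable names (`X`, `t`, `eta`), and stopping criterion below are illustrative assumptions, not code from the notebook:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two linearly separable point clouds with labels -1 / +1 (toy data).
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
t = np.hstack([-np.ones(50), np.ones(50)])

# Bias trick from the notebook text: append a constant-1 feature so the
# bias b is absorbed into the weight vector w.
X = np.hstack([X, np.ones((X.shape[0], 1))])

w = np.zeros(X.shape[1])
eta = 0.1  # learning rate

for tau in range(100):
    scores = X @ w
    misclassified = scores * t <= 0  # the set M of misclassified samples
    if not misclassified.any():
        break
    # Corrected update: w <- w + eta * sum over M of x_n * t_n
    w = w + eta * (X[misclassified] * t[misclassified, None]).sum(axis=0)

print("final weights:", w)
```

Note how the sum runs only over the currently misclassified set, which is exactly what the fixed formula makes explicit; the pre-fix version read as a single-sample update.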
