Commit 5c618ef (committed Sep 5, 2019; parent 0e26c96)

Update README.md

File tree: 1 file changed (+12 −12)

neural-machine-translation/README.md (+12 −12)

@@ -7,7 +7,7 @@ As you already know, a Recurrent Neural Network, or RNN, is a network that opera
 A seq2seq network (model), or Encoder-Decoder network, consists of two RNNs called the encoder and the decoder. The encoder reads an input sequence and outputs a single vector, and the decoder reads that vector to produce an output sequence.

 <p align="left">
-<img width="700" src="https://github.com/lyeoni/nlp-tutorial/blob/master/neural-machine-translation/images/readme/seq2seq.png">
+<img width="700" src="https://github.com/lyeoni/nlp-tutorial/blob/master/neural-machine-translation/data/images/readme/seq2seq.png">
 </p>

 Unlike sequence prediction with a single RNN, where every input corresponds to an output, the seq2seq model frees us from sequence length and order, which makes it ideal for translation between two languages.
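
The hunk above only relocates the seq2seq figure, but its context lines describe the encoder-decoder idea itself. As a rough illustration, here is a minimal PyTorch-style sketch of that two-RNN structure; the module names and sizes are assumptions made for this note, not the tutorial's actual code:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Reads the input sequence and compresses it into a single hidden vector."""
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)

    def forward(self, src):              # src: (batch, src_len) of token ids
        _, hidden = self.gru(self.embedding(src))
        return hidden                    # the single "context" vector

class Decoder(nn.Module):
    """Unrolls one token at a time, starting from the encoder's context vector."""
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, token, hidden):    # token: (batch, 1), one step at a time
        output, hidden = self.gru(self.embedding(token), hidden)
        return self.out(output), hidden  # logits over the target vocabulary
```
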
@@ -50,7 +50,7 @@ So we train using a method called Teacher Forcing, which is the concept of using
 <img src="https://latex.codecogs.com/gif.latex?\dpi{100}&space;\fn_cm&space;\hat{y}&space;=&space;argmax_{y}P(y|X,&space;y_{<t};\theta)\;&space;where\;&space;X&space;=&space;{x_{1},...,x_{n}}\;&space;and\;&space;Y=&space;{y_{0},...,y_{n}}" title="\hat{y} = argmax_{y}P(y|X, y_{<t};\theta)\; where\; X = {x_{1},...,x_{n}}\; and\; Y= {y_{0},...,y_{n}}" />
 </p>
 <p align="center">
-<img width="600" src="https://github.com/lyeoni/nlp-tutorial/blob/master/neural-machine-translation/images/readme/teacher-forcing.png" />
+<img width="600" src="https://github.com/lyeoni/nlp-tutorial/blob/master/neural-machine-translation/data/images/readme/teacher-forcing.png" />
 </p>

 ### Attention
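
For context on the hunk above: teacher forcing means the decoder is fed the ground-truth token y_{t-1} at each step instead of its own previous prediction, matching the argmax formula in the context lines. A minimal sketch of one such training step, reusing the hypothetical Encoder/Decoder modules from the earlier sketch (the teacher_forcing_ratio knob is an illustrative assumption, not necessarily the repo's interface):

```python
import random
import torch

def train_step(encoder, decoder, src, tgt, criterion, teacher_forcing_ratio=0.5):
    """One decoding loop with teacher forcing (illustrative sketch, not the repo's code)."""
    hidden = encoder(src)                        # context vector from the encoder
    token = tgt[:, 0:1]                          # decoding starts from the <sos> token
    loss = torch.zeros(())
    for t in range(1, tgt.size(1)):
        logits, hidden = decoder(token, hidden)  # logits: (batch, 1, vocab)
        loss = loss + criterion(logits.squeeze(1), tgt[:, t])
        if random.random() < teacher_forcing_ratio:
            token = tgt[:, t:t+1]                # teacher forcing: feed the ground truth
        else:
            token = logits.argmax(dim=-1)        # free running: feed the model's own guess
    return loss
```
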
@@ -210,20 +210,20 @@ Below table shows the results from various models in French-English translation
 ### Visualizing Attention

 <p align="center">
-<img src="images/sample-attention/sample-attn-1.png" height="300px">
-<img src="images/sample-attention/sample-attn-2.png" height="300px">
+<img src="data/images/sample-attention/sample-attn-1.png" height="300px">
+<img src="data/images/sample-attention/sample-attn-2.png" height="300px">

-<img src="images/sample-attention/sample-attn-3.png" height="300px">
-<img src="images/sample-attention/sample-attn-4.png" height="300px">
+<img src="data/images/sample-attention/sample-attn-3.png" height="300px">
+<img src="data/images/sample-attention/sample-attn-4.png" height="300px">

-<img src="images/sample-attention/sample-attn-5.png" height="300px">
-<img src="images/sample-attention/sample-attn-6.png" height="300px">
+<img src="data/images/sample-attention/sample-attn-5.png" height="300px">
+<img src="data/images/sample-attention/sample-attn-6.png" height="300px">

-<img src="images/sample-attention/sample-attn-7.png" height="300px">
-<img src="images/sample-attention/sample-attn-8.png" height="300px">
+<img src="data/images/sample-attention/sample-attn-7.png" height="300px">
+<img src="data/images/sample-attention/sample-attn-8.png" height="300px">

-<img src="images/sample-attention/sample-attn-9.png" height="300px">
-<img src="images/sample-attention/sample-attn-10.png" height="300px">
+<img src="data/images/sample-attention/sample-attn-9.png" height="300px">
+<img src="data/images/sample-attention/sample-attn-10.png" height="300px">
 </p>

 ## Acknowledgment
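
The last hunk only moves the attention-heatmap images under data/. For reference, heatmaps like these are typically drawn from the decoder's attention weights with matplotlib; a generic sketch (not the repo's actual plotting code, and the file name is made up) might look like:

```python
import matplotlib.pyplot as plt

def plot_attention(attn, src_tokens, tgt_tokens, path="sample-attn.png"):
    """attn: (tgt_len, src_len) array of attention weights, one row per output token."""
    fig, ax = plt.subplots()
    ax.matshow(attn, cmap="bone")              # brighter cells = higher attention weight
    ax.set_xticks(range(len(src_tokens)))
    ax.set_xticklabels(src_tokens, rotation=90)
    ax.set_yticks(range(len(tgt_tokens)))
    ax.set_yticklabels(tgt_tokens)
    fig.savefig(path)
    plt.close(fig)
```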
