
Commit

Update README.md
lyeoni authored Feb 18, 2019
1 parent d35d594 commit f2b5a49
Showing 1 changed file with 4 additions and 4 deletions.
8 changes (4 additions & 4 deletions): neural-machine-translation/README.md
@@ -63,9 +63,9 @@ Calculating the attention weights is done with a batch matrix-matrix product of

The video below, from [Jay Alammar's blog post](http://jalammar.github.io/visualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention/), shows how the attention mechanism enables the decoder to focus on the relevant parts of the input sequence.

-<center>
-<iframe width="700" height="350" src="http://jalammar.github.io/images/attention_process.mp4" frameborder="0"></iframe>
-</center>
+<p align="center">
+  <img width="600" src="http://jalammar.github.io/images/attention.png" />
+</p>
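
For concreteness, here is a minimal PyTorch sketch of the batch matrix-matrix product (`torch.bmm`) attention step described in the hunk above. The tensor names, shapes, and dot-product scoring are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.nn.functional as F

batch_size, src_len, hidden_size = 2, 5, 8

# decoder hidden state for the current time step: (batch, 1, hidden)
decoder_hidden = torch.randn(batch_size, 1, hidden_size)
# encoder outputs for every source position: (batch, src_len, hidden)
encoder_outputs = torch.randn(batch_size, src_len, hidden_size)

# dot-product scores via a batch matrix-matrix product:
# (batch, 1, hidden) x (batch, hidden, src_len) -> (batch, 1, src_len)
scores = torch.bmm(decoder_hidden, encoder_outputs.transpose(1, 2))

# normalize the scores into attention weights over the source positions
attn_weights = F.softmax(scores, dim=-1)

# weighted sum of encoder outputs -> context vector: (batch, 1, hidden)
context = torch.bmm(attn_weights, encoder_outputs)
```

The attention weights are exactly what the visualization above displays: one distribution over the source positions per decoding step.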

## Usage

@@ -196,7 +196,7 @@ so it is recommended that BLEU be considered a reference only because it
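
The hunk above touches the caveat that BLEU should be treated as a reference only. As a hypothetical illustration (this diff does not show the repository's scoring code), a sentence-level BLEU score can be computed with NLTK:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "I have done it already .".lower().split()
candidate = "I already did it .".lower().split()

# smoothing avoids zero scores when higher-order n-grams have no match,
# which is common for short sentences like the table rows below
score = sentence_bleu([reference], candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.4f}")
```

Short, fluent translations like those in the table can still score poorly under BLEU when their n-grams diverge from the single reference, which is why it serves only as a rough guide here.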

The table below shows the results from various models on the French-English translation task.

-|Target|GRU|LSTM|Reverse|Reverse+Embeddings|NMT|
+|Target|GRU|LSTM|Reverse|Reverse<br>+Embeddings|NMT|
|------|------|------|------|------|------|
|I have done it already.|I've done it.|I did it already.|I already did it.|I've already done it.|I already did that.|
|You don't have to stay to the end.|You don't have to stay to the end.|You don't have to stay in the end of here.|you don't seem to to stay up.|You don't have to get until the end.|You don't have to stay until the end.|
