get alignments using attention #138
Hi @kmario23, we save the alignment as an image in the `_sample_decode` method if you don't use a beam search decoder with an attention model. The image can be visualized in TensorBoard. You can look at this method to see how we extract the alignment history from the final state.
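To make the extraction concrete, here is a minimal NumPy sketch (outside TensorFlow, names are illustrative) of what assembling the alignment history into the TensorBoard-style attention image amounts to: one softmax vector over source positions per decoding step, stacked into a `[target_len, source_len]` matrix. In the real code the per-step vectors come from the `alignment_history` field of the final `AttentionWrapperState`.

```python
import numpy as np

def alignment_image(alignment_history):
    """Stack per-step attention vectors into a [target_len, source_len]
    matrix, as done for the TensorBoard attention image.

    alignment_history: list of 1-D arrays, one softmax distribution
    over source positions per decoding step (illustrative input;
    in the NMT code these come from the final decoder state).
    """
    img = np.stack(alignment_history)           # [target_len, source_len]
    assert np.allclose(img.sum(axis=1), 1.0)    # each row is a distribution
    return img

# toy example: 3 target steps attending over 4 source tokens
history = [np.array([0.70, 0.10, 0.10, 0.10]),
           np.array([0.10, 0.80, 0.05, 0.05]),
           np.array([0.05, 0.05, 0.10, 0.80])]
img = alignment_image(history)
print(img.shape)  # (3, 4)
```

Row `t` of the matrix tells you which source positions target step `t` attended to; the heatmap of this matrix is exactly what the TensorBoard image shows.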
Hi @kmario23, have you solved it? Thanks
Hi @oahziur,
I do use a beam search decoder, also with `num_translations_per_input > 1`. Can you point us to where to look to get the alignments with a beam search decoder?
@oahziur, I see that they check whether the beam width is 0. Is there a reason for that? Does beam search prevent the alignment from being recorded?
I tried to get the alignment using beam width 0 but couldn't. Is there any way to visualize the alignment for a particular sentence?
Is there any progress on solving this issue?
Hi,
With a trained model checkpoint, I evaluated it on a set of 'test' sentences. The result contains some `<unk>` tokens in the translated sentences. I'd now like to figure out which words in the source sentence caused these `<unk>` tokens. For this, as far as I understand, we can use the attention mechanism to get the word alignments. But I'm not sure where exactly I should start in the current NMT codebase. Could someone please point out where such an implementation could be done? Or is there any other way to get the alignment of words?
Interestingly, a dictionary with scores can be extracted from the alignments, as mentioned here: En-Vi alignment dictionary.
It'd be nice to know how to get this dictionary. Maybe @lmthang can offer suggestions?
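One plausible way to build such a dictionary (a sketch under my own assumptions, not necessarily how the linked En-Vi dictionary was produced): accumulate the attention mass between every source word and target word over a corpus, then keep the highest-scoring target word for each source word.

```python
from collections import defaultdict

def build_alignment_dict(pairs):
    """pairs: iterable of (src_tokens, tgt_tokens, attention) triples,
    where attention[t][s] is the weight target step t put on source
    position s. Returns {source_word: (best_target_word, score)}.
    """
    scores = defaultdict(float)
    for src, tgt, att in pairs:
        for t, tw in enumerate(tgt):
            for s, sw in enumerate(src):
                scores[(sw, tw)] += att[t][s]  # accumulate attention mass
    best = {}
    for (sw, tw), sc in scores.items():
        if sw not in best or sc > best[sw][1]:
            best[sw] = (tw, sc)
    return best

# toy corpus of one sentence pair
pairs = [(["chat", "noir"], ["black", "cat"],
          [[0.1, 0.9],    # "black" attends to "noir"
           [0.9, 0.1]])]  # "cat" attends to "chat"
best = build_alignment_dict(pairs)
print(best["chat"][0], best["noir"][0])  # cat black
```

With more sentence pairs the accumulated scores become a crude translation lexicon; normalizing per source word would turn the scores into probabilities.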
Thanks!