Training too slow #183
I am using a 100k vocabulary and 10 million training sentences; it takes 32 hours to train 127k steps, reaching around 17 BLEU for English-to-Chinese. Batch size is set to 64. 1. Use a batch size as big as your GPU can support. Here is the command: |
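A quick sanity check on the numbers reported above (127k steps in 32 hours, batch size 64) gives the implied training throughput. This is just arithmetic on the figures in the comment, not a measurement:

```python
# Throughput implied by the numbers above: 127k steps in 32 hours, batch 64.
steps = 127_000
hours = 32
batch_size = 64

steps_per_sec = steps / (hours * 3600)
sentences_per_sec = steps_per_sec * batch_size
print(f"{steps_per_sec:.2f} steps/sec, ~{sentences_per_sec:.0f} sentences/sec")
# → 1.10 steps/sec, ~71 sentences/sec
```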
I have a question: my training corpus is about 70,000 sentences. What vocabulary size is appropriate? Thank you. |
That is a small corpus. You can try 50k. You can also use a larger size if your CPU/GPU allows.
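Picking a vocabulary size in practice usually means keeping the top-k most frequent tokens and mapping the rest to `<unk>`. A minimal sketch of that idea (the `build_vocab` helper and the special-token list are illustrative assumptions, not part of tensorflow/nmt):

```python
from collections import Counter

def build_vocab(sentences, max_size=50_000):
    """Keep the max_size most frequent tokens, reserving slots for
    the special symbols. Hypothetical helper, not tensorflow/nmt API."""
    counts = Counter(tok for sent in sentences for tok in sent.split())
    specials = ["<unk>", "<s>", "</s>"]
    tokens = [tok for tok, _ in counts.most_common(max_size - len(specials))]
    return specials + tokens

corpus = ["the cat sat", "the dog sat", "a cat ran"]
vocab = build_vocab(corpus, max_size=5)
print(vocab)  # the three specials plus the two most frequent tokens
```

For a 70k-sentence corpus, the actual distinct-token count may be well below 50k, in which case the vocabulary simply stops growing before hitting the cap.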
|
I would like to ask: is 50k equivalent to 50,000 (dictionary size)? I'm a neural network beginner (smile). |
Hi,
50k = 50,000.
|
@brightmart What is the size of your dev set? If we have a bigger dev set, does training take longer to complete? |
If you have a big dev set, you can evaluate on just a subset of it during training. |
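One way to follow this advice is to draw a fixed-seed sample from the dev set, so the same subset is used at every evaluation and BLEU scores stay comparable across checkpoints. A sketch under those assumptions (`sample_dev_subset` is a hypothetical helper, not tensorflow/nmt API):

```python
import random

def sample_dev_subset(dev_pairs, n=500, seed=0):
    """Fixed-seed sample so the eval subset is identical across
    evaluations. Hypothetical helper, not tensorflow/nmt API."""
    if len(dev_pairs) <= n:
        return list(dev_pairs)
    rng = random.Random(seed)
    return rng.sample(dev_pairs, n)

dev = [(f"src {i}", f"tgt {i}") for i in range(10_000)]
subset = sample_dev_subset(dev, n=500)
print(len(subset))  # 500
```

Using a fixed seed (rather than resampling each time) is the key design choice: a fresh random subset per evaluation would add noise to the BLEU curve.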
Hi,
Is there any way to accelerate the code?
I am training with only a 300-token vocabulary and 30k training instances (maximum length 50), but it takes almost 1 hour to finish an epoch.
What happened in this version of the code?
Thanks
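For comparison, the figures in this comment (30k instances per epoch, roughly 1 hour per epoch) imply a much lower throughput than the run described at the top of the thread. Again just arithmetic on the reported numbers:

```python
# Throughput implied by this comment: ~30k instances per ~1-hour epoch.
instances_per_epoch = 30_000
seconds_per_epoch = 3600

rate = instances_per_epoch / seconds_per_epoch
print(f"~{rate:.1f} sentences/sec")
# → ~8.3 sentences/sec
```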