I notice you provide three pretrained models, including seqlen256_v1.ckpt and seqlen512_v1.ckpt, and you say "Only difference is the sequence length used during training. The 512 model uses double the number of tokens as the 256 one for computing the attention but half the batch size (to prevent OOM)." So why does generate.py set seq_length = min(args.generate_num, 256)?
If I use the seqlen512_v1.ckpt model, should I set seq_length = min(args.generate_num, 512)?
Hi @luweishuang,
I'm not one of the developers, but I ran into exactly the same issue during training.
Yes, you will have to change the value to 512 if you are using the seqlen512_v1.ckpt model. In the version I'm currently using, I replaced the hard-coded constant with a command-line argument, and I would strongly recommend doing the same.
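For reference, here is a minimal sketch of that change, assuming generate.py parses its options with argparse; the --seq-length flag name and its default are my own choices, not part of the original script:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--generate-num", type=int, default=256,
                    help="number of tokens to generate")
# Hypothetical flag: expose the checkpoint's training sequence length
# (256 for seqlen256_v1.ckpt, 512 for seqlen512_v1.ckpt) instead of
# hard-coding 256 in the script.
parser.add_argument("--seq-length", type=int, default=256,
                    help="sequence length the checkpoint was trained with")
args = parser.parse_args()

# Cap the attention window at the length the checkpoint was trained on.
seq_length = min(args.generate_num, args.seq_length)
```

With this in place you can run the same script against either checkpoint, e.g. passing --seq-length 512 when loading seqlen512_v1.ckpt, without editing the code each time.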
All the best
Thursday