Description
I notice you have three pretrained models, including seqlen256_v1.ckpt and seqlen512_v1.ckpt, and you say "Only difference is the sequence length used during training. The 512 model uses double the number of tokens as the 256 one for computing the attention but half the batch size (to prevent OOM)." Why, then, does generate.py set seq_length = min(args.generate_num, 256)?
If I use the seqlen512_v1.ckpt model, should I set seq_length = min(args.generate_num, 512)? A sketch of what I have in mind follows.
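
For illustration, here is a minimal sketch of how the cap could be tied to the checkpoint instead of being hard-coded to 256. The --seq_len flag and its defaults are hypothetical and not part of the repository's actual generate.py:

```python
# Hypothetical sketch: make the attention-window cap configurable
# per checkpoint rather than hard-coding 256 in generate.py.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--generate_num", type=int, default=1024,
                    help="number of tokens to generate")
# Assumed flag, not in the original script: the sequence length the
# checkpoint was trained with (256 for seqlen256_v1.ckpt,
# 512 for seqlen512_v1.ckpt).
parser.add_argument("--seq_len", type=int, default=256)
args = parser.parse_args()

# Cap the generation context at the training sequence length, so the
# model never attends over more positions than it saw during training.
seq_length = min(args.generate_num, args.seq_len)
print(f"Using seq_length = {seq_length}")
```

With a setup like this, running the script with --seq_len 512 would reproduce the seq_length = min(args.generate_num, 512) behavior asked about above.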