Thanks for the model! This concerns the default behavior on Hugging Face. When running a batched forward pass for inference, feeding the model the attention mask produced by the tokenizer throws an error: the tokenizer creates the mask as an integer tensor, while a downstream step in the model expects floats. This can be fixed by simply casting the mask to a float dtype before the forward pass, but that's an extra step the user has to figure out on their own. Could this cast become a default tokenizer step?
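For reference, a minimal sketch of the workaround described above (the mask values here are placeholders standing in for the tokenizer's output, since the specific model and tokenizer aren't named in this issue):

```python
import torch

# Placeholder for what the tokenizer returns: Hugging Face tokenizers
# produce the attention mask as an integer tensor (torch.int64).
attention_mask = torch.tensor([[1, 1, 1, 0],
                               [1, 1, 0, 0]])

# Workaround: cast to float before the forward pass, since a downstream
# step in this model expects a floating-point mask, e.g.:
#   outputs = model(input_ids, attention_mask=attention_mask.float())
attention_mask = attention_mask.float()
```

With the cast in place, the mask's dtype matches what the downstream step expects and the forward pass runs without error.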