This project looks exciting. Why don't you add a few baseline models to compare the BERT model against? For example, you could try to solve the task with a traditional n-gram language model, or with a neural one such as the fairseq pretrained WikiText-103 LM. I'm curious whether BERT outperforms those two baselines.
graykode
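For concreteness, here is a minimal sketch of how the fairseq baseline could be plugged in as a sentence scorer. It assumes the WikiText-103 transformer LM is published on torch.hub as 'transformer_lm.wiki103.adaptive_inputs' (worth confirming via torch.hub.list('pytorch/fairseq')) and that its hub interface exposes score() with per-token 'positional_scores', as shown in the fairseq language-model README; the sentence_perplexity helper and the candidate sentences are made up for illustration.

```python
import torch

# Load the pretrained WikiText-103 transformer LM via torch.hub.
# The model identifier is an assumption; list available models with
# torch.hub.list('pytorch/fairseq') to confirm the exact name.
lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wiki103.adaptive_inputs')
lm.eval()  # disable dropout so scoring is deterministic

def sentence_perplexity(sentence: str) -> float:
    """Return the LM perplexity of a sentence (lower = more fluent)."""
    # 'positional_scores' holds per-token log-probabilities;
    # perplexity = exp(-mean(log p)).
    log_probs = lm.score(sentence)['positional_scores']
    return log_probs.mean().neg().exp().item()

# Baseline prediction: pick the candidate the LM finds most fluent,
# e.g. for a cloze-style question (candidates are illustrative only).
candidates = [
    'The concert was postponed because of the rain.',
    'The concert was postponed because of the rains in.',
]
print(min(candidates, key=sentence_perplexity))
```

An n-gram baseline (e.g. a KenLM model trained on WikiText-103) could be swapped in behind the same sentence_perplexity interface, so all three scorers (n-gram, fairseq LM, BERT) would be compared on identical candidate sets.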