Pre-training a BERT Model with Token-aware Contrastive Learning
Switch to the `squad1` or `squad2` branch for the SQuAD 1.1 and SQuAD 2.0 results, respectively.
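The token-aware contrastive objective from the paper cited below pulls each student token representation toward the teacher's representation of the same token, while pushing it away from the teacher's representations of the other tokens in the sequence. The PyTorch sketch below illustrates this idea; the function name, argument shapes, and temperature value are illustrative assumptions, not this repository's actual interface.

```python
# A minimal sketch of a token-level contrastive (InfoNCE) loss in the spirit
# of TaCL. All names here (student_h, teacher_h, temperature) are
# illustrative assumptions, not the repository's API.
import torch
import torch.nn.functional as F

def token_contrastive_loss(student_h, teacher_h, mask, temperature=0.05):
    """student_h, teacher_h: (batch, seq_len, hidden) token representations.
    mask: (batch, seq_len) bool, True at positions that contribute to the loss.
    For position i, the teacher's representation of the same token is the
    positive; the teacher's other tokens in the sequence act as negatives."""
    s = F.normalize(student_h, dim=-1)
    t = F.normalize(teacher_h, dim=-1)
    # (batch, seq_len, seq_len) cosine-similarity logits between student
    # positions (rows) and teacher positions (columns).
    logits = torch.bmm(s, t.transpose(1, 2)) / temperature
    # The correct "class" for student position i is teacher position i.
    labels = torch.arange(s.size(1), device=s.device).expand(s.size(0), -1)
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        labels.reshape(-1),
        reduction="none",
    )
    # Average only over the selected (e.g. masked) positions.
    return (loss * mask.reshape(-1).float()).sum() / mask.float().sum().clamp(min=1)
```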
```bibtex
@article{DBLP:journals/corr/abs-2111-04198,
  author     = {Yixuan Su and
                Fangyu Liu and
                Zaiqiao Meng and
                Tian Lan and
                Lei Shu and
                Ehsan Shareghi and
                Nigel Collier},
  title      = {TaCL: Improving {BERT} Pre-training with Token-aware Contrastive Learning},
  journal    = {CoRR},
  volume     = {abs/2111.04198},
  year       = {2021},
  url        = {https://arxiv.org/abs/2111.04198},
  eprinttype = {arXiv},
  eprint     = {2111.04198},
  timestamp  = {Wed, 10 Nov 2021 16:07:30 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2111-04198.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
Reference for creating the BasicTokenizer: https://github.com/jcyk/BERT
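For quick experimentation, a near-equivalent `BasicTokenizer` (whitespace and punctuation splitting with optional lowercasing, as in the original BERT code) also ships with Hugging Face `transformers`. The snippet below is a minimal sketch of that alternative route, not the jcyk/BERT implementation referenced above.

```python
# Illustrative alternative: transformers' BasicTokenizer performs the same
# whitespace/punctuation splitting and optional lowercasing as BERT's
# original BasicTokenizer.
from transformers.models.bert.tokenization_bert import BasicTokenizer

tokenizer = BasicTokenizer(do_lower_case=True)
print(tokenizer.tokenize("TaCL improves BERT pre-training."))
# ['tacl', 'improves', 'bert', 'pre', '-', 'training', '.']
```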