---
language: en
tags:
widget:
license: mit
---
This repository is a boilerplate for pushing a mask-filling model to the Hugging Face Model Hub.
- `git-lfs` is installed
- the tokenizer contains all the files needed: `added_tokens.json`, `special_tokens_map.json`, `tokenizer_config.json`, `vocab.txt`, and `tokenizer.json`
- there is no `tokenizer_file` field in `tokenizer_config.json` (sometimes it points to a local path under `~/.cache`)
- Put the model checkpoints and optionally the log files (`*.bin` and `events.out.*`) into the `./ckpt` directory.
- Add a git remote named `hgf` pointing to your Hugging Face repo, for example: `git remote add hgf git@hf.co:approach0/mathy-vicuna-13B-FFT`
- Run the `upload2hgf.sh` script.
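As a quick sanity check before pushing, the tokenizer checklist above can be verified with a short script. This is only a sketch: the helper name `missing_tokenizer_files` is hypothetical and not part of this repo.

```python
import os

# Required tokenizer files, as listed in the checklist above.
REQUIRED_FILES = [
    "added_tokens.json",
    "special_tokens_map.json",
    "tokenizer_config.json",
    "vocab.txt",
    "tokenizer.json",
]

def missing_tokenizer_files(tokenizer_dir):
    """Return the required tokenizer files absent from tokenizer_dir."""
    return [
        name for name in REQUIRED_FILES
        if not os.path.isfile(os.path.join(tokenizer_dir, name))
    ]
```

An empty return value means the tokenizer directory is ready to push.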
```sh
pip install pya0  # for math token preprocessing

# test local checkpoints:
python test.py ./ckpt/math-tokenizer ./ckpt/2-2-0/encoder.ckpt

# test Model Hub checkpoints:
python test.py approach0/coco-mae-220 approach0/coco-mae-220
```

**Note:** Modify the test examples in `test.txt` to play with the model. The test file is tab-separated; the first column lists additional positions to mask in the right-side sentence (useful for masking tokens in math markup), and a zero means no additional mask positions.
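Assuming the mask positions in the first column are comma-separated (a hypothetical reading; check `test.py` for the exact format), one line of `test.txt` could be parsed like this:

```python
def parse_test_line(line):
    """Parse one tab-separated line of test.txt.

    Column 1: extra mask positions, assumed comma-separated ("0" = none);
    column 2: the sentence to run mask-filling on.
    This helper is illustrative only and is not part of the repo.
    """
    pos_field, sentence = line.rstrip("\n").split("\t", 1)
    if pos_field.strip() == "0":
        positions = []  # a zero means no additional mask positions
    else:
        positions = [int(p) for p in pos_field.split(",")]
    return positions, sentence
```

For example, `parse_test_line("0\tx + y = z")` yields no extra mask positions, while a first column of `3,5` would mask tokens 3 and 5 of the sentence.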