I'm trying to fine-tune on the CONCODE task using 'code' as both input and output, instead of 'nl' and 'code'. Can the fine-tuned CONCODE checkpoints be used directly for this, and could you share some more information about using the tokenizers and embeddings?
Also, what changes need to be made here to load the CONCODE model instead of the base CodeT5 model?
```python
parser.add_argument("--model_tag", type=str, default='codet5_base',
                    choices=['roberta', 'codebert', 'bart_base', 'codet5_small', 'codet5_base', 'codet5_large'])
```
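To make the question concrete, here is a minimal sketch of what I mean by using 'code' on both sides. It assumes a CONCODE-style JSONL record with "nl" and "code" fields and simply reuses "code" as both source and target; the field names follow the CONCODE data format, but the adaptation itself is my own assumption, not the repo's actual loader code:

```python
import json

# A CONCODE-style example normally pairs "nl" (input) with "code" (output).
# For a code-to-code setup, take "code" as both the source and the target
# (an assumed adaptation of the data loading, not the repo's existing code).
line = '{"nl": "returns the sum", "code": "int add(int a, int b) { return a + b; }"}'
ex = json.loads(line)

source = ex["code"]  # instead of ex["nl"]
target = ex["code"]

print(source == target)  # both sides now hold the same code snippet
```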
Thanks!!