File: src/instructlab/training/data_process.py

In the get_masked_and_orig_text function, there's a potential issue with the masking mechanism.

Issue Description

The function uses pad_str to identify and replace masked tokens with "". However, if pad_str happens to be the same as another special token in the vocabulary, it could lead to unintended replacements, making it appear that some tokens are masked when they actually aren't.
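For context, here is a minimal sketch of the fragile pattern being described. The tokenizer choice, the use of the EOS token as pad_str, and the "<mask>" marker are all assumptions for illustration, not the actual values in data_process.py:

```python
from transformers import AutoTokenizer

# Illustrative sketch only: the tokenizer, pad_str value, and "<mask>"
# marker are assumptions, not the actual code in data_process.py.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
pad_str = tokenizer.eos_token  # many models reuse the EOS token for padding

def mask_by_string(input_ids, labels):
    pad_id = tokenizer.convert_tokens_to_ids(pad_str)
    # Swap masked positions (label == -100) for the pad token, then decode.
    masked_ids = [pad_id if lbl == -100 else tid
                  for tid, lbl in zip(input_ids, labels)]
    text = tokenizer.decode(masked_ids)
    # Fragile step: every occurrence of pad_str in the decoded text is
    # rewritten, including legitimate EOS tokens that were never masked.
    return text.replace(pad_str, "<mask>")
```

Here, an EOS token that legitimately appears in the sequence becomes indistinguishable from a masked position once the text has been decoded.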
Recommendation
To avoid this potential confusion, consider one of the following approaches:
- Use a unique string for masking that's guaranteed not to appear in the tokenizer's vocabulary.
- Utilize a dedicated special token for masking (e.g., "") and add it to the tokenizer's special tokens.
- Implement the masking logic directly on the token IDs before decoding, ensuring only the intended tokens are masked (see the sketch after this list).
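As a sketch of the third approach: decide what is masked from the label IDs before any decoding happens, so no string in the vocabulary can collide with the marker. The function name, the mask_marker default, and the -100 ignore-index convention (standard in Hugging Face training code) are assumptions here, not the proposed patch:

```python
def get_masked_and_orig_text_by_ids(tokenizer, input_ids, labels,
                                    mask_marker="<mask>"):
    # Masking is determined from label IDs, never from decoded strings.
    pieces, run = [], []
    for tid, lbl in zip(input_ids, labels):
        if lbl == -100:  # -100 marks positions excluded from the loss
            if run:  # flush the pending run of unmasked token IDs
                pieces.append(tokenizer.decode(run))
                run = []
            pieces.append(mask_marker)
        else:
            run.append(tid)
    if run:
        pieces.append(tokenizer.decode(run))
    return "".join(pieces), tokenizer.decode(input_ids)
```

Decoding contiguous unmasked runs together, rather than one token at a time, keeps whitespace intact with SentencePiece-style tokenizers.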
This could fix it, but we should instead add it when adding <|pretrain|> and the other one.