Fix Mistral v0.3 chat template and tokenizer issues #72


Open
wants to merge 2 commits into main

Conversation

Imagineer99

Issue: Weird repetitive outputs from the Mistral 7B v0.3 conversational notebook, both before and after training, when using the ChatML template

The tokenizer was using control token 770 for padding instead of the EOS token, which was corrupting training. The ShareGPT format also wasn't being converted correctly into the format Mistral v0.3 expects.
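For reference, a minimal sketch of the kind of check and fix involved, assuming a plain Hugging Face tokenizer load (the repo name is illustrative; the notebook itself loads the model through Unsloth):

```python
from transformers import AutoTokenizer

# Illustrative repo name, not necessarily the notebook's exact checkpoint.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.3")

# Before the fix, the pad token resolved to a reserved control token (id 770),
# so padded positions behaved differently from EOS padding during training.
print(tokenizer.pad_token, tokenizer.pad_token_id)

# Fix: pad with EOS so padding is handled consistently.
tokenizer.pad_token = tokenizer.eos_token
```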

What I changed:

- Made the pad token use EOS instead of the control token
- Added manual conversion from ShareGPT to Mistral format as a workaround (see the sketch further below)
- Lowered the learning rate so gradients don't explode (sketch below)
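The learning-rate change is just a config tweak. A hedged sketch using Hugging Face `TrainingArguments`, where the concrete values are illustrative rather than the notebook's exact settings:

```python
from transformers import TrainingArguments

# Illustrative values only; the exact numbers are in the notebook diff.
training_args = TrainingArguments(
    output_dir="outputs",
    learning_rate=2e-5,  # lowered from a higher default to keep gradients stable
)
```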

The manual format conversion is somewhat hacky, but it works until we figure out why the automatic conversion broke. An update for newer models probably affected the older ones.
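A minimal sketch of what such a manual conversion looks like, assuming the standard ShareGPT layout (`from`/`value` turns) and the `[INST] ... [/INST]` template Mistral v0.3 expects; the helper name is ours, not the notebook's:

```python
def sharegpt_to_mistral(conversations, bos="<s>", eos="</s>"):
    """Flatten ShareGPT-style turns into a single Mistral v0.3 prompt string."""
    text = bos
    for turn in conversations:
        if turn["from"] == "human":
            text += f"[INST] {turn['value']} [/INST]"
        elif turn["from"] == "gpt":
            text += f" {turn['value']}{eos}"
    return text

# Example:
# sharegpt_to_mistral([{"from": "human", "value": "Hi"},
#                      {"from": "gpt", "value": "Hello!"}])
# -> '<s>[INST] Hi [/INST] Hello!</s>'
```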

Other conversational and Mistral v0.3 notebooks need to be checked for similar issues.
