Llama-3 Instruct tokenizer_config.json changes in relation to the currently fetched llama-bpe configs. #7289

Closed
@Spacellary

Description

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new bug or useful enhancement to share.

Question/Conjecture:

I am performing model conversions following the guidelines in this PR, using the fetched llama-bpe configs:

#6920 (comment)

...

The recent convert-hf-to-gguf-update.py script fetches the llama-bpe configs, but these reflect the Base model's settings.

Within the last week, these settings were changed in the meta-llama/Meta-Llama-3-8B-Instruct repo.

Is this change to the Instruct model's EOS token pertinent to the current conversion process?
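For context, the difference in question is the `eos_token` field in tokenizer_config.json: the Base repo uses `<|end_of_text|>`, while the Instruct repo was updated to `<|eot_id|>` (the end-of-turn token emitted by the chat template). A minimal sketch of checking whether two configs disagree on this field; the token values below are hardcoded assumptions reflecting my reading of the two repos, not fetched live:

```python
# Hypothetical excerpts of the relevant tokenizer_config.json fields.
# Values are assumptions based on the Base and Instruct repos at the time.
base_config = {"eos_token": "<|end_of_text|>"}   # meta-llama/Meta-Llama-3-8B
instruct_config = {"eos_token": "<|eot_id|>"}    # meta-llama/Meta-Llama-3-8B-Instruct


def eos_differs(a: dict, b: dict) -> bool:
    """Return True if the two tokenizer configs disagree on the EOS token."""
    return a.get("eos_token") != b.get("eos_token")


print(eos_differs(base_config, instruct_config))  # prints True
```

If the converter bakes the Base model's EOS into the GGUF metadata, an Instruct model may fail to stop at turn boundaries unless the runtime also treats `<|eot_id|>` as a stop token.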

To add:
I haven't noticed any issues so far using either the Base model configs or the Instruct model configs.
