
Fix AutoConfig and AutoModel support for Llava-Next-Video #32844

Merged: 4 commits into huggingface:main on Aug 16, 2024

Conversation

@TKONIY (Contributor) commented on Aug 16, 2024

What does this PR do?

Bug fix: the Llava-Next-Video model cannot be loaded by AutoConfig and AutoModel.

  • Support AutoConfig: unify the model_type with the one declared in the config.json on the Hugging Face Hub (see the sketch after this list).
  • Support AutoModel: add llava* models to MODEL_MAPPING.
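
To make the AutoConfig failure concrete, here is a minimal sketch; the checkpoint name is an assumption for illustration, and any Llava-Next-Video checkpoint whose config.json declares the underscore model_type would behave the same:

```python
# Before this fix, the hub config.json declared model_type="llava_next_video"
# while transformers registered the config class under a different key, so
# the AutoConfig lookup could not resolve the class. After the fix:
from transformers import AutoConfig

config = AutoConfig.from_pretrained("llava-hf/LLaVA-NeXT-Video-7B-hf")
print(type(config).__name__)  # LlavaNextVideoConfig
```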

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@zucchini-nlp

@zucchini-nlp (Member)

Oh, I see now. I changed hub models to underscore naming but didn't know transformers had it this way. IMO the best way here is to modify files on the hub to use model_type=llava-next-video. Feel free to open a PR on the hub :)

For AutoModel we don't support VLMs because they have no base model (i.e. without lm_head) and the ConditionalGeneration models should be loaded with AutoModelForVision2Seq.
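
A hedged illustration of the loading path the reviewer recommends (checkpoint name assumed for illustration):

```python
# VLMs like Llava-Next-Video have no base model for AutoModel to resolve to,
# so the ConditionalGeneration model is loaded through AutoModelForVision2Seq.
from transformers import AutoModelForVision2Seq

model = AutoModelForVision2Seq.from_pretrained("llava-hf/LLaVA-NeXT-Video-7B-hf")
```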

@TKONIY force-pushed the fix-llavanextvideo-modeltype branch from dd49f58 to 32f9ec0 on August 16, 2024 at 09:56
@TKONIY (Contributor, Author) commented on Aug 16, 2024

> Oh, I see now. I changed hub models to underscore naming but didn't know transformers had it this way. IMO the best way here is to modify files on the hub to use model_type=llava-next-video. Feel free to open a PR on the hub :)
>
> For AutoModel we don't support VLMs because they have no base model (i.e. without lm_head) and the ConditionalGeneration models should be loaded with AutoModelForVision2Seq.

Thanks! I have removed the AutoModel commit.

But I think model_type="llava_next_video" would be better, because the "llava_next" model type also uses underscore naming, and the model_type in the implementation files is already "llava_next_video"; the two need to be consistent. A rough sketch of the relevant mapping follows.
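
The excerpt below is reconstructed for illustration, not quoted verbatim from the transformers source tree:

```python
# Keeping underscore naming makes "llava_next_video" line up with the
# existing "llava_next" key in the auto-config mapping (illustrative excerpt):
CONFIG_MAPPING_NAMES_EXCERPT = {
    "llava": "LlavaConfig",
    "llava_next": "LlavaNextConfig",
    "llava_next_video": "LlavaNextVideoConfig",  # this PR's naming choice
}
```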

@zucchini-nlp (Member)

@TKONIY fair enough. Can you run make fix-copies to make the CI happy?

* Rename llava-next-video.md to llava_next_video.md to make it consistent with the implementation
@zucchini-nlp (Member) left a comment

LGTM, thanks! Retriggered the tests which were failing with a timeout.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@amyeroberts (Collaborator) left a comment

Thanks for fixing!

@amyeroberts merged commit a27182b into huggingface:main on Aug 16, 2024. 24 checks passed.
@TKONIY (Contributor, Author) commented on Sep 1, 2024

Dear @amyeroberts, do you know when this commit will land in a released version of Transformers? It matters because vLLM's video support can be merged into its main branch only once Transformers can correctly load Llava-Next-Video via AutoConfig, which this commit fixes, and the pending video support blocks other PRs in turn. It would be great if you could include this fix in the next release. Thank you!

@amyeroberts (Collaborator)

@TKONIY All commits currently merged into main will be part of the next minor release (but not necessarily a patch release). Our release schedule is monthly, so v4.45 will probably be this week or the next.
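
As a hypothetical illustration of how a downstream project could depend on this, assuming the fix ships in the v4.45 release mentioned above:

```python
# Hypothetical guard (the version floor is an assumption): require a
# transformers release expected to contain the Llava-Next-Video AutoConfig fix.
from packaging import version

import transformers

if version.parse(transformers.__version__) < version.parse("4.45.0"):
    raise RuntimeError("Llava-Next-Video AutoConfig support requires transformers >= 4.45")
```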

@TKONIY (Contributor, Author) commented on Sep 2, 2024

@amyeroberts Thanks for the information. Really appreciate your work!
