Integrate colqwen2.5 using colqwen2 modelling code #40600
Conversation
Nice job! 👍️ Please see my remarks below.
CI/CD tests are breaking because double quotes are used inside f-strings that are themselves delimited with double quotes. That error will disappear anyway, since we should avoid using f-strings inside loggers in general.
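For illustration, here is the pattern in question as a hypothetical snippet (not code from this PR): lazy %-style formatting defers building the message until the record is actually emitted, and it removes the quote-nesting problem entirely.

```python
import logging

logger = logging.getLogger(__name__)
checkpoint = "colqwen2.5"  # hypothetical value

# What the style checks flag: a double-quoted f-string containing double
# quotes (shown single-quoted here so it parses). It is also formatted
# eagerly, even when INFO logging is disabled.
logger.info(f'Converting checkpoint "{checkpoint}"')

# Preferred: lazy %-style formatting. The message is only built if the
# record is emitted, and there is no quote nesting to worry about.
logger.info('Converting checkpoint "%s"', checkpoint)
```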
(Three review comments on src/transformers/models/colqwen2/convert_colqwen2_weights_to_hf.py were resolved and are now outdated.)
Seems like document retrieval / VLM, so cc @zucchini-nlp and maybe @tomaarsen?
yonigozlan left a comment
Hello @sahil-kabir, thanks for contributing! It's very nice if we can support colqwen2.5 without needing to add a whole new model 🤗.
Could you add a mention of colqwen2.5 support in the documentation as well, in transformers/docs/source/en/model_doc/colqwen2.md?
It would also be great to add an integration test for colqwen2.5 like the one we have for colqwen2 in transformers/tests/models/colqwen2/test_modeling_colqwen2.py.
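A rough sketch of what such a test could look like, modeled on the existing colqwen2 integration tests; the checkpoint id and the assertion are placeholders, not the test that was actually added in this PR.

```python
import torch

from transformers import ColQwen2ForRetrieval, ColQwen2Processor


def test_colqwen2_5_integration():
    # Hypothetical converted checkpoint id (the PR uses a community endpoint).
    model_id = "Sahil-Kabir/colqwen2.5-v0.2-hf"
    model = ColQwen2ForRetrieval.from_pretrained(model_id, torch_dtype=torch.bfloat16)
    processor = ColQwen2Processor.from_pretrained(model_id)

    inputs = processor(text=["What is ColQwen2.5?"], return_tensors="pt")
    with torch.no_grad():
        embeddings = model(**inputs).embeddings

    # ColQwen2-style models emit one multi-vector embedding per token,
    # each of size embedding_dim (128 in the released checkpoints).
    assert embeddings.shape[-1] == model.config.embedding_dim
```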
```python
dtype, device = self._get_dtype_device()
pixel_values = pixel_values.to(dtype=dtype, device=device)
```
Not a big fan of this; it'd be great if we could avoid having use_qwen2_5 in the config. Let's just use the dtype and device of inputs_embeds:
```diff
-dtype, device = self._get_dtype_device()
-pixel_values = pixel_values.to(dtype=dtype, device=device)
+pixel_values = pixel_values.to(inputs_embeds.device, inputs_embeds.dtype)
```
BTW, in the qwen-2 vision tower we cast pixels to the correct dtype manually, so it is not needed here. Also, the LM and vision tower might be loaded with different dtypes and on different devices in specific cases :)
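To illustrate that second point with a hypothetical loading scenario (the checkpoint id and device map are examples, not from this PR): with a custom device_map the vision tower and the language model can land on different devices, so a single dtype/device taken from next(self.parameters()) is not representative.

```python
import torch
from transformers import ColQwen2ForRetrieval

# Hypothetical sharded load: vision tower on GPU 0, everything else on GPU 1.
# The module path "vlm.visual" follows the ColQwen2 layout referenced above.
model = ColQwen2ForRetrieval.from_pretrained(
    "vidore/colqwen2-v1.0-hf",  # example checkpoint id
    torch_dtype=torch.bfloat16,
    device_map={"vlm.visual": 0, "": 1},
)
# In this situation, casting pixel_values to inputs_embeds.device/.dtype
# (as in the suggestion above) is the robust choice.
```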
@zucchini-nlp Did I understand correctly that this line could be removed entirely and it should work anyway?
@sahil-kabir Maybe worth a quick try. 😉
yep, correct
```python
def _get_dtype_device(self) -> tuple[str, str]:
    if self.config.use_qwen2_5:
        parameters = next(self.vlm.visual.parameters())
    else:
        parameters = next(self.parameters())
    dtype, device = parameters.dtype, parameters.device
    return dtype, device
```
No need for that then
```diff
-def _get_dtype_device(self) -> tuple[str, str]:
-    if self.config.use_qwen2_5:
-        parameters = next(self.vlm.visual.parameters())
-    else:
-        parameters = next(self.parameters())
-    dtype, device = parameters.dtype, parameters.device
-    return dtype, device
```
```python
config = ColQwen2Config(
    vlm_config=original_config,
    embedding_dim=128,  # hardcoded in the original model
    use_qwen2_5=use_qwen2_5,
```
Instantiate a qwen2_5 config instead.
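A minimal sketch of that suggestion, assuming the conversion script's use_qwen2_5 CLI flag: select the VLM config class up front rather than storing a flag in ColQwen2Config.

```python
from transformers import ColQwen2Config, Qwen2_5_VLConfig, Qwen2VLConfig

use_qwen2_5 = True  # would come from the conversion script's CLI flag

# Pick the right VLM config class instead of branching on a config field later.
vlm_config_cls = Qwen2_5_VLConfig if use_qwen2_5 else Qwen2VLConfig
original_config = vlm_config_cls.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")

config = ColQwen2Config(
    vlm_config=original_config,
    embedding_dim=128,  # hardcoded in the original model
)
```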
Hey @yonigozlan, integration tests and documentation are updated. I have details on what I did to make this work in the PR description above. Going to undraft this now 👍
yonigozlan left a comment
Nice!
Very last thing to do would be to add an integration test directly against the original implementation. Other than that, LGTM!
@tonywu71 Do you think we could add the converted checkpoint to the vidore org and here?
Now waiting on @ArthurZucker or @Cyrilvallez final approval to get this merged!
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@yonigozlan Should we also add the code and clear instructions for merging adapter weights either to
run-slow: colqwen2
This comment contains run-slow, running the specified jobs: models: ['models/colqwen2']
Cyrilvallez left a comment
LGTM! Thanks a lot, no diff to the original model 🤗
@yonigozlan, feel free to merge once everyone is happy and the tests are green! 🤗
Yes absolutely! Users should be able to convert the original colqwen2.5 weights to the Transformers weights using
@antonioloison and @QuentinJGMace have admin rights on the repository and carried out the initial work on ColPali; can you check with them please? 🙏🏼
@yonigozlan confirmed the endpoint matches the original implementation and tests are added. I don't have access to a GPU with CUDA support (I have cpu and mps), so I set
Hey @sahil-kabir! Sorry for the delay, I just updated the integration tests with the values I get when running on our CI hardware.
@bot /style
Style bot fixed some files and pushed the changes.
[For maintainers] Suggested jobs to run (before merge): run-slow: colqwen2
yonigozlan left a comment
Thanks for adding the conversion script @sahil-kabir ! Waiting for the CI to pass and I'll merge :)
* adding option for 2.5
* minor - arg in conversion script
* getting started on modelling.py
* minor - shouldve been using modular
* adressing comments + fixing datatype/device _get method
* minor
* commiting suggestion

Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>

* docs + first test
* ruff fix
* minor fix
* ruff fix
* model fix
* model fix
* fine-grained check, with a hardcoded score from the original Hf implementation.
* minor ruff
* update tests values with CI hardware
* adding 2.5 to conversion script
* Apply style fixes

---------

Co-authored-by: Sahil Kabir <sahilkabir@Sahils-MacBook-Pro.local>
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
Co-authored-by: yonigozlan <yoni.gozlan@huggingface.co>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Main contributor: @sahil-kabir
Guided by: @PavloFesenko
What does this PR do?
Fixes #39549 ("Is there plan to integrate ColQwen2.5 into Transformers?")
A PR was made for this earlier but was closed because colqwen2.5 could be integrated without adding a whole new class for it. This PR allows weights for colqwen2.5 to be loaded using the existing colqwen2 class.
Edit:
Looks like passing in the weights for colqwen2.5 and the config for Qwen2.5 works on its own. All I'm adding now is an integration test and documentation.
I made an endpoint with colqwen2.5 weights by merging the adapter weights from `vidore/colqwen2.5-v0.2` into `Qwen/Qwen2.5-VL-3B-Instruct` (as per the documentation on the model card for `vidore/colqwen2.5-v0.2`) and pushing those weights up to `Sahil-Kabir/colqwen2.5-v0.2-hf`. I also moved some extra config files from `vidore/colqwen2.5-v0.2` over to `Sahil-Kabir/colqwen2.5-v0.2`. I've added an integration test that uses my endpoint and shows that it works on the basic tests; the test I wrote is passing.
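For reference, a rough sketch of how such a merge can be done, assuming the adapter is a standard PEFT LoRA adapter that applies cleanly on top of the base Qwen2.5-VL model (the actual colqwen2.5 adapter may require the colpali-engine model class instead; the repo ids are the ones mentioned above):

```python
from peft import PeftModel
from transformers import Qwen2_5_VLForConditionalGeneration

# Load the base VLM, attach the LoRA adapter, then fold the deltas in.
base = Qwen2_5_VLForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")
model = PeftModel.from_pretrained(base, "vidore/colqwen2.5-v0.2")
merged = model.merge_and_unload()  # bakes the adapter weights into the base model

merged.push_to_hub("Sahil-Kabir/colqwen2.5-v0.2-hf")  # target repo mentioned above
```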
Who can review?
Hey @yonigozlan, with minimal code changes it looks like colqwen2.5 can be imported using the colqwen2 class. By running

```bash
!python transformers/src/transformers/models/colqwen2/convert_colqwen2_weights_to_hf.py --model_id cjkasbdkjnlakb/colqwen2.5-v0.2-merged --original_vlm_name_or_path Qwen/Qwen2.5-VL-3B-Instruct --output_dir ./colqwen2_5 --using_qwen2_5
```

I'm able to get a model that seems to have similar outputs to the 2.5 model in the colpali-engine repo. @PavloFesenko and I are working on figuring out where one should import the colqwen2.5 adapter weights from, plus tests and documentation, for now.