[Model][VLM] Add Qwen2-VL model support #7905

Merged 44 commits from add_qwen2_vl_new into main on Sep 11, 2024.

Commits (44):
0a648b2  Add support to Qwen2-VL. (fyabc, Aug 23, 2024)
320df57  Merge branch 'refs/heads/main' into add_qwen2_vl_new (fyabc, Aug 26, 2024)
7f96df8  Reformat (fyabc, Aug 27, 2024)
fbf2b8b  Merge branch 'refs/heads/main' into add_qwen2_vl_new (fyabc, Aug 27, 2024)
bcaff4f  Update transformers link. (fyabc, Aug 27, 2024)
f2185bf  Bugfix of mrope_input_positions in model_runner.py. (fyabc, Aug 27, 2024)
60448cb  Rename pixel_values_video to pixel_values_videos in qwen2_vl.py. (fyabc, Aug 27, 2024)
71a77b1  Fix the bug of MultiModalInputs.batch() when passing different modali… (fyabc, Aug 27, 2024)
60c4cbd  Fix the bug when running OpenAI-compatible API server. (fyabc, Aug 27, 2024)
e29ff54  Merge branch 'refs/heads/main' into add_qwen2_vl_new (fyabc, Aug 29, 2024)
ddb7138  Refactor qwen2_vl.py based on review comments. (fyabc, Aug 29, 2024)
14fe12a  reformat (fyabc, Aug 29, 2024)
89def23  reformat (fyabc, Aug 29, 2024)
e721e60  Fix the bug of model_is_mrope in model_runner.py. (fyabc, Aug 29, 2024)
d66d167  fix type hints in qwen2_vl.py (fyabc, Aug 29, 2024)
acd85ed  Update mm input processors according to new MultiModalInput.batch() i… (fyabc, Aug 29, 2024)
8d762c6  Merge branch 'refs/heads/main' into add_qwen2_vl_new (fyabc, Aug 30, 2024)
87ba5ed  Fix SamplerOutput. (fyabc, Aug 30, 2024)
cda300a  Fix bug of quantization. (fyabc, Aug 30, 2024)
da03a3f  Bugfix of type hints in qwen2_vl.py. (fyabc, Aug 31, 2024)
25fb189  reformat. (fyabc, Aug 31, 2024)
d01530d  Merge branch 'main' into add_qwen2_vl_new (ywang96, Sep 1, 2024)
faebfe4  fix typo from resolving conflict (ywang96, Sep 1, 2024)
e492e53  Merge branch 'refs/heads/main' into add_qwen2_vl_new (fyabc, Sep 2, 2024)
2e87db7  Bugfix in qwen2_vl.py. (fyabc, Sep 2, 2024)
39a1069  Adding xformers implementation (fyabc, Sep 5, 2024)
855c78b  Fix bug of attn_bias in xformers implementation (fyabc, Sep 5, 2024)
091983f  Fix bug in xformers implementation, and add backend check in vision a… (fyabc, Sep 6, 2024)
b406571  Merge branch 'refs/heads/main' into add_qwen2_vl_new (fyabc, Sep 6, 2024)
7739588  Bugfix in qwen2_vl.py. (fyabc, Sep 6, 2024)
5bab9ba  Bugfix in qwen2_vl.py. (fyabc, Sep 6, 2024)
4587346  reformat. (fyabc, Sep 6, 2024)
ffad79f  Refactor MRotaryEmbedding. (fyabc, Sep 6, 2024)
9e7a946  Merge branch 'refs/heads/main' into add_qwen2_vl_new (fyabc, Sep 9, 2024)
d527417  Add "video" into ModalityStr. (fyabc, Sep 9, 2024)
6f3116c  Add Qwen2-VL examples. (fyabc, Sep 9, 2024)
386f302  Optimizer Qwen2-VL input processor. Update document. (fyabc, Sep 10, 2024)
c64c217  Update model notes and requirements-common.txt. (fyabc, Sep 10, 2024)
6bdefd6  Update model notes. (fyabc, Sep 10, 2024)
33dd048  Skip loading model (DarkLight1337, Sep 11, 2024)
369ce7d  Merge branch 'main' into add_qwen2_vl_new (DarkLight1337, Sep 11, 2024)
282c66a  format (DarkLight1337, Sep 11, 2024)
14ef94d  Increase `max_model_len` to fit the original image (DarkLight1337, Sep 11, 2024)
09b7a4f  Merge branch 'main' into add_qwen2_vl_new (DarkLight1337, Sep 11, 2024)
Changes from 1 commit

Commit 5bab9bae04196df8b45143d2defd8ac8e8524dbc: Bugfix in qwen2_vl.py.
fyabc committed Sep 6, 2024

9 changes: 2 additions & 7 deletions in vllm/model_executor/models/qwen2_vl.py
@@ -206,14 +206,9 @@ def __init__(
         # For Volta and Turing GPUs, use xformers instead.
         device_available = current_platform.get_device_capability()[0] >= 8
         if device_available:
-            if spec := importlib.util.find_spec("flash_attn"):
-                flash_attn = importlib.util.module_from_spec(spec)
-                flash_attn_available = hasattr(flash_attn,
-                                               "flash_attn_varlen_func")
-            else:
-                flash_attn_available = False
+            from transformers.utils import is_flash_attn_2_available
 
-            if flash_attn_available:
+            if is_flash_attn_2_available():
                 self._use_flash_attn = True
             else:
                 logger.warning(
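For context, this hunk replaces a hand-rolled importlib probe for flash_attn with transformers' `is_flash_attn_2_available()` helper when deciding which attention backend the vision encoder should use. Below is a minimal, self-contained sketch of the resulting selection logic; `select_vision_attn_backend` and its `capability_major` argument are illustrative stand-ins for the check done inside the model's `__init__`, not vLLM's actual API.

```python
# Sketch (illustrative, not vLLM's actual API) of the backend selection
# this commit simplifies: prefer FlashAttention 2 on Ampere (SM >= 8.0)
# when the package is usable, otherwise fall back to xformers.
import logging

from transformers.utils import is_flash_attn_2_available

logger = logging.getLogger(__name__)


def select_vision_attn_backend(capability_major: int) -> str:
    """Pick the attention backend for the vision encoder.

    capability_major is the CUDA compute capability major version,
    e.g. current_platform.get_device_capability()[0] in vLLM.
    """
    if capability_major >= 8:  # Ampere or newer
        if is_flash_attn_2_available():
            return "flash_attn"
        logger.warning(
            "flash-attn is not available; falling back to xformers.")
    # For Volta and Turing GPUs (SM < 8.0), use xformers instead.
    return "xformers"
```

The helper verifies that a sufficiently recent FlashAttention 2 package is actually importable, which is more robust than probing by hand for a `flash_attn_varlen_func` attribute on a module spec.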