Updates in `VisionLanguageCollator` and in `coco_captions` #563
Linked issues:

OPE-353 Make sure HuggingFace vision models are supported with the lema platform
List of model architectures supported by HuggingFace that we could support. Models that were tested with this script:

OPE-354 [bug] Make sure llava, blip-2 (and others) work with simple vision language datasets
For now, they work with a dummy template.
```python
images = []
text_inputs = []
for item in batch:
    for required_key in (_PIXEL_VALUES_KEY, _INPUT_IDS_KEY):
```
I'm not sure these are necessarily always required -- a vision/language model can handle text-only inputs (e.g., a follow-up to an answer) and image-only inputs (e.g., captioning).
Added a TODO to reconsider this. Note that this PR just raises a better error message; it doesn't change the validation condition.
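The stricter error reporting discussed above could look roughly like the following sketch. The key names match the constants in the diff excerpt, but `validate_batch_item` and the exact error wording are assumptions for illustration, not the PR's actual code.

```python
# Hypothetical sketch of per-item validation with a descriptive error message.
_PIXEL_VALUES_KEY = "pixel_values"
_INPUT_IDS_KEY = "input_ids"


def validate_batch_item(item: dict, index: int) -> None:
    """Raise a descriptive error if a required key is missing from a batch item."""
    for required_key in (_PIXEL_VALUES_KEY, _INPUT_IDS_KEY):
        if required_key not in item:
            raise ValueError(
                f"Batch item {index} is missing the required key "
                f"'{required_key}'. Available keys: {sorted(item.keys())}"
            )


# Example: a text-only item (no pixel_values) triggers the descriptive error.
try:
    validate_batch_item({"input_ids": [1, 2, 3]}, index=0)
except ValueError as e:
    print(e)
```

Listing the available keys in the message makes it much easier to see whether a dataset produced a text-only or image-only example, which is exactly the case the review comment above raises.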
- Update `coco_captions` to load image bytes from `example["image"]["bytes"]` if available (i.e., prefer to use `IMAGE_BINARY` over `IMAGE_PATH` when possible)
- Raise more descriptive error messages if one of the expected keys is missing in examples/items respectively
- Add `minimal_multimodal_training.py` to the VSCode launch config
- Misc minor clean-ups

Tested with the local `coco_captions` dataset.

Towards OPE-353
Fixes OPE-354