Labels
Big Model Inference (Problems related to the Big Model Inference capabilities provided by Accelerate), Good Second Issue (Issues that are more difficult to do than "Good First" issues - give it a try if you want!)
Description
Feature request
Support for device_map = 'auto'
so that the VideoMAE models can be run with Int8 mixed precision. For reproducibility, here is what I get when I run the following in a Colab notebook (w/ GPU) with accelerate and bitsandbytes installed:
from transformers import AutoModelForVideoClassification
model_name = 'MCG-NJU/videomae-base-finetuned-ssv2'  # Example checkpoint
model = AutoModelForVideoClassification.from_pretrained(model_name, load_in_8bit=True, device_map='auto')
Which gives the following error message:
Overriding torch_dtype=None with `torch_dtype=torch.float16` due to requirements of `bitsandbytes` to enable model loading in mixed int8. Either pass torch_dtype=torch.float16 or don't pass this argument at all to remove this warning.
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <cell line: 4>:4 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py:471 in │
│ from_pretrained │
│ │
│ 468 │ │ │ ) │
│ 469 │ │ elif type(config) in cls._model_mapping.keys(): │
│ 470 │ │ │ model_class = _get_model_class(config, cls._model_mapping) │
│ ❱ 471 │ │ │ return model_class.from_pretrained( │
│ 472 │ │ │ │ pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, │
│ 473 │ │ │ ) │
│ 474 │ │ raise ValueError( │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py:2703 in from_pretrained │
│ │
│ 2700 │ │ │ ) │
│ 2701 │ │ │ │
│ 2702 │ │ │ if model._no_split_modules is None: │
│ ❱ 2703 │ │ │ │ raise ValueError(f"{model.__class__.__name__} does not support `device_m │
│ 2704 │ │ │ no_split_modules = model._no_split_modules │
│ 2705 │ │ │ if device_map not in ["auto", "balanced", "balanced_low_0", "sequential"]: │
│ 2706 │ │ │ │ raise ValueError( │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: VideoMAEForVideoClassification does not support `device_map='auto'` yet.
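For context, the ValueError comes from a check in from_pretrained: dispatching with device_map='auto' requires the model class to declare which submodules must never be split across devices, via a `_no_split_modules` class attribute, which the VideoMAE classes apparently do not set yet. A minimal, self-contained sketch of that mechanism (class and module names here are illustrative, not the actual transformers code):

```python
# Sketch of the device_map='auto' gate seen in the traceback above.
# A model class must declare _no_split_modules before dispatch can proceed.

class PreTrainedModelSketch:
    _no_split_modules = None  # base default: support not declared

    @classmethod
    def from_pretrained_with_device_map(cls, device_map):
        if cls._no_split_modules is None:
            raise ValueError(
                f"{cls.__name__} does not support `device_map='{device_map}'` yet."
            )
        # Real code would now compute a per-device placement that keeps each
        # listed module on a single device; here we just return an instance.
        return cls()


class VideoMAESketchBefore(PreTrainedModelSketch):
    pass  # no _no_split_modules set -> raises, as in the traceback


class VideoMAESketchAfter(PreTrainedModelSketch):
    # Hypothetical fix in the spirit of #22207: name the transformer block
    # that must stay whole on one device. "VideoMAELayer" is an assumption.
    _no_split_modules = ["VideoMAELayer"]


try:
    VideoMAESketchBefore.from_pretrained_with_device_map("auto")
except ValueError as e:
    print(e)  # mirrors the error reported above

model = VideoMAESketchAfter.from_pretrained_with_device_map("auto")  # succeeds
```

If the actual fix follows the same pattern, it would amount to adding such an attribute to the VideoMAE model classes, though the correct module name(s) to list would need to be confirmed against the real architecture.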
Motivation
I saw a similar issue #22018 which got resolved really quickly. Hoping that this won't be a lot of work to incorporate into the VideoMAE models 🙂
Your contribution
I would prefer if someone more familiar with the repo did this instead. It doesn't appear to be much work if the update is like #22207, but I didn't understand what that change did and don't currently have time to study the codebase.