Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs remain available until 30 days after the last update.
- `is_torchvision_available`, `is_timm_available`, `is_torchaudio_available`,
`is_torchao_available`, `is_accelerate_available` now return False when
torch is not installed, since all these packages require torch
- Add `@requires(backends=("torch",))` to `PI0Processor` (was missing,
causing the lazy module to crash on import without torch)
- Fix wrong availability guards: `is_vision_available` → `is_torchvision_available`
in pixtral processor, `is_torch_available` in smolvlm processor
- Wrap bare `import torch` / torchvision imports in `processing_sam3_video.py`
- Quote `torch.Tensor` in return type annotation of `tokenization_mistral_common.py`
- Wrap 66 `image_processing_pil_*.py` imports from torch-dependent counterparts
in try/except with ImagesKwargs fallbacks; quote `torch.Tensor` annotations
- Restore explicit `from transformers import *` check in CircleCI
`check_repository_consistency` job
Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
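The availability-gating change described above can be sketched roughly as follows. The function names mirror the real transformers helpers, but the implementation is a simplified stand-in, not the actual library code:

```python
import importlib.util


def _package_exists(name: str) -> bool:
    # simplified stand-in for transformers' real availability helpers
    return importlib.util.find_spec(name) is not None


def is_torch_available() -> bool:
    return _package_exists("torch")


def is_torchvision_available() -> bool:
    # torchvision requires torch, so report False when torch is missing;
    # per the commit above, the same guard applies to timm, torchaudio,
    # torchao and accelerate
    if not is_torch_available():
        return False
    return _package_exists("torchvision")
```

With this guard, a torch-less environment never even probes for torchvision, so downstream lazy-import machinery takes the no-torch path consistently.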
@vasqu could you review this? It needs to be fixed before we release.
if not is_torch_available():
    return False
Is this an infinite loop? :D Edit: no, sorry, it's torchao.
The PIL image processor changes are too fragile (break on make fix-repo).
Keep only the core fixes:
- is_torchvision/timm/torchaudio/torchao/accelerate_available() check torch
- CircleCI explicit import check
- tokenization_mistral_common.py torch.Tensor annotation
- processing_sam3_video.py conditional torch imports
- processing_pixtral.py/processing_smolvlm.py availability guard fixes
- PI0Processor @requires decorator

Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
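The try/except import fallback used for the PIL-only files looked roughly like this sketch. The module and class names below are hypothetical, for illustration only:

```python
try:
    # the torch-dependent sibling module fails to import when torch is absent
    from image_processing_foo_fast import FooImagesKwargs as ImagesKwargs  # hypothetical names
except ImportError:
    class ImagesKwargs(dict):
        """Minimal fallback so the PIL-only module still imports without torch."""
```

The fallback keeps the PIL-only module importable, which is exactly the kind of shim `make fix-repo` kept rewriting, hence the revert.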
[For maintainers] Suggested jobs to run (before merge): run-slow: aria, beit, bridgetower, conditional_detr, convnext, deepseek_vl, deepseek_vl_hybrid, deformable_detr, detr, donut, dpt, efficientloftr, efficientnet, eomt, ernie4_5_vl_moe, flava
    "torch",
    "torchvision",
),
lambda name, content: name.startswith("image_processing_") and "pil" in name: ("vision",),
`name.endswith("_fast")`: fast is now the default, don't gate on the suffix
@@ -345,6 +345,7 @@
    name for name in dir(dummy_torchvision_objects) if not name.startswith("_")
]
else:
torchvision depends on torch
second most important
lambda name, content: "tokenization_" in name and name.endswith("_fast"): ("tokenizers",),
lambda name, content: "image_processing_" in name and "TorchvisionBackend" in content: (
    "vision",
    "torch",
    "torchvision",
),
lambda name, content: name.startswith("image_processing_") and "TorchvisionBackend" in content: (
    "vision",
    "torch",
    "torchvision",
),
lambda name, content: name.startswith("image_processing_"): ("vision",),
lambda name, content: name.startswith("video_processing_"): ("vision", "torch", "torchvision"),
lambda name, content: "image_processing_" in name: ("vision",),
lambda name, content: "video_processing_" in name: ("vision", "torch", "torchvision"),
Most important change @LysandreJik: this is why it was failing, `name` is the full path, so `startswith` never matches.
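A minimal illustration of that bug (the path below is made up): because the matcher lambdas receive a full file path rather than a bare module name, a prefix check never matches, while a substring check does.

```python
# hypothetical full path, as passed to the matcher lambdas
name = "src/transformers/models/beit/image_processing_beit.py"

print(name.startswith("image_processing_"))  # old check: False, never matches a full path
print("image_processing_" in name)           # fixed check: True
```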
.circleci/config.yml
      - run:
          name: "Test import with all backends (torch + PIL + torchvision)"
          command: python -c "from transformers import *" || (echo '🚨 import failed with all backends. Fix unprotected imports!! 🚨'; exit 1)
      - run:
          name: "Test import with torch only (no PIL, no torchvision)"
          command: |
            uv pip uninstall Pillow torchvision -q
            python -c "from transformers import *" || (echo '🚨 import failed with torch only (no PIL). Fix unprotected imports!! 🚨'; exit 1)
            uv pip install -e ".[quality]" -q
      - run:
          name: "Test import with PIL only (no torch, no torchvision)"
          command: |
            uv pip uninstall torch torchvision torchaudio -q
            python -c "from transformers import *" || (echo '🚨 import failed with PIL only (no torch). Fix unprotected imports!! 🚨'; exit 1)
            uv pip install -e ".[quality]" -q
      - run:
          name: "Test import with torch + PIL, no torchvision"
          command: |
            uv pip uninstall torchvision -q
            python -c "from transformers import *" || (echo '🚨 import failed with torch+PIL but no torchvision. Fix unprotected imports!! 🚨'; exit 1)
            uv pip install -e ".[quality]" -q
      - run: make check-repository-consistency
cc @tarekziade and @ydshieh, we are missing a bunch of these checks to enforce that our `import *` holds with respect to our hard deps only.
good catch, yeah, the original check (moved to check-repository-consistency) only checked whether transformers is importable; we'll extend it
…age processors

PR huggingface#45029 added @requires(backends=("vision", "torch", "torchvision"))
to 67 PIL backend image_processing_pil_*.py files. This causes PIL backend
classes to become dummy objects when torchvision is not installed, making
AutoImageProcessor unable to find any working processor.

Fix: set @requires to ("vision",) for files that only need PIL, and
("vision", "torch") for files that also use torch directly. Also fix 5
modular source files so make fix-repo preserves the correct backends.

Fixes huggingface#45042

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
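Conceptually, the dummy-object behavior of @requires described in that commit works like the sketch below. The real transformers decorator is more elaborate; this is a simplified illustration of the mechanism only:

```python
import importlib.util


def _backend_available(backend: str) -> bool:
    # crude stand-in for transformers' per-backend availability checks
    return importlib.util.find_spec(backend) is not None


def requires(backends=()):
    missing = [b for b in backends if not _backend_available(b)]

    def decorator(cls):
        if not missing:
            return cls
        # Replace the real class with a dummy that errors on instantiation.
        # This is why over-strict backends made AutoImageProcessor unable
        # to find any working processor: every candidate was a dummy.
        class Dummy:
            def __init__(self, *args, **kwargs):
                raise ImportError(f"{cls.__name__} requires {missing}")
        Dummy.__name__ = cls.__name__
        return Dummy

    return decorator
```

Declaring `("vision", "torch", "torchvision")` on a class that only needs PIL therefore turns it into a dummy on any torchvision-less install, which is exactly the regression being fixed.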
* first part of the fix
* fix torch imports
* revert
* fix: make from transformers import * work without torch
- `is_torchvision_available`, `is_timm_available`, `is_torchaudio_available`,
`is_torchao_available`, `is_accelerate_available` now return False when
torch is not installed, since all these packages require torch
- Add `@requires(backends=("torch",))` to `PI0Processor` (was missing,
causing the lazy module to crash on import without torch)
- Fix wrong availability guards: `is_vision_available` → `is_torchvision_available`
in pixtral processor, `is_torch_available` in smolvlm processor
- Wrap bare `import torch` / torchvision imports in `processing_sam3_video.py`
- Quote `torch.Tensor` in return type annotation of `tokenization_mistral_common.py`
- Wrap 66 `image_processing_pil_*.py` imports from torch-dependent counterparts
in try/except with ImagesKwargs fallbacks; quote `torch.Tensor` annotations
- Restore explicit `from transformers import *` check in CircleCI
`check_repository_consistency` job
Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
* up
* style of this
* revert: remove src/models changes, keep only core import fixes
The PIL image processor changes are too fragile (break on make fix-repo).
Keep only the core fixes:
- is_torchvision/timm/torchaudio/torchao/accelerate_available() check torch
- CircleCI explicit import check
- tokenization_mistral_common.py torch.Tensor annotation
- processing_sam3_video.py conditional torch imports
- processing_pixtral.py/processing_smolvlm.py availability guard fixes
- PI0Processor @requires decorator
Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
* nit
* the mega quidproquo
* use rquires(backend
* more pil fixes
* fixes
* temp update
* up?
* is this it?
* style?
* revert a bunch of ai shit
* pi0 requires this
* revert some stuffs
* upd
* the fix
* yups
* ah
* up
* up
* fix
* yes?
* update
* up
* nits
* up
* up
* order
---------
Co-authored-by: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
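Quoting `torch.Tensor` in a return annotation, as done for tokenization_mistral_common.py, keeps the module importable without torch because string annotations are not evaluated at import time. A minimal sketch (the function name and body here are hypothetical):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    import torch  # seen only by static type checkers, never at runtime


def to_tensor(values: list) -> "torch.Tensor":  # quoted: safe without torch
    # deferred import: torch is only needed if the function is actually called
    import torch

    return torch.tensor(values)
```

Without the quotes, evaluating the annotation at definition time would raise NameError in a torch-less environment; with them, the module imports cleanly and torch is only required on use.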
…age processors (#45045)

* [Bugfix] Remove incorrect torchvision requirement from PIL backend image processors

PR #45029 added @requires(backends=("vision", "torch", "torchvision")) to 67
PIL backend image_processing_pil_*.py files. This causes PIL backend classes
to become dummy objects when torchvision is not installed, making
AutoImageProcessor unable to find any working processor.

Fix: set @requires to ("vision",) for files that only need PIL, and
("vision", "torch") for files that also use torch directly. Also fix 5
modular source files so make fix-repo preserves the correct backends.

Fixes #45042

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [Bugfix] Remove redundant @requires(backends=("vision",)) from PIL backends

Per reviewer feedback: the vision-only @requires decorator is redundant for
PIL backend classes since the PilBackend base class already handles this.

- Remove @requires(backends=("vision",)) from 43 PIL backend files
- Remove unused `requires` import from 38 files (Category A)
- Keep @requires(backends=("vision", "torch")) on method-level decorators
  (Category B: 5 files)

* update
* remove torch when its not necessary
* remove if typechecking
* fix import shinanigans
* marvellous that's how we protect torch :)
* beit is torchvisionbackend
* more import cleanup
* fiixup
* fix-repo
* update
* style
* fixes
* up
* more
* fix repo
* up
* update
* fix imports
* style
* fix check copies
* arf
* converter up
* fix?
* fix copies
* fix for func
* style
* ignore
* type

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Arthur <arthur.zucker@gmail.com>
What does this PR do?
The release workflow is failing: `from transformers import *` crashes when torch is not installed.