Fix NameError: name 'nn' is not defined when PyTorch < 2.4#43822

Open
veeceey wants to merge 3 commits into huggingface:main from veeceey:fix/issue-43784-nn-not-defined

Conversation


@veeceey veeceey commented Feb 7, 2026

What does this PR do?

Fixes #43784

When PyTorch < 2.4 is installed, transformers v5.x disables PyTorch by making `is_torch_available()` return False, so the conditional `import torch.nn as nn` (line 42) is skipped. However, `nn.Module` is used in type annotations throughout accelerate.py (function signatures at lines 62, 191, 527, 540, 597). Those annotations are evaluated at module import time, causing:

NameError: name 'nn' is not defined
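
The failure mode can be sketched in a few lines; the module layout below (`_TORCH_AVAILABLE`, `shard`) is a toy stand-in, not the actual accelerate.py:

```python
_TORCH_AVAILABLE = False  # stand-in for is_torch_available() returning False

if _TORCH_AVAILABLE:
    import torch.nn as nn  # skipped, so the name `nn` is never bound

try:
    # Without `from __future__ import annotations`, the annotation
    # `nn.Module` is evaluated eagerly when the `def` statement runs.
    def shard(model: nn.Module) -> None:
        pass
except NameError as err:
    msg = str(err)
    print(msg)  # -> name 'nn' is not defined
```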

The Fix

Add `from __future__ import annotations` (PEP 563) so that all annotations are lazily stored as string literals instead of being evaluated at import time. `nn.Module` in a function signature then stays a string and is never resolved during import.
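
A minimal sketch of the effect of the future import; `_TORCH_AVAILABLE` and `shard` are illustrative stand-ins, not the actual transformers code:

```python
from __future__ import annotations  # PEP 563: lazy, string-valued annotations

_TORCH_AVAILABLE = False  # stand-in for is_torch_available()
if _TORCH_AVAILABLE:
    import torch.nn as nn

# The annotation is stored as the string "nn.Module" and never evaluated,
# so defining this function no longer raises NameError at import time.
def shard(model: nn.Module) -> None:
    pass

print(shard.__annotations__["model"])  # -> nn.Module
```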

This follows the same pattern already used in sibling modules:

  • transformers/integrations/tensor_parallel.py (line 14)
  • transformers/integrations/fsdp.py

The runtime isinstance() calls using nn.Module and nn.Parameter (lines 745, 801) are unaffected because they are inside functions that are only called when PyTorch is available.
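
Why function bodies are safe can be shown with a small sketch (`is_trainable` and `_TORCH_AVAILABLE` are illustrative stand-ins, not transformers code):

```python
from __future__ import annotations

_TORCH_AVAILABLE = False  # stand-in for is_torch_available()
if _TORCH_AVAILABLE:
    import torch.nn as nn

def is_trainable(obj) -> bool:
    # Names in a function body are resolved when the function is *called*,
    # not when it is defined, so referencing `nn` here is harmless at import.
    return isinstance(obj, (nn.Module, nn.Parameter))

# Defining the function succeeds; calling it without torch would raise NameError.
print(callable(is_trainable))  # -> True
```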

Reproduction

As reported in the issue, this can be reproduced with PyTorch < 2.4 (e.g., 2.2.1):

```python
from sentence_transformers import SentenceTransformer
# NameError: name 'nn' is not defined
```

Before submitting

  • This PR fixes a bug (NameError on import with old PyTorch)
  • Follows existing patterns in the codebase (tensor_parallel.py, fsdp.py)
  • Minimal, single-line change with no side effects


veeceey commented Feb 8, 2026

Fixed the failing CI check.

The issue was that a previous commit removed the null check for `TokenizersBackend` before calling `.from_pretrained()`. When the tokenizers library is not available, `TokenizersBackend` is set to None (see line 50 of tokenization_auto.py), which caused a TypeError when the code tried to call `None.from_pretrained()`.

The fix restores the null check and properly falls back to tokenizer_class_from_name() when:

  1. TokenizersBackend is None (tokenizers library not installed)
  2. TokenizersBackend.from_pretrained() raises an exception

This ensures the code works correctly with or without the tokenizers library installed.

The unconditional import of `from safetensors.torch import save_file` at the module level causes import failures when PyTorch is not available, as `safetensors.torch` depends on PyTorch being installed.

This follows the pattern already used in other files:
- `testing_utils.py`: imports safetensors.torch inside `if is_torch_available()`
- `trainer_utils.py`: imports safetensors.torch inside `if is_torch_available()`

The runtime usage of `save_file` (line 495) is only called from functions that use torch.Tensor, so it's safe to make this import conditional.
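
The guarded-import pattern can be sketched as follows; `_dependency_available` is a local stand-in for transformers' availability helpers, and the extra safetensors check just keeps the sketch runnable in any environment:

```python
import importlib.util

def _dependency_available(name: str) -> bool:
    # stand-in for helpers like transformers' is_torch_available()
    return importlib.util.find_spec(name) is not None

# Import the torch-backed safetensors API only when torch is present.
if _dependency_available("torch") and _dependency_available("safetensors"):
    from safetensors.torch import save_file
    save = save_file
else:
    save = None  # callers check before use, mirroring the lazy call site

print(save is None or callable(save))  # -> True
```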

veeceey force-pushed the fix/issue-43784-nn-not-defined branch from b69100f to bba530f on February 8, 2026, 13:05

Development

Successfully merging this pull request may close these issues.

NameError: name 'nn' is not defined when importing sentence-transformers with latest transformers
