System Info
- `transformers` version: 4.32.0.dev0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.2 (cpu)
- Jax version: 0.4.14
- JaxLib version: 0.4.14
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
Who can help?
Information
- The official example scripts
- My own modified scripts
Tasks
- An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
- My own task or dataset (give details below)
Reproduction
Running
```python
from transformers import AutoProcessor

AutoProcessor.from_pretrained('facebook/mms-300m')
```
Produces:
```
/usr/local/lib/python3.10/dist-packages/transformers/models/wav2vec2/processing_wav2vec2.py:53: FutureWarning: Loading a tokenizer inside Wav2Vec2Processor from a config that does not include a `tokenizer_class` attribute is deprecated and will be removed in v5. Please add `'tokenizer_class': 'Wav2Vec2CTCTokenizer'` attribute to either your `config.json` or `tokenizer_config.json` file to suppress this warning:
  warnings.warn(
---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/models/wav2vec2/processing_wav2vec2.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
     50         try:
---> 51             return super().from_pretrained(pretrained_model_name_or_path, **kwargs)
     52         except OSError:

(7 frames elided)

OSError: Can't load tokenizer for 'facebook/mms-300m'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'facebook/mms-300m' is the correct path to a directory containing all relevant files for a Wav2Vec2CTCTokenizer tokenizer.

During handling of the above exception, another exception occurred:

OSError                                   Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, cache_dir, force_download, local_files_only, token, revision, *init_inputs, **kwargs)
   1828
   1829         if all(full_file_name is None for full_file_name in resolved_vocab_files.values()):
-> 1830             raise EnvironmentError(
   1831                 f"Can't load tokenizer for '{pretrained_model_name_or_path}'. If you were trying to load it from "
   1832                 "'https://huggingface.co/models', make sure you don't have a local directory with the same name. "

OSError: Can't load tokenizer for 'facebook/mms-300m'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'facebook/mms-300m' is the correct path to a directory containing all relevant files for a Wav2Vec2CTCTokenizer tokenizer.
```
Expected behavior
The correct processor object should be loaded (a Wav2Vec2FeatureExtractor, built from the checkpoint's preprocessor_config.json). The error message implies that a tokenizer is mandatory for all MMS models, which isn't necessarily the case (in particular, when only loading the pretrained base models).
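Until this is addressed, one possible workaround (a sketch, assuming the base checkpoint ships a preprocessor_config.json but no tokenizer files) is to load the feature extractor directly instead of going through AutoProcessor, which for Wav2Vec2 checkpoints tries to assemble a full Wav2Vec2Processor including the missing tokenizer:

```python
# Sketch of a workaround: bypass AutoProcessor and load only the feature
# extractor, since the base MMS checkpoint has no tokenizer files.
from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/mms-300m")
print(type(feature_extractor).__name__)  # Wav2Vec2FeatureExtractor
```

For the fine-tuned CTC checkpoints, which do ship tokenizer files, AutoProcessor should work as-is; the failure here seems specific to the pretrained base repos.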