Releases: huggingface/huggingface_hub
[v0.21.3] Hot-fix: ModelHubMixin pass config when `__init__` accepts `**kwargs`
More details in #2058.
Full Changelog: v0.21.2...v0.21.3
v0.21.2: hot-fix: [HfFileSystem] Fix glob with pattern without wildcards
See #2056. (+#2050 shipped as v0.21.1).
Full Changelog: v0.21.0...v0.21.2
v0.21.0: dataclasses everywhere, file-system, PyTorchModelHubMixin, serialization and more.
Discuss the release in our Community Tab. Feedback welcome!! 🤗
🖇️ Dataclasses everywhere!
All objects returned by the `HfApi` client are now dataclasses!
In the past, returned objects were variously dataclasses, typed dictionaries, untyped dictionaries, or even plain classes. This is now all harmonized with the goal of improving the developer experience (see the short example after the list below).
Kudos to the community for implementing and testing the whole harmonization process. Thanks again for the contributions!
- Use dataclasses for all objects returned by HfApi #1911 by @Ahmedniz1 in #1974
- Updating HfApi objects to use dataclass by @Ahmedniz1 in #1988
- Dataclasses for objects returned hf api by @NouamaneELGueddarii in #1993
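Since returned objects are now plain dataclasses, the standard `dataclasses` utilities apply to them. A minimal sketch (the field access shown is illustrative):
>>> from dataclasses import asdict, is_dataclass
>>> from huggingface_hub import HfApi
>>> model = HfApi().model_info("gpt2")
>>> is_dataclass(model)
True
>>> asdict(model)["id"]  # dataclass helpers now work out of the box
'gpt2'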
💾 FileSystem
The `HfFileSystem` class implements the `fsspec` interface, allowing you to load and write files with a filesystem-like interface (see the example after the list below). The interface is heavily used by the `datasets` library, and this release further improves the efficiency and robustness of the integration.
- Pass revision in path to AbstractBufferedFile init by @albertvillanova in #1948
- [HfFileSystem] Fix `rm` on branch by @lhoestq in #1957
- Retry fetching data on 502 error in `HfFileSystem` by @mariosasko in #1981
- Add HfFileSystemStreamFile by @lhoestq in #1967
- [HfFileSystem] Copy non lfs files by @lhoestq in #1996
- Add `HfFileSystem.url` method by @mariosasko in #2027
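As a quick, hedged sketch of the filesystem-like interface (the dataset repo id below is hypothetical):
>>> from huggingface_hub import HfFileSystem
>>> fs = HfFileSystem()
>>> fs.ls("datasets/my-username/my-dataset", detail=False)  # list repo files as if browsing a directory
['datasets/my-username/my-dataset/.gitattributes', 'datasets/my-username/my-dataset/data.csv']
>>> with fs.open("datasets/my-username/my-dataset/data.csv", "r") as f:  # read a file in place
...     header = f.readline()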
🧩 Pytorch Hub Mixin
The `PyTorchModelHubMixin` class lets you upload ANY PyTorch model to the Hub in a few lines of code. More precisely, it can be inherited by any `nn.Module` class to add the `from_pretrained`, `save_pretrained` and `push_to_hub` helpers to your class. It handles serialization and deserialization of weights and configs for you and enables download counts on the Hub. A short sketch follows the list below.
With this release, we've fixed 2 pain points holding users back from adopting this mixin:
- Configs are now better handled. The mixin automatically detects if the base class defines a config, saves it on the Hub and then injects it at load time, either as a dictionary or a dataclass depending on the base class's expectations.
- Weights are now saved as `.safetensors` files instead of pytorch pickles for safety reasons. Loading from previous pytorch pickles is still supported, but we are moving toward completely deprecating them (on a mid- to long-term timeline).
- Better config support in ModelHubMixin by @Wauplin in #2001
- Use safetensors by default for `PyTorchModelHubMixin` by @bmuskalla in #2033
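A minimal sketch of the mixin in action (the model class, config values and repo id are illustrative assumptions):
>>> import torch.nn as nn
>>> from huggingface_hub import PyTorchModelHubMixin

>>> class MyModel(nn.Module, PyTorchModelHubMixin):
...     def __init__(self, config: dict):
...         super().__init__()
...         self.config = config  # detected by the mixin and saved alongside the weights
...         self.layer = nn.Linear(config["hidden_size"], config["hidden_size"])

>>> model = MyModel(config={"hidden_size": 256})
>>> model.save_pretrained("my-model")  # weights stored as .safetensors
>>> model.push_to_hub("my-username/my-model")  # hypothetical repo id
>>> reloaded = MyModel.from_pretrained("my-username/my-model")  # config injected at load time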
✨ InferenceClient improvements
The audio-to-audio task is now supported by both the `InferenceClient` and the `AsyncInferenceClient`!
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> audio_output = client.audio_to_audio("audio.flac")
>>> for i, item in enumerate(audio_output):
...     with open(f"output_{i}.flac", "wb") as f:
...         f.write(item["blob"])
- Added audio to audio in inference client by @Ahmedniz1 in #2020
Also fixed a few things:
- Fix intolerance for new field in TGI stream response: 'index' by @danielpcox in #2006
- Fix optional model in tabular tasks by @Wauplin in #2018
- Added best_of to non-TGI ignored parameters by @dopc in #1949
📤 Model serialization
With the aim of harmonizing repo structures and file serialization on the Hub, we added a new module `serialization` with a first helper `split_state_dict_into_shards` that takes a state dict and splits it into shards. The code implementation is mostly taken from `transformers` and aims to be reused by other libraries in the ecosystem. It seamlessly supports `torch`, `tensorflow` and `numpy` weights, and can be easily extended to other frameworks.
This is a first step in the harmonization process and more loading/saving helpers will be added soon.
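As a hedged sketch of what sharding could look like for `torch` weights (the framework-specific function name, its parameters and the return fields are assumptions; check the `serialization` module reference for the actual API):
>>> import torch
>>> from huggingface_hub.serialization import split_torch_state_dict_into_shards  # assumed variant name
>>> state_dict = {"linear.weight": torch.randn(1024, 1024), "linear.bias": torch.randn(1024)}
>>> split = split_torch_state_dict_into_shards(state_dict, max_shard_size="5GB")
>>> split.is_sharded  # small state dict -> a single shard is enough
False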
📚 Documentation
🌐 Translations
The community is actively working to translate the `huggingface_hub` documentation into other languages. We now have docs available in Simplified Chinese (here) and in French (here) to help democratize good machine learning!
- [i18n-CN] Translated some files to simplified Chinese #1915 by @2404589803 in #1916
- Update .github workflow to build cn docs on PRs by @Wauplin in #1931
- [i18n-FR] Translated files in french and reviewed them by @JibrilEl in #2024
Docs misc
- Document `base_model` in modelcard metadata by @Wauplin in #1936
- Update the documentation of add_collection_item by @FremyCompany in #1958
- Docs[i18n-en]: added pkgx as an installation method to the docs by @michaelessiet in #1955
- Added `hf_transfer` extra into `setup.py` and `docs/` by @jamesbraza in #1970
- Documenting CLI default for `download --repo-type` by @jamesbraza in #1986
- Update repository.md by @xmichaelmason in #2010
Docs fixes
- Fix URL in `get_safetensors_metadata` docstring by @Wauplin in #1951
- Fix grammar by @Anthonyg5005 in #2003
- Fix doc by @jordane95 in #2013
- typo fix by @Decryptu in #2035
🛠️ Misc improvements
Creating a commit with an invalid README will fail early instead of uploading all LFS files before failing to commit.
Added a `revision_exists` helper, working similarly to `repo_exists` and `file_exists`:
>>> from huggingface_hub import revision_exists
>>> revision_exists("google/gemma-7b", "float16")
True
>>> revision_exists("google/gemma-7b", "not-a-revision")
False
`InferenceEndpoint.wait(...)` now raises an error if the endpoint is in a failed state.
Improved progress bar when downloading a file
Other stuff:
- added will not echo message to the login token message by @vtrenton in #1925
- Raise if repo is disabled by @Wauplin in #1965
- Fix timezone in datetime parsing by @Wauplin in #1982
- retry on any 5xx on upload by @Wauplin in #2026
💔 Breaking changes
- Classes `ModelFilter` and `DatasetFilter` are deprecated when listing models and datasets, in favor of a simpler API that lets you pass the parameters directly to `list_models` and `list_datasets`.
>>> from huggingface_hub import list_models, ModelFilter
# use
>>> list_models(language="zh")
# instead of
>>> list_models(filter=ModelFilter(language="zh"))
Cleaner, right? `ModelFilter` and `DatasetFilter` will still be supported until the `v0.24` release.
- In the inference client, `ModelStatus.compute_type` is no longer a string but a dictionary with more detailed info...
0.20.3 hot-fix: Fix HfFolder login when env variable not set
This patch release fixes an issue when retrieving the locally saved token using `huggingface_hub.HfFolder.get_token`. For the record, this is a "planned to be deprecated" method, in favor of `huggingface_hub.get_token`, which is more robust and versatile. The issue came from a breaking change introduced in #1895, meaning only `0.20.x` is affected.
For more details, please refer to #1966.
Full Changelog: v0.20.2...v0.20.3
0.20.2 hot-fix: Fix concurrency issues in google colab login
A concurrency issue when using `userdata.get` to retrieve the `HF_TOKEN` secret led to deadlocks when downloading files in parallel. This hot-fix release fixes the issue by using a global lock before trying to get the token from the secrets vault. More details in #1953.
Full Changelog: v0.20.1...v0.20.2
0.20.1: hot-fix: Fix circular import
This hot-fix release fixes a circular import error that happened when importing the `login` or `logout` helpers from `huggingface_hub`.
Related PR: #1930
Full Changelog: v0.20.0...v0.20.1
v0.20.0: Authentication, speed, safetensors metadata, access requests and more.
(Discuss the release in our Community Tab. Feedback welcome!! 🤗)
🔐 Authentication
Authentication has been greatly improved in Google Colab. The best way to authenticate in a Colab notebook is to define a `HF_TOKEN` secret in your personal secrets. When a notebook tries to reach the Hub, a pop-up will ask you if you want to share the `HF_TOKEN` secret with this notebook, as an opt-in mechanism. This way, there is no need to call `huggingface_hub.login` and copy-paste your token anymore! 🔥🔥🔥
In addition to the Google Colab integration, the login guide has been revisited to focus on security. It is recommended to authenticate either using `huggingface_hub.login` or the `HF_TOKEN` environment variable, rather than passing a hardcoded token in your scripts. Check out the new guide here, and the short sketch after the list below.
- Login/authentication enhancements by @Wauplin in #1895
- Catch `SecretNotFoundError` in google colab login by @Wauplin in #1912
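A minimal sketch of the recommended interactive flow (no hardcoded token in the script):
>>> from huggingface_hub import login
>>> login()  # prompts for a token (as a widget in notebooks); alternatively, set the HF_TOKEN environment variable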
🏎️ Faster HfFileSystem
`HfFileSystem` is a pythonic, fsspec-compatible file interface to the Hugging Face Hub. The implementation has been greatly improved to optimize `fs.find` performance.
Here is a quick benchmark with the bigcode/the-stack-dedup dataset:
| | v0.19.4 | v0.20.0 |
|---|---|---|
| `hffs.find("datasets/bigcode/the-stack-dedup", detail=False)` | 46.2s | 1.63s |
| `hffs.find("datasets/bigcode/the-stack-dedup", detail=True)` | 47.3s | 24.2s |
- Faster `HfFileSystem.find` by @mariosasko in #1809
- Faster `HfFileSystem.glob` by @lhoestq in #1815
- Fix common path in `_ls_tree` by @lhoestq in #1850
- Remove `maxdepth` param from `HfFileSystem.glob` by @mariosasko in #1875
- [HfFileSystem] Support quoted revisions in path by @lhoestq in #1888
- Deprecate `HfApi.list_files_info` by @mariosasko in #1910
🚪 Access requests API (gated repos)
Models and datasets can be gated to monitor who's accessing the data you are sharing. You can also filter access with a manual approval of the requests. Access requests can now be managed programmatically using `HfApi`. This can be useful, for example, if you have specific user-screening or compliance requirements, or if you want to condition access to a model on completing a payment flow.
Check out this guide to learn more about gated repos.
>>> from huggingface_hub import list_pending_access_requests, accept_access_request
# List pending requests
>>> requests = list_pending_access_requests("meta-llama/Llama-2-7b")
>>> requests
[
    AccessRequest(
        username='clem',
        fullname='Clem 🤗',
        email='***',
        timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
        status='pending',
        fields=None,
    ),
    ...
]
# Accept Clem's request
>>> accept_access_request("meta-llama/Llama-2-7b", "clem")
🔍 Parse Safetensors metadata
Safetensors is a simple, fast and secure format to save tensors in a file. Its advantages make it the preferred format to host weights on the Hub. Thanks to its specification, it is possible to parse the file metadata on-the-fly. `HfApi` now provides `get_safetensors_metadata`, a helper to get safetensors metadata from a repo.
# Parse repo with single weights file
>>> from huggingface_hub import get_safetensors_metadata
>>> metadata = get_safetensors_metadata("bigscience/bloomz-560m")
>>> metadata
SafetensorsRepoMetadata(
    metadata=None,
    sharded=False,
    weight_map={'h.0.input_layernorm.bias': 'model.safetensors', ...},
    files_metadata={'model.safetensors': SafetensorsFileMetadata(...)}
)
>>> metadata.files_metadata["model.safetensors"].metadata
{'format': 'pt'}
Other improvements
List and filter collections
You can now list collections on the Hub. You can filter them to return only collections containing a given item, or created by a given author.
>>> from huggingface_hub import list_collections
>>> collections = list_collections(item="models/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF", sort="trending", limit=5)
>>> for collection in collections:
... print(collection.slug)
teknium/quantized-models-6544690bb978e0b0f7328748
AmeerH/function-calling-65560a2565d7a6ef568527af
PostArchitekt/7bz-65479bb8c194936469697d8c
gnomealone/need-to-test-652007226c6ce4cdacf9c233
Crataco/favorite-7b-models-651944072b4fffcb41f8b568
- add list_collections endpoint, solves #1835 by @ceferisbarov in #1856
- fix list collections sort values by @Wauplin in #1867
- Warn about truncation when listing collections by @Wauplin in #1873
Respect `.gitignore`
`upload_folder` now respects `.gitignore` files!
Previously, it was possible to filter which files should be uploaded from a folder using the `allow_patterns` and `ignore_patterns` parameters. This can now be done automatically by simply creating a `.gitignore` file in your repo, as shown in the sketch after the list below.
- Respect `.gitignore` file in commits by @Wauplin in #1868
- Remove respect_gitignore parameter by @Wauplin in #1876
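A small hedged sketch (the folder layout and repo id are hypothetical):
>>> from huggingface_hub import upload_folder
>>> # With a .gitignore at the root of ./my-folder (e.g. containing "*.log"),
>>> # matching files are skipped automatically, without any ignore_patterns:
>>> upload_folder(folder_path="my-folder", repo_id="my-username/my-repo")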
Robust uploads
Uploading LFS files has also gotten more robust, with a retry mechanism if a transient error happens while uploading to S3.
Target language in InferenceClient.translation
`InferenceClient.translation` now supports `src_lang`/`tgt_lang` for applicable models.
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.translation("My name is Sarah Jessica Parker but you can call me Jessica", model="facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX", tgt_lang="fr_XX")
"Mon nom est Sarah Jessica Parker mais vous pouvez m'appeler Jessica"
>>> client.translation("My name is Sarah Jessica Parker but you can call me Jessica", model="facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX", tgt_lang="es_XX")
'Mi nombre es Sarah Jessica Parker pero puedes llamarme Jessica'
- add language support to translation client, solves #1763 by @ceferisbarov in #1869
Support source in reported EvalResult
`EvalResult` now supports `source_name` and `source_link` to provide a custom source for a reported result.
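A hedged sketch of reporting a sourced result (the field names follow this note and should be checked against the `EvalResult` reference; all values are illustrative):
>>> from huggingface_hub import EvalResult
>>> result = EvalResult(
...     task_type="text-classification",
...     dataset_type="imdb",
...     dataset_name="IMDb",
...     metric_type="accuracy",
...     metric_value=0.95,
...     source_name="My internal benchmark",  # hypothetical source
...     source_link="https://example.com/my-eval",  # hypothetical link
... )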
🛠️ Misc
Fetch all pull request refs with `list_repo_refs`.
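A hedged sketch (the `include_pull_requests` flag and the `pull_requests` attribute are assumptions inferred from this release's changes):
>>> from huggingface_hub import list_repo_refs
>>> refs = list_repo_refs("openai/whisper-large-v3", include_pull_requests=True)
>>> refs.pull_requests  # PR refs listed alongside branches and tags
[...]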
Filter discussions when listing them with `get_repo_discussions`.
# List opened PR from "sanchit-gandhi" on model repo "openai/whisper-large-v3"
>>> from huggingface_hub import get_repo_discussions
>>> discussions = get_repo_discussions(
... repo_id="openai/whisper-large-v3",
... author="sanchit-gandhi",
... discussion_type="pull_request",
... discussion_status="open",
... )
- ✨ Add filters to HfApi.get_repo_discussions by @SBrandeis in #1845
New field `createdAt` for `ModelInfo`, `DatasetInfo` and `SpaceInfo`.
It's now possible to create an inference endpoint running on a custom docker image (typically: a TGI container).
# Start an Inference Endpoint running Zephyr-7b-beta on TGI
>>> from huggingface_hub import create_inference_endpoint
>>> endpoint = create_inference_endpoint(
... "aws-zephyr-7b-beta-0486",
... repository="HuggingFaceH4/zephyr-7b-beta",
... framework="pytorch",
... task="text-generation",
... accelerator="gpu",
... vendor="aws",
... region="us-east-1",
... type="protected",
... instance_size="medium",
... instance_type="g5.2xlarge",
... custom_image={
... "health_route": "/health",
... "env": {
... "MAX_BATCH_PREFILL_TOKENS": "2048",
... "MAX_INPUT_LENGTH": "1024",
... "MAX_TOTAL_TOKENS": "1512",
... "MODEL_ID": "/repository"
... },
... "url": "ghcr.io/huggingface/text-generation-inference:1.1.0",
... },
... )
Upload CLI: create branch when revision does not exist
🖥️ Environment variables
`huggingface_hub.constants.HF_HOME` has been made a public constant (see reference).
Offline mode has gotten more consistent. If `HF_HUB_OFFLINE` is set, any HTTP call to the Hub will fail. The fallback mechanism in `snapshot_download` has been refactored to be aligned with the `hf_hub_download` workflow. If offline mode is activated (or a connection error happens) and the files are already in the cache, `snapshot_download` returns the corresponding snapshot directory. See the sketch after the list below.
- Respect HF_HUB_OFFLINE for every http call by @Wauplin in #1899
- Improve `snapshot_download` offline mode by @Wauplin in #1913
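A hedged sketch of the offline fallback (the model must already be in the local cache; the returned path is illustrative):
>>> import os
>>> os.environ["HF_HUB_OFFLINE"] = "1"  # set before importing huggingface_hub
>>> from huggingface_hub import snapshot_download
>>> snapshot_download("gpt2")  # no HTTP call: the cached snapshot directory is returned
'~/.cache/huggingface/hub/models--gpt2/snapshots/...'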
The `DO_NOT_TRACK` environment variable is now respected to deactivate telemetry calls. This is similar to `HF_HUB_DISABLE_TELEMETRY` but not specific to Hugging Face.
📚 Documentation
- Document more list repos behavior by @Wauplin in #1823
- [i18n-KO] 🌐 Translated `git_vs_http.md` to Korean by @heuristicwave in #1862
Doc fixes
v0.19.4 - Hot-fix: do not fail if pydantic install is corrupted
On Python 3.8, it is fairly easy to get a corrupted install of pydantic (more specifically, pydantic 2.x cannot run if tensorflow is installed, because of an incompatible requirement on `typing_extensions`). Since `pydantic` is an optional dependency of `huggingface_hub`, we do not want to crash at `huggingface_hub` import time if the pydantic install is corrupted. However, this was the case because of how imports are made in `huggingface_hub`. This hot-fix release fixes the bug: if pydantic is not correctly installed, we only raise a warning and continue as if it were not installed at all.
Related PR: #1829
Full Changelog: v0.19.3...v0.19.4
v0.19.3 - Hot-fix: pin `pydantic<2.0` on Python3.8
Hot-fix release after #1828.
In `0.19.0` we loosened the pydantic requirement to accept both 1.x and 2.x, since `huggingface_hub` is compatible with both. However, this started to cause issues when installing both `huggingface_hub[inference]` and `tensorflow` in a Python 3.8 environment. The problem comes from the fact that on Python 3.8, pydantic 2.x and tensorflow are not compatible: tensorflow depends on `typing_extensions<=4.5.0` while pydantic 2.x requires `typing_extensions>=4.6`. This causes an `ImportError: cannot import name 'TypeAliasType' from 'typing_extensions'` when importing `huggingface_hub`.
As a side note, tensorflow support for Python 3.8 has been dropped since 2.14.0, so this issue should affect fewer and fewer users over time.
Full Changelog: v0.19.2...v0.19.3
v0.19.2 - Patch: expose HF_HOME in constants
Not a hot-fix.
In #1786 (already released in `0.19.0`), we harmonized the environment variables in the HF ecosystem, with the goal of propagating this harmonization to other HF libraries. In this work, we forgot to expose `HF_HOME` as a constant value that can be reused, especially by `transformers` or `datasets`. This release fixes this (see #1825).
Full Changelog: v0.19.1...v0.19.2